Sample records for value-based performance measures

  1. Chief Complaint-Based Performance Measures: A New Focus For Acute Care Quality Measurement

    PubMed Central

    Griffey, Richard T.; Pines, Jesse M.; Farley, Heather L.; Phelan, Michael P.; Beach, Christopher; Schuur, Jeremiah D.; Venkatesh, Arjun K.

    2014-01-01

    Performance measures are increasingly important to guide meaningful quality improvement efforts and value-based reimbursement. Populations included in most current hospital performance measures are defined by recorded diagnoses using International Classification of Diseases, Ninth Revision (ICD-9) codes in administrative claims data. While the diagnosis-centric approach allows the assessment of disease-specific quality, it fails to measure one of the primary functions of emergency department (ED) care, which involves diagnosing, risk-stratifying, and treating patients’ potentially life-threatening conditions based on symptoms (i.e., chief complaints). In this paper we propose chief complaint-based quality measures as a means to enhance the evaluation of quality and value in emergency care. We discuss the potential benefits of chief complaint-based measures, describe opportunities to mitigate challenges, propose an example measure set, and present several recommendations to advance this paradigm in ED-based performance measurement. PMID:25443989

  2. Performance-based measures and behavioral ratings of executive function in diagnosing attention-deficit/hyperactivity disorder in children.

    PubMed

    Tan, Alexander; Delgaty, Lauren; Steward, Kayla; Bunner, Melissa

    2018-04-16

    Deficits in real-world executive functioning (EF) are a frequent characteristic of attention-deficit/hyperactivity disorder (ADHD). However, the predictive value of using performance-based and behavioral rating measures of EF when diagnosing ADHD remains unclear. The current study investigates the use of performance-based EF measures and a parent-report questionnaire with established ecological validity and clinical utility when diagnosing ADHD. Participants included 21 healthy controls, 21 ADHD-primary inattentive, and 21 ADHD-combined type subjects aged 6-15 years. A brief neuropsychological battery was administered to each subject including common EF assessment measures. Significant differences were not found between groups on most performance-based EF measures, whereas significant differences (p < 0.05) were found on most parent-report behavioral rating scales. Furthermore, performance-based measures did not predict group membership above chance levels. Results further support differences in predictive value of EF performance-based measures compared to parent-report questionnaires when diagnosing ADHD. Further research must investigate the relationship between performance-based and behavioral rating measures when assessing EF in ADHD.

  3. Examining the Perceived Value of Integration of Earned Value Management with Risk Management-Based Performance Measurement Baseline

    ERIC Educational Resources Information Center

    Shah, Akhtar H.

    2014-01-01

    Many projects fail despite the use of evidence-based project management practices such as Performance Measurement Baseline (PMB), Earned Value Management (EVM) and Risk Management (RM). Although previous researchers have found that integrated project management techniques could be more valuable than the same techniques used by themselves, these…

  4. Comparison of measurement- and proxy-based Vs30 values in California

    USGS Publications Warehouse

    Yong, Alan K.

    2016-01-01

    This study was prompted by the recent availability of a significant amount of openly accessible measured VS30 values and the desire to investigate the trend of using proxy-based models to predict VS30 in the absence of measurements. Comparisons between measured and model-based values were performed. The measured data included 503 VS30 values collected from various projects for 482 seismographic station sites in California. Six proxy-based models—employing geologic mapping, topographic slope, and terrain classification—were also considered. Included was a new terrain class model based on the Yong et al. (2012) approach but recalibrated with updated measured VS30 values. Using the measured VS30 data as the metric for performance, the predictive capabilities of the six models were determined to be statistically indistinguishable. This study also found three models that tend to underpredict VS30 at lower velocities (NEHRP Site Classes D–E) and overpredict at higher velocities (Site Classes B–C).

  5. Measuring Value Added in Higher Education: A Proposed Methodology for Developing a Performance Indicator Based on the Economic Value Added to Graduates

    ERIC Educational Resources Information Center

    Rodgers, Timothy

    2007-01-01

    The 2003 UK higher education White Paper suggested that the sector needed to re-examine the potential of the value added concept. This paper describes a possible methodology for developing a performance indicator based on the economic value added to graduates. The paper examines how an entry-quality-adjusted measure of a graduate's…

  6. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  7. Linking Quality and Spending to Measure Value for People with Serious Illness.

    PubMed

    Ryan, Andrew M; Rodgers, Phillip E

    2018-03-01

    Healthcare payment is rapidly evolving to reward value by measuring and paying for quality and spending performance. Rewarding value for the care of seriously ill patients presents unique challenges. To evaluate the state of current efforts to measure and reward value for the care of seriously ill patients, we performed a PubMed search of articles related to (1) measures of spending for people with serious illness and (2) linking spending and quality measures and rewarding performance for the care of people with serious illness. We limited our search to U.S.-based studies published in English between January 1, 1960, and March 31, 2017. We supplemented this search by identifying public programs and other known initiatives that linked quality and spending for the seriously ill and extracted key program elements. Our search on linking spending and quality measures and rewarding performance yielded 277 articles. We identified three public programs that currently link measures of quality and spending, or are likely to within the next few years: the Oncology Care Model; the Comprehensive End-Stage Renal Disease Care Model; and Home Health Value-Based Purchasing. Models that link quality and spending consist of four core components: (1) measuring quality, (2) measuring spending, (3) the payment adjustment model, and (4) the linking/incentive model. We found that current efforts to reward value for seriously ill patients are targeted at specific patient populations, do not broadly encourage the use of palliative care, and have not closely aligned quality and spending measures related to palliative care. We develop recommendations for policymakers and stakeholders about how measures of spending and quality can be balanced in value-based payment programs.

  8. Linking Quality and Spending to Measure Value for People with Serious Illness

    PubMed Central

    Rodgers, Phillip E.

    2018-01-01

    Abstract Background: Healthcare payment is rapidly evolving to reward value by measuring and paying for quality and spending performance. Rewarding value for the care of seriously ill patients presents unique challenges. Objective: To evaluate the state of current efforts to measure and reward value for the care of seriously ill patients. Design: We performed a PubMed search of articles related to (1) measures of spending for people with serious illness and (2) linking spending and quality measures and rewarding performance for the care of people with serious illness. We limited our search to U.S.-based studies published in English between January 1, 1960, and March 31, 2017. We supplemented this search by identifying public programs and other known initiatives that linked quality and spending for the seriously ill and extracted key program elements. Results: Our search on linking spending and quality measures and rewarding performance yielded 277 articles. We identified three public programs that currently link measures of quality and spending—or are likely to within the next few years—the Oncology Care Model; the Comprehensive End-Stage Renal Disease Care Model; and Home Health Value-Based Purchasing. Models that link quality and spending consist of four core components: (1) measuring quality, (2) measuring spending, (3) the payment adjustment model, and (4) the linking/incentive model. We found that current efforts to reward value for seriously ill patients are targeted at specific patient populations, do not broadly encourage the use of palliative care, and have not closely aligned quality and spending measures related to palliative care. Conclusions: We develop recommendations for policymakers and stakeholders about how measures of spending and quality can be balanced in value-based payment programs. PMID:29091529

  9. Faustmann and the forestry tradition of outcome-based performance measures

    Treesearch

    Peter J. Ince

    1999-01-01

    The concept of land expectation value developed by Martin Faustmann may serve as a paradigm for outcome-based performance measures in public forest management if the concept of forest equity value is broadened to include social and environmental benefits and costs, and sustainability. However, anticipation and accurate evaluation of all benefits and costs appears to...

  10. A Research on Performance Measurement Based on Economic Valued-Added Comprehensive Scorecard

    NASA Astrophysics Data System (ADS)

    Chen, Qin; Zhang, Xiaomei

    With economic development, traditional performance measurement, which relies mainly on financial indicators, can no longer meet practical needs. To ensure that performance measurement best serves business goals, this paper proposes an Economic Value-Added (EVA) Comprehensive Scorecard based on an analysis of the strengths and shortcomings of EVA and the Balanced Scorecard (BSC). We used the Analytic Hierarchy Process (AHP) to build comparison matrices and solve for the weights of the EVA Comprehensive Scorecard. Finally, the weights identify the factors with the greatest influence on enterprise value.
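
    The AHP weighting step described in this abstract can be sketched as follows. The pairwise comparison matrix and its three criteria are illustrative stand-ins, not judgments taken from the paper; the consistency index shown is the standard companion check in AHP.

```python
# Sketch of the AHP weighting step described above.  The pairwise
# comparison matrix and its criteria are illustrative stand-ins,
# not judgments taken from the paper.

def ahp_weights(A, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by power iteration; returns weights (summing to 1) and the
    consistency index CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # Estimate lambda_max from A.w / w to compute the consistency index.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    return w, (lam - n) / (n - 1)

# Hypothetical judgments: criterion 1 is moderately preferred over
# criterion 2 and strongly preferred over criterion 3.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, ci = ahp_weights(A)
print([round(x, 3) for x in w], round(ci, 3))
```

    A low consistency index (conventionally CI divided by a random index below 0.1) signals that the pairwise judgments are coherent enough for the derived weights to be meaningful.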

  11. Looking Under the Streetlight? A Framework for Differentiating Performance Measures by Level of Care in a Value-Based Payment Environment

    PubMed Central

    Van Such, Monica B.; Nesse, Robert E.; Dilling, James A.; Swensen, Stephen J.; Thompson, Kristine M.; Orlowski, Janis M.; Santrach, Paula J.

    2017-01-01

    The majority of quality measures used to assess providers and hospitals are based on easily obtained data, focused on a few dimensions of quality, and developed mainly for primary/community care and population health. While this approach supports efforts focused on addressing the triple aim of health care, many current quality report cards and assessments do not reflect the breadth or complexity of many referral center practices. In this article, the authors highlight the differences between population health efforts and referral care and address issues related to value measurement and performance assessment. They discuss why measures may need to differ across the three levels of care (primary/community care, secondary care, complex care) and illustrate the need for further risk adjustment to eliminate referral bias. With continued movement toward value-based purchasing, performance measures and reimbursement schemes need to reflect the increased level of intensity required to provide complex care. The authors propose a framework to operationalize value measurement and payment for specialty care, and they make specific recommendations to improve performance measurement for complex patients. Implementing such a framework to differentiate performance measures by level of care involves coordinated efforts to change both policy and operational platforms. An essential component of this framework is a new model that defines the characteristics of patients who require complex care and standardizes metrics that incorporate those definitions. PMID:28353502

  12. Looking Under the Streetlight? A Framework for Differentiating Performance Measures by Level of Care in a Value-Based Payment Environment.

    PubMed

    Naessens, James M; Van Such, Monica B; Nesse, Robert E; Dilling, James A; Swensen, Stephen J; Thompson, Kristine M; Orlowski, Janis M; Santrach, Paula J

    2017-07-01

    The majority of quality measures used to assess providers and hospitals are based on easily obtained data, focused on a few dimensions of quality, and developed mainly for primary/community care and population health. While this approach supports efforts focused on addressing the triple aim of health care, many current quality report cards and assessments do not reflect the breadth or complexity of many referral center practices. In this article, the authors highlight the differences between population health efforts and referral care and address issues related to value measurement and performance assessment. They discuss why measures may need to differ across the three levels of care (primary/community care, secondary care, complex care) and illustrate the need for further risk adjustment to eliminate referral bias. With continued movement toward value-based purchasing, performance measures and reimbursement schemes need to reflect the increased level of intensity required to provide complex care. The authors propose a framework to operationalize value measurement and payment for specialty care, and they make specific recommendations to improve performance measurement for complex patients. Implementing such a framework to differentiate performance measures by level of care involves coordinated efforts to change both policy and operational platforms. An essential component of this framework is a new model that defines the characteristics of patients who require complex care and standardizes metrics that incorporate those definitions.

  13. Perturbing engine performance measurements to determine optimal engine control settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable, causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
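
    The perturb-and-adjust loop described in this record resembles extremum-seeking control. Below is a minimal sketch under assumed names: the performance model, gains, and perturbation signal are hypothetical stand-ins, not details from the patent record.

```python
# Minimal sketch of the perturb-and-adjust loop described above, in the
# style of extremum-seeking control.  The performance model, gains, and
# perturbation signal are hypothetical stand-ins, not from the record.
import math

def performance(u):
    # Hypothetical engine performance variable, peaking at u = 2.0.
    return 5.0 - (u - 2.0) ** 2

def tune(u0, steps=20000, a=0.2, k=2.0, dt=0.01, tau=1.0):
    """Drive the control parameter u toward the performance peak by
    injecting a small sinusoidal perturbation and demodulating the
    measured response."""
    u = u0
    y_lp = performance(u0)               # low-pass (slow) estimate of y
    for i in range(steps):
        t = i * dt
        d = a * math.sin(t)              # artificial perturbation
        y = performance(u + d)           # perturbed measurement
        y_lp += dt / tau * (y - y_lp)    # track the slow component of y
        u += k * (y - y_lp) * d * dt     # demodulated gradient step
    return u

u = tune(u0=0.0)
print(round(u, 2))
```

    Multiplying the high-passed measurement by the perturbation signal yields, on average, an estimate of the local performance gradient, so the loop climbs toward the peak regardless of which side it starts on.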

  14. 76 FR 26489 - Medicare Program; Hospital Inpatient Value-Based Purchasing Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ...This final rule implements a Hospital Inpatient Value-Based Purchasing program (Hospital VBP program or the program) under section 1886(o) of the Social Security Act (the Act), under which value-based incentive payments will be made in a fiscal year to hospitals that meet performance standards with respect to a performance period for the fiscal year involved. The program will apply to payments for discharges occurring on or after October 1, 2012, in accordance with section 1886(o) (as added by section 3001(a) of the Patient Protection and Affordable Care Act, as amended by the Health Care and Education Reconciliation Act of 2010 (collectively known as the Affordable Care Act)). Scoring in the Hospital VBP program will be based on whether a hospital meets or exceeds the performance standards established with respect to the measures. By adopting this program, we will reward hospitals based on actual quality performance on measures, rather than simply reporting data for those measures.

  15. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  16. Value-based purchasing and hospital acquired conditions: are we seeing improvement?

    PubMed

    Spaulding, Aaron; Zhao, Mei; Haley, D Rob

    2014-12-01

    To determine if the Value-Based Purchasing performance scoring system correlates with hospital-acquired condition quality indicators. This study utilizes the following secondary data sources: the American Hospital Association (AHA) annual survey and the Centers for Medicare and Medicaid Services (CMS) Value-Based Purchasing and Hospital Acquired Conditions databases. Zero-inflated negative binomial regression was used to examine the effect of the CMS total performance score on counts of hospital-acquired conditions. Hospital structure variables including size, ownership, teaching status, payer mix, case mix, and location were utilized as control variables. The secondary data sources were merged into a single database using Stata 10. Total performance scores, which are used to determine whether hospitals should receive incentive money, do not correlate well with quality outcomes in the form of hospital-acquired conditions. Value-based purchasing does not appear to correlate with improved quality and patient safety as indicated by Hospital Acquired Condition (HAC) scores. This leads us to believe that either the total performance score does not measure what it should, or the quality outcome measurements do not reflect the quality that the total performance scores measure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. The current state of nursing performance measurement, public reporting, and value-based purchasing.

    PubMed

    Kurtzman, Ellen T; Dawson, Ellen M; Johnson, Jean E

    2008-08-01

    Over the last decade, there has been a substantial investment in holding health care providers accountable for the quality of care provided in hospitals and other settings of care. This investment has been realized through the proliferation of national policies that address performance measurement, public reporting, and value-based purchasing. Although nurses represent the largest segment of the health care workforce and despite their acknowledged role in patient safety and health care outcomes, they have been largely absent from policy setting in these areas. This article provides an analysis of current nursing performance measurement and public reporting initiatives and presents a summary of emerging trends in value-based purchasing, with an emphasis on activities in the United States. The article synthesizes issues of relevance to advancing the current climate for nursing quality and concludes with key issues for future policy setting.

  18. Sorghum and wheat differentially affect caecal microbiota and associated performance characteristics of meat chickens.

    PubMed

    Crisol-Martínez, Eduardo; Stanley, Dragana; Geier, Mark S; Hughes, Robert J; Moore, Robert J

    2017-01-01

    This study compared the effects of wheat- and sorghum-based diets on broiler chickens. The growth performance and caecal microbial community of chickens were measured and correlations between productivity and specific gut microbes were observed. Cobb broilers 15 days of age were individually caged and two dietary treatments were used, one with a wheat-based diet (n = 48) and another with a sorghum-based diet (n = 48). Growth performance measurements were taken over a 10 day period and samples for microbiota analysis were taken at the end of that period. Caecal microbiota was characterised by sequencing of 16S bacterial rRNA gene amplicons. Overall, the results indicated that a sorghum-based diet produced higher apparent metabolisable energy (AME) and body-weight gain (BWG) values in chickens, compared to a wheat-based diet. Nevertheless, sorghum-fed birds had higher feed conversion ratio (FCR) values than wheat-fed birds, possibly because of some anti-nutritional factors in sorghum. Further analyses showed that caecal microbial community was significantly associated with AME values, but microbiota composition differed between dietary treatments. A number of bacteria were individually correlated with growth performance measurements. Numerous OTUs assigned to strains of Lactobacillus crispatus and Lachnospiraceae, which were prevalent in sorghum-fed chickens, were correlated with high AME and BWG values, respectively. Additionally, a number of OTUs assigned to Clostridiales that were prevalent in wheat-fed chickens were correlated with low FCR values. Overall, these results suggest that between-diet variations in growth performance were partly associated with changes in the caecal microbiota.

  19. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Sean J; Alvarez, Paul

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  20. Feasibility study of palm-based fuels for hybrid rocket motor applications

    NASA Astrophysics Data System (ADS)

    Tarmizi Ahmad, M.; Abidin, Razali; Taha, A. Latif; Anudip, Amzaryi

    2018-02-01

    This paper describes a combined analysis of pure palm-based wax as a solid fuel for a hybrid rocket engine. The calorific value of pure palm wax was measured using a bomb calorimeter. An experimental rocket engine and a static test stand facility were established, and after initial measurement and calibration the procedures were repeated. The instrumentation installed allows measurement of fuel regression rates, oxidizer mass flow rates, and rocket motor thrust. Similar tests were also carried out with stearic acid (a palm oil by-product) mixed with nitrocellulose and beeswax. Calculated and experimental data show that useful regression rates and thrust can be achieved even with pure palm-based wax. Additionally, palm-based wax was blended with beeswax, which has a higher nominal melting temperature, to raise the softening point without affecting regression rate values. Calorific measurements and ballistic experiments were performed on this new fuel formulation, which promises propulsion applications over a wide range of temperatures.

  1. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, that is, a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task, and Study 2 consisted of a typical motor learning study in which participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task, and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error rather than by learning. We therefore suggest that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
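
    The SEM and MDC statistics described in this abstract follow standard formulas: SEM = SD_baseline x sqrt(1 - r), where r is the test-retest reliability index (e.g., an ICC), and MDC95 = 1.96 x sqrt(2) x SEM. A minimal sketch with hypothetical numbers:

```python
# Sketch of the SEM and MDC statistics described above, using
# hypothetical numbers (not data from the study).
import math

def sem(sd_baseline, reliability):
    """Standard error of measurement: baseline SD scaled by the
    test-retest reliability index (e.g., an ICC)."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc(sem_value, z=1.96):
    """Minimal detectable change at ~95% confidence (MDC95)."""
    return z * math.sqrt(2.0) * sem_value

# Hypothetical example: baseline SD of 10 score units, ICC = 0.90.
s = sem(10.0, 0.90)
threshold = mdc(s)
print(round(s, 2), round(threshold, 2))  # 3.16 8.77
```

    On these hypothetical numbers, an observed change smaller than about 8.8 score units could plausibly be measurement error rather than learning, which is exactly the distinction the study's analysis draws.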

  2. Missing value imputation for microarray data: a comprehensive comparison study and a web tool.

    PubMed

    Chiu, Chia-Chun; Chan, Shih-Yao; Wang, Chung-Ching; Wu, Wei-Sheng

    2013-01-01

    Microarray data are usually peppered with missing values due to various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed to improve the performance of microarray data analyses. Although many algorithms have been developed, there is still debate over the selection of the optimal algorithm. Studies comparing the performance of different algorithms remain far from comprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. In this paper, we performed a comprehensive comparison using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether datasets from different species have a different impact on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm depends mainly on the type of dataset, not on the species the samples come from. In addition to the statistical measure, two other measures with biological meaning are useful for reflecting the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices for handling missing values in most microarray datasets. In this work, we carried out a comprehensive comparison of algorithms for microarray missing value imputation. Based on such a comprehensive comparison, researchers can easily choose the optimal algorithm for their datasets. Moreover, new imputation algorithms can be compared with existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based, easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA determines the optimal algorithm for the users' data through a series of simulations, and the imputed results can then be downloaded for downstream data analyses.
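
    As one concrete example of the class of algorithms such comparisons cover, k-nearest-neighbour imputation fills each missing entry with the average of that column over the most similar rows. The sketch below is illustrative only, not the paper's own code, and the tiny matrix is hypothetical data.

```python
# Sketch of KNN imputation, a classic microarray baseline (illustrative
# only; not the paper's own code).  Missing entries are None.

def knn_impute(data, k=2):
    """Fill each None entry with the average of that column over the
    k nearest rows (mean squared distance on shared observed columns)."""
    filled = [row[:] for row in data]
    for i, row in enumerate(data):
        for j, v in enumerate(row):
            if v is not None:
                continue
            cands = []
            for i2, other in enumerate(data):
                if i2 == i or other[j] is None:
                    continue
                shared = [(a, b) for a, b in zip(row, other)
                          if a is not None and b is not None]
                if not shared:
                    continue
                d = sum((a - b) ** 2 for a, b in shared) / len(shared)
                cands.append((d, other[j]))
            cands.sort(key=lambda t: t[0])
            vals = [val for _, val in cands[:k]]
            filled[i][j] = sum(vals) / len(vals) if vals else None
    return filled

# Tiny hypothetical matrix (rows = genes, columns = arrays).
m = [[1.0, 2.0, None],
     [1.1, 2.1, 3.1],
     [0.9, 1.9, 2.9],
     [5.0, 6.0, 7.0]]
print(knn_impute(m)[0][2])  # averages the two most similar rows
```

    Local least squares imputation, which the study favours, refines this idea by regressing the target row on its nearest neighbours instead of simply averaging them.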

  3. Determining Risk of Falls in Community Dwelling Older Adults: A Systematic Review and Meta-analysis Using Posttest Probability.

    PubMed

    Lusardi, Michelle M; Fritz, Stacy; Middleton, Addie; Allison, Leslie; Wingood, Mariana; Phillips, Emma; Criss, Michelle; Verma, Sangita; Osborne, Jackie; Chui, Kevin K

    Falls and their consequences are significant concerns for older adults, caregivers, and health care providers. Identification of fall risk is crucial for appropriate referral to preventive interventions. Falls are multifactorial; no single measure is an accurate diagnostic tool. There is limited information on which history question, self-report measure, or performance-based measure, or combination of measures, best predicts future falls. First, to evaluate the predictive ability of history questions, self-report measures, and performance-based measures for assessing fall risk of community-dwelling older adults by calculating and comparing posttest probability (PoTP) values for individual test/measures. Second, to evaluate usefulness of cumulative PoTP for measures in combination. To be included, a study must have used fall status as an outcome or classification variable, have a sample size of at least 30 ambulatory community-living older adults (≥65 years), and track falls occurrence for a minimum of 6 months. Studies in acute or long-term care settings, as well as those including participants with significant cognitive or neuromuscular conditions related to increased fall risk, were excluded. Searches of Medline/PubMED and Cumulative Index of Nursing and Allied Health (CINAHL) from January 1990 through September 2013 identified 2294 abstracts concerned with fall risk assessment in community-dwelling older adults. Because the number of prospective studies of fall risk assessment was limited, retrospective studies that classified participants (faller/nonfallers) were also included. Ninety-five full-text articles met inclusion criteria; 59 contained necessary data for calculation of PoTP. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) was used to assess each study's methodological quality. Study design and QUADAS score determined the level of evidence. 
    Data for calculation of sensitivity (Sn), specificity (Sp), likelihood ratios (LR), and PoTP values were available for 21 of 46 measures used as search terms. An additional 73 history questions, self-report measures, and performance-based measures were used in included articles; PoTP values could be calculated for 35. Evidence tables including PoTP values were constructed for 15 history questions, 15 self-report measures, and 26 performance-based measures. Recommendations for clinical practice were based on consensus. Variations in study quality, procedures, and statistical analyses challenged data extraction, interpretation, and synthesis. There were insufficient data for calculation of PoTP values for 63 of 119 tests. No single test/measure demonstrated strong PoTP values. Five history questions, 2 self-report measures, and 5 performance-based measures may have clinical usefulness in assessing risk of falling on the basis of cumulative PoTP. Berg Balance Scale score (≤50 points), Timed Up and Go time (≥12 seconds), and 5-times sit-to-stand time (≥12 seconds) are currently the most evidence-supported functional measures for determining individual risk of future falls. Shortfalls identified during the review will direct researchers to address knowledge gaps.
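
    The posttest probability (PoTP) calculation underlying this review follows the standard likelihood-ratio conversion: LR+ = Sn / (1 - Sp), posttest odds = pretest odds x LR, and PoTP = odds / (1 + odds). A minimal sketch with hypothetical Sn, Sp, and pretest values (not figures from the review):

```python
# Sketch of the posttest probability (PoTP) conversion used in fall-risk
# reviews like the one above.  Sn, Sp, and the pretest probability below
# are hypothetical, not values from the review.

def post_test_probability(pretest_p, sn, sp, positive=True):
    """Convert a pretest probability to a posttest probability using
    the test's positive or negative likelihood ratio."""
    lr = sn / (1 - sp) if positive else (1 - sn) / sp
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical screen: Sn = 0.80, Sp = 0.70, pretest fall probability 0.30.
p_pos = post_test_probability(0.30, 0.80, 0.70)                  # positive result
p_neg = post_test_probability(0.30, 0.80, 0.70, positive=False)  # negative result
print(round(p_pos, 2), round(p_neg, 2))
```

    The gap between the positive and negative posttest probabilities is what determines a measure's clinical usefulness; the review's point is that for most single fall-risk tests this gap is modest, which is why it evaluates cumulative PoTP across combined measures.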

  4. Implementation of a High-Speed FPGA and DSP Based FFT Processor for Improving Strain Demodulation Performance in a Fiber-Optic-Based Sensing System

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    NASA's Aviation Safety and Security Program is pursuing research in on-board Structural Health Management (SHM) technologies for purposes of reducing or eliminating aircraft accidents due to system and component failures. Under this program, NASA Langley Research Center (LaRC) is developing a strain-based structural health-monitoring concept that incorporates a fiber optic-based measuring system for acquiring strain values. This fiber optic-based measuring system provides for the distribution of thousands of strain sensors embedded in a network of fiber optic cables. The resolution of strain value at each discrete sensor point requires a computationally demanding data reduction software process that, when hosted on a conventional processor, is not suitable for near real-time measurement. This report describes the development and integration of an alternative computing environment using dedicated computing hardware for performing the data reduction. Performance comparison between the existing and the hardware-based system is presented.
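    The data-reduction step in such fiber-optic sensing systems is, at its core, spectral analysis: a transform of the interference signal whose peak locations encode each sensor's strain. As an illustration only (the report does not publish its algorithm, and the mapping below is a hypothetical simplification), this sketch uses a direct DFT in pure Python to locate a dominant spectral bin; the FPGA/DSP system computes the equivalent FFT in dedicated hardware at near-real-time rates.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Magnitude spectrum of a real signal via a direct DFT.

    O(n^2) on a general-purpose CPU; an FPGA/DSP FFT core computes the
    same spectrum in O(n log n) with dedicated hardware.
    """
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def dominant_bin(samples):
    """Index of the strongest non-DC spectral component."""
    mags = dft_magnitudes(samples)
    return max(range(1, len(mags)), key=mags.__getitem__)

# Synthetic interference fringe with one dominant frequency at bin 5;
# in a real interrogator the peak location would map to a strain value.
sig = [math.cos(2 * math.pi * 5 * t / 64) for t in range(64)]
```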

  5. Hospitals Known for Nursing Excellence Perform Better on Value Based Purchasing Measures

    PubMed Central

    Lasater, Karen B.; Germack, Hayley D.; Small, Dylan S.; McHugh, Matthew D.

    2018-01-01

    It is well-established that hospitals recognized for good nursing care – Magnet hospitals – are associated with better patient outcomes. Less is known about how Magnet hospitals compare to non-Magnets on quality measures linked to Medicare reimbursement. The purpose of this study was to determine how Magnet hospitals perform compared to matched non-Magnet hospitals on Hospital Value Based Purchasing (VBP) measures. A cross-sectional analysis of three linked data sources was performed. The sample included 3,021 non-federal acute care hospitals participating in the VBP program (323 Magnets; 2,698 non-Magnets). Propensity score matching was used to match Magnet and non-Magnet hospitals with similar hospital characteristics. After matching, linear and logistic regression models were used to examine the relationship between Magnet status and VBP performance. After matching and adjusting for hospital characteristics, Magnet recognition predicted higher scores on Total Performance (Regression Coefficient [RC] = 1.66, p < 0.05), Clinical Processes (RC = 3.85; p < 0.01), and Patient Experience (RC = 6.33; p < 0.001). The relationships between Magnet recognition and the Outcome and Efficiency domains were not statistically significant. Magnet hospitals known for nursing excellence perform better on Hospital VBP measures. As healthcare systems adapt to evolving incentives that reward value, attention to nurses at the front lines may be central to ensuring high-value care for patients. PMID:28558604

  6. Performance Measures, Benchmarking and Value.

    ERIC Educational Resources Information Center

    McGregor, Felicity

    This paper discusses performance measurement in university libraries, based on examples from the University of Wollongong (UoW) in Australia. The introduction highlights the integration of information literacy into the curriculum and the outcomes of a 1998 UoW student satisfaction survey. The first section considers performance indicators in…

  7. Measuring the value of process improvement initiatives in a preoperative assessment center using time-driven activity-based costing.

    PubMed

    French, Katy E; Albright, Heidi W; Frenzel, John C; Incalcaterra, James R; Rubio, Augustin C; Jones, Jessica F; Feeley, Thomas W

    2013-12-01

    The value and impact of process improvement initiatives are difficult to quantify. We describe the use of time-driven activity-based costing (TDABC) in a clinical setting to quantify the value of process improvements in terms of cost, time and personnel resources. The problem addressed was the difficulty of identifying and measuring the cost savings of process improvement initiatives in a Preoperative Assessment Center (PAC). Our objective was to use TDABC to measure the value of process improvement initiatives that reduce the costs of performing a preoperative assessment while maintaining the quality of the assessment. We applied the principles of TDABC in a PAC to measure the value, from baseline, of two phases of performance improvement initiatives and to determine the impact of each implementation in terms of cost, time and efficiency. Through two rounds of performance improvements, we quantified an overall reduction in time spent by patient and personnel of 33%, which resulted in a 46% reduction in the costs of providing care in the center. The performance improvements resulted in a 17% decrease in the total number of full-time equivalents (FTEs) needed to staff the center and a 19% increase in the number of patients assessed in the center. Quality of care, as assessed by the rate of cancellations on the day of surgery, was not adversely impacted by the process improvements. © 2013 Published by Elsevier Inc.
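    TDABC's core arithmetic is simple: each resource gets a capacity cost rate (its total cost divided by its practical capacity in minutes), and a process costs the sum over its steps of minutes used times the resource's rate. A minimal sketch with hypothetical numbers (not the paper's data):

```python
def capacity_cost_rate(total_resource_cost, practical_capacity_minutes):
    """Cost per minute of supplying a resource: TDABC's key rate."""
    return total_resource_cost / practical_capacity_minutes

def process_cost(steps):
    """Cost of a process map: sum of (minutes used * resource rate)."""
    return sum(minutes * rate for minutes, rate in steps)

# Hypothetical annual figures for a preoperative assessment center:
nurse_rate = capacity_cost_rate(96_000, 120_000)   # $0.80 per minute
md_rate = capacity_cost_rate(240_000, 100_000)     # $2.40 per minute

# Per-patient process map before and after a process improvement:
baseline = process_cost([(30, nurse_rate), (20, md_rate)])   # $72.00
improved = process_cost([(20, nurse_rate), (12, md_rate)])   # $44.80
savings = 1 - improved / baseline                            # ~38% cheaper
```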

  8. Determining Risk of Falls in Community Dwelling Older Adults: A Systematic Review and Meta-analysis Using Posttest Probability

    PubMed Central

    Fritz, Stacy; Middleton, Addie; Allison, Leslie; Wingood, Mariana; Phillips, Emma; Criss, Michelle; Verma, Sangita; Osborne, Jackie; Chui, Kevin K.

    2017-01-01

    Background: Falls and their consequences are significant concerns for older adults, caregivers, and health care providers. Identification of fall risk is crucial for appropriate referral to preventive interventions. Falls are multifactorial; no single measure is an accurate diagnostic tool. There is limited information on which history question, self-report measure, or performance-based measure, or combination of measures, best predicts future falls. Purpose: First, to evaluate the predictive ability of history questions, self-report measures, and performance-based measures for assessing fall risk of community-dwelling older adults by calculating and comparing posttest probability (PoTP) values for individual test/measures. Second, to evaluate usefulness of cumulative PoTP for measures in combination. Data Sources: To be included, a study must have used fall status as an outcome or classification variable, have a sample size of at least 30 ambulatory community-living older adults (≥65 years), and track falls occurrence for a minimum of 6 months. Studies in acute or long-term care settings, as well as those including participants with significant cognitive or neuromuscular conditions related to increased fall risk, were excluded. Searches of Medline/PubMED and Cumulative Index of Nursing and Allied Health (CINAHL) from January 1990 through September 2013 identified 2294 abstracts concerned with fall risk assessment in community-dwelling older adults. Study Selection: Because the number of prospective studies of fall risk assessment was limited, retrospective studies that classified participants (faller/nonfallers) were also included. Ninety-five full-text articles met inclusion criteria; 59 contained necessary data for calculation of PoTP. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) was used to assess each study's methodological quality. Data Extraction: Study design and QUADAS score determined the level of evidence. 
Data for calculation of sensitivity (Sn), specificity (Sp), likelihood ratios (LR), and PoTP values were available for 21 of 46 measures used as search terms. An additional 73 history questions, self-report measures, and performance-based measures were used in included articles; PoTP values could be calculated for 35. Data Synthesis: Evidence tables including PoTP values were constructed for 15 history questions, 15 self-report measures, and 26 performance-based measures. Recommendations for clinical practice were based on consensus. Limitations: Variations in study quality, procedures, and statistical analyses challenged data extraction, interpretation, and synthesis. There were insufficient data for calculation of PoTP values for 63 of 119 tests. Conclusions: No single test/measure demonstrated strong PoTP values. Five history questions, 2 self-report measures, and 5 performance-based measures may have clinical usefulness in assessing risk of falling on the basis of cumulative PoTP. Berg Balance Scale score (≤50 points), Timed Up and Go time (≥12 seconds), and 5 times sit-to-stand time (≥12 seconds) are currently the most evidence-supported functional measures for determining individual risk of future falls. Shortfalls identified during the review will direct researchers to address knowledge gaps. PMID:27537070
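    The posttest probability used throughout this review follows from Bayes' rule in odds form: posttest odds = pretest odds × likelihood ratio, where LR+ = Sn / (1 − Sp) and LR− = (1 − Sn) / Sp. A sketch with hypothetical test characteristics (not values drawn from the review):

```python
def likelihood_ratios(sn, sp):
    """Positive and negative likelihood ratios from Sn and Sp."""
    return sn / (1 - sp), (1 - sn) / sp

def posttest_probability(pretest_prob, lr):
    """Convert pretest probability to posttest probability via odds * LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical fall-risk test with Sn = 0.80, Sp = 0.70, applied to a
# population with a 30% pretest probability of falling:
lr_pos, lr_neg = likelihood_ratios(0.80, 0.70)
potp = posttest_probability(0.30, lr_pos)   # positive result raises risk
```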

  9. The Role of Patient-Reported Outcome Measures in Value-Based Payment Reform.

    PubMed

    Squitieri, Lee; Bozic, Kevin J; Pusic, Andrea L

    2017-06-01

    The U.S. health care system is currently experiencing profound change. Pressure to improve the quality of patient care and control costs has caused a rapid shift from traditional volume-driven fee-for-service reimbursement to value-based payment models. Under the 2015 Medicare Access and Children's Health Insurance Program Reauthorization Act, providers will be evaluated on the basis of quality and cost efficiency and ultimately receive adjusted reimbursement according to their performance. Although current performance metrics do not incorporate patient-reported outcome measures (PROMs), many wonder whether and how PROMs will eventually fit into value-based payment reform. On November 17, 2016, the second annual Patient-Reported Outcomes in Healthcare Conference brought together international stakeholders across all health care disciplines to discuss the potential role of PROMs in value-based health care reform. The purpose of this article was to summarize the findings from this conference in the context of recent literature and guidelines to inform the implementation of PROMs in value-based payment models. Recommendations for evaluating key perspectives and measurement goals are made to facilitate appropriate use of PROMs to best benefit and amplify the voice of our patients. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  10. Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities for FY 2017, SNF Value-Based Purchasing Program, SNF Quality Reporting Program, and SNF Payment Models Research. Final rule.

    PubMed

    2016-08-05

    This final rule updates the payment rates used under the prospective payment system (PPS) for skilled nursing facilities (SNFs) for fiscal year (FY) 2017. In addition, it specifies a potentially preventable readmission measure for the Skilled Nursing Facility Value-Based Purchasing Program (SNF VBP), and implements requirements for that program, including performance standards, a scoring methodology, and a review and correction process for performance information to be made public, aimed at implementing value-based purchasing for SNFs. Additionally, this final rule includes additional policies and measures in the Skilled Nursing Facility Quality Reporting Program (SNF QRP). This final rule also responds to comments on the SNF Payment Models Research (PMR) project.

  11. Missing value imputation for microarray data: a comprehensive comparison study and a web tool

    PubMed Central

    2013-01-01

    Background Microarray data are usually peppered with missing values for various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed for improving the performance of microarray data analyses. Although many algorithms have been developed, the choice of optimal algorithm remains debated. Existing comparisons of algorithm performance are not yet comprehensive, particularly in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. Results In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether datasets from different species have a different impact on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of dataset, not on the species from which the samples come. In addition to the statistical measure, two other measures with biological meanings are useful for reflecting the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices for handling missing values in most microarray datasets. Conclusions In this work, we carried out a comprehensive comparison of the algorithms for microarray missing value imputation. 
Based on such a comprehensive comparison, researchers could choose the optimal algorithm for their datasets easily. Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses. PMID:24565220
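    To make the idea concrete, here is a deliberately simplified neighbour-based imputation in the spirit of KNNimpute: a missing entry is filled from the k complete rows most similar on the target row's observed columns. Local-least-squares methods replace this neighbour mean with a regression fit. The data are toy values, not microarray measurements:

```python
def knn_impute(matrix, k=2):
    """Fill None entries using the mean of the k nearest complete rows.

    Distance is squared Euclidean over the columns observed in the
    target row. A teaching sketch of neighbour-based imputation, not a
    production implementation.
    """
    complete = [r for r in matrix if None not in r]
    filled = []
    for row in matrix:
        if None not in row:
            filled.append(list(row))
            continue
        obs = [j for j, v in enumerate(row) if v is not None]
        # Rank complete rows by similarity on the observed columns.
        neighbours = sorted(
            complete,
            key=lambda r: sum((row[j] - r[j]) ** 2 for j in obs),
        )[:k]
        filled.append([v if v is not None
                       else sum(n[j] for n in neighbours) / len(neighbours)
                       for j, v in enumerate(row)])
    return filled

data = [[1.0, 2.0, 3.0],
        [1.1, 2.1, 2.9],
        [5.0, 5.0, 5.0],
        [1.05, None, 3.1]]   # last row: middle value missing
```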

  12. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances in the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is then revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the changes in PACS measurements according to tilt value show no significant correlation (p > 0.05). However, significant correlations appear between the real values and the DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvement through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.

  13. Performance Indicators in Spine Surgery.

    PubMed

    St-Pierre, Godefroy Hardy; Yang, Michael H; Bourget-Murray, Jonathan; Thomas, Ken C; Hurlbert, Robin John; Matthes, Nikolas

    2018-02-15

    Systematic review. To elucidate how performance indicators are currently used in spine surgery. The Patient Protection and Affordable Care Act has given significant traction to the idea that healthcare must provide value to the patient, notably through the introduction of hospital value-based purchasing. The key to implementing this new paradigm is to measure that value, chiefly through performance indicators. MEDLINE, CINAHL Plus, EMBASE, and Google Scholar were searched for studies reporting the use of performance indicators specific to spine surgery. We followed the PRISMA-P methodology for a systematic review for entries from January 1980 to July 2016. All full-text articles were then reviewed to identify any measure of performance published within the article. Each measure was examined against three criteria: an established standard, exclusion/risk adjustment, and benchmarking, to determine whether it constituted a performance indicator. The initial search yielded 85 results, among which two relevant studies were identified. The extended search gave a total of 865 citations across databases, among which 15 new articles were identified. The grey literature search provided five additional reports, which in turn led to six additional articles. A total of 27 full-text articles and reports were retrieved and reviewed. We were unable to identify any performance indicators; the articles presenting a measure of performance were organized based on how many criteria they lacked. We further examine the next steps to be taken to craft the first performance indicator in spine surgery. The science of performance measurement applied to spine surgery is still in its infancy. Current outcome metrics used in clinical settings require refinement to become performance indicators. Current registry work is providing the necessary foundation but requires benchmarking to truly measure performance. Level of Evidence: 1.

  14. Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by Stratus OCT.

    PubMed

    Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel

    2010-02-01

    To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. Both MLCs performed very well, with similar diagnostic performance; input parameters had a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
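    The AROC values compared in such studies can be computed without tracing the ROC curve explicitly: the area equals the probability that a randomly chosen diseased case receives a higher classifier output than a randomly chosen healthy one (the Mann-Whitney U interpretation, with ties counting one half). A sketch with made-up scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Counts, over all (positive, negative) pairs, how often the positive
    case scores higher (ties count 0.5), normalised by the pair count.
    """
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier outputs: glaucoma (positive) vs healthy (negative).
aroc = auc([0.9, 0.8, 0.75, 0.6], [0.7, 0.4, 0.3, 0.2])
```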

  15. Measuring Success in Health Care Value-Based Purchasing Programs

    PubMed Central

    Damberg, Cheryl L.; Sorbero, Melony E.; Lovejoy, Susan L.; Martsolf, Grant R.; Raaen, Laura; Mandel, Daniel

    2014-01-01

    Abstract Value-based purchasing (VBP) refers to a broad set of performance-based payment strategies that link financial incentives to health care providers' performance on a set of defined measures in an effort to achieve better value. The U.S. Department of Health and Human Services is advancing the implementation of VBP across an array of health care settings in the Medicare program in response to requirements in the 2010 Patient Protection and Affordable Care Act, and policymakers are grappling with many decisions about how best to design and implement VBP programs so that they are successful in achieving stated goals. This article summarizes the current state of knowledge about VBP based on a review of the published literature, a review of publicly available documentation from VBP programs, and discussions with an expert panel composed of VBP program sponsors, health care providers and health systems, and academic researchers with VBP evaluation expertise. Three types of VBP models were the focus of the review: (1) pay-for-performance programs, (2) accountable care organizations, and (3) bundled payment programs. The authors report on VBP program goals and what constitutes success; the evidence on the impact of these programs; factors that characterize high- and low-performing providers in VBP programs; the measures, incentive structures, and benchmarks used by VBP programs; evidence on spillover effects and unintended consequences; and gaps in the knowledge base. PMID:28083347

  16. The Effect of Self-Reported and Performance-Based Functional Impairment on Future Hospital Costs of Community-Dwelling Older Persons

    ERIC Educational Resources Information Center

    Reuben, David B.; Seeman, Teresa E.; Keeler, Emmett; Hayes, Risa P.; Bowman, Lee; Sewall, Ase; Hirsch, Susan H.; Wallace, Robert B.; Guralnik, Jack M.

    2004-01-01

    Purpose: We determined the prognostic value of self-reported and performance-based measurement of function, including functional transitions and combining different measurement approaches, on utilization. Design and Methods: Our cohort study used the 6th, 7th, and 10th waves of three sites of the Established Populations for Epidemiologic Studies…

  17. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    NASA Astrophysics Data System (ADS)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    As information technology develops to meet growing needs, adverse and hard-to-prevent actions such as data theft are also emerging. This study therefore discusses cryptography and steganography as means of addressing these problems, using the modified Vigenère cipher, least significant bit (LSB), and dictionary-based compression methods. Performance is measured objectively with the peak signal-to-noise ratio (PSNR) and subjectively with the mean opinion score (MOS), and is compared with other methods such as spread spectrum and pixel value differencing. The comparison shows that the proposed approach outperforms those methods, with MSE values of 0.0191622-0.05275 and PSNR of 60.909-65.306 dB for a hidden file size of 18 kb, and MOS values of 4.214-4.722, i.e., image quality approaching very good.
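    The PSNR figures reported above are a deterministic function of the mean squared error between cover and stego images: PSNR = 10 · log10(MAX² / MSE), with MAX = 255 for 8-bit images. A sketch on toy pixel values (hypothetical; LSB embedding changes each pixel by at most 1):

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equal-sized images (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_val=255):
    """Peak signal-to-noise ratio in dB; higher means less visible change."""
    err = mse(img_a, img_b)
    return float("inf") if err == 0 else 10 * math.log10(max_val ** 2 / err)

# Hypothetical cover vs stego pixels after LSB embedding:
cover = [120, 121, 122, 123]
stego = [120, 120, 123, 123]
```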

  18. Solar UV radiation exposure of seamen - Measurements, calibration and model calculations of erythemal irradiance along ship routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feister, Uwe; Meyer, Gabriele; Kirst, Ulrich

    2013-05-10

    Seamen working on vessels that sail tropical and subtropical routes are at risk of receiving high doses of solar erythemal radiation. Due to small solar zenith angles and low ozone values, the UV index and erythemal dose are much higher than at mid- and high latitudes. UV index values over tropical and subtropical oceans can exceed UVI = 20, more than double typical mid-latitude values, and the daily erythemal dose can exceed 30 times typical mid-latitude winter values. Measurements of erythemal exposure of different body parts of seamen have been performed along 4 routes of merchant vessels. The database has been extended by two years of continuous solar irradiance measurements taken on the mast top of RV METEOR. Radiative transfer model calculations for clear sky along the ship routes have been performed that use satellite-based input for ozone and aerosols to provide maximum erythemal irradiance and dose. The whole database is intended to be used to derive the individual erythemal exposure of seamen during work time.

  19. Pilot study of Iopamidol-based quantitative pH imaging on a clinical 3T MR scanner.

    PubMed

    Müller-Lutz, Anja; Khalil, Nadia; Schmitt, Benjamin; Jellus, Vladimir; Pentang, Gael; Oeltzschner, Georg; Antoch, Gerald; Lanzman, Rotem S; Wittsack, Hans-Jörg

    2014-12-01

    The objective of this study was to show the feasibility to perform Iopamidol-based pH imaging via clinical 3T magnetic resonance imaging (MRI) using chemical exchange saturation transfer (CEST) imaging with pulse train presaturation. The pulse train presaturation scheme of a CEST sequence was investigated for Iopamidol-based pH measurements using a 3T magnetic resonance (MR) scanner. The CEST sequence was applied to eight tubes filled with 100-mM Iopamidol solutions with pH values ranging from 5.6 to 7.0. Calibration curves for pH quantification were determined. The dependence of pH values on the concentration of Iopamidol was investigated. An in vivo measurement was performed in one patient who had undergone a previous contrast-enhanced computed tomography (CT) scan with Iopamidol. The pH values of urine measured with CEST MRI and with a pH meter were compared. In the measured pH range, pH imaging using CEST imaging with pulse train presaturation was possible. Dependence between the pH value and the concentration of Iopamidol was not observed. In the in vivo investigation, the pH values in the human bladder measured by the Iopamidol CEST sequence and in urine were consistent. Our study shows the feasibility of using CEST imaging with Iopamidol for quantitative pH mapping in vitro and in vivo on a 3T MR scanner.

  20. Subsonic Longitudinal Performance Coefficient Extraction from Shuttle Flight Data: an Accuracy Assessment for Determination of Data Base Updates

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Mcconnell, J. G.; Compton, H. R.

    1983-01-01

    Longitudinal performance comparisons between flight-derived and predicted values are presented for the first five NASA Space Shuttle Columbia flights. Though subsonic comparisons are emphasized, comparisons during the transonic and low supersonic regions of flight are included. Computed air data information based on the remotely sensed atmospheric measurements as well as in situ Orbiter Air Data System (ADS) measurements were incorporated. Each air data source provides for comparisons versus the predicted values from the LaRC data base. Principally, comparisons of the lift-to-drag ratio (L/D), lift coefficient (CL), and drag coefficient (CD) are presented, though some pitching moment results are included. Similarities in flight conditions and spacecraft configuration during the first five flights are discussed. Contributions from the various elements of the data base are presented and the overall differences observed between the flight and predicted values are discussed in terms of expected variations. A discussion on potential data base updates is presented based on the results from the five flights to date.

  1. Self-rated health and mortality: could clinical and performance-based measures of health and functioning explain the association?

    PubMed

    Lyyra, Tiina-Mari; Heikkinen, Eino; Lyyra, Anna-Liisa; Jylhä, Marja

    2006-01-01

    It is well established that self-rated health (SRH) predicts mortality even when other indicators of health status are taken into account. It has been suggested that SRH measures a wide array of mortality-related physiological and pathological characteristics not captured by the covariates included in the analyses. Our aim was to test this hypothesis by examining the predictive value of SRH on mortality while controlling for different measurements of body structure, performance-based functioning, and diagnosed diseases in a population-based, prospective study over an 18-year follow-up. Subjects consisted of 257 male residents of the city of Jyväskylä, central Finland, aged 51-55 and 71-75 years. Among the 71-75-year-olds, the association between SRH and mortality was weaker over the longer than over the shorter follow-up period. In the multivariate Cox regression models, with an 18-year follow-up time for middle-aged men and a 10-year follow-up time for older men, SRH predicted mortality even when the anthropometrics, clinical chemistry, and performance-based measures of functioning were controlled for, but not when the number of chronic diseases was included. Although our results confirm the hypothesis that the predictive value of SRH can be explained by diagnosed diseases, its predictive power remained when the clinical and performance-based measures of health and functioning were controlled for.

  2. Variability in δ13C values between individual Daphnia ephippia: Implications for palaeo-studies

    NASA Astrophysics Data System (ADS)

    Schilder, Jos; van Roij, Linda; Reichart, Gert-Jan; Sluijs, Appy; Heiri, Oliver

    2018-06-01

    The stable carbon isotope ratio (δ13C value) of Daphnia spp. resting egg shells (ephippia) provides information on past changes in Daphnia diet. Measurements are typically performed on samples of ≥20 ephippia, which obscures the range of values associated with individual ephippia. Using a recently developed laser ablation-based technique, we perform multiple δ13C analyses on individual ephippia, which show a high degree of reproducibility (standard deviations 0.1-0.5‰). We further measured δ13C values of 13 ephippia from surface sediments of three Swiss lakes. In the well-oxygenated lake with low methane concentrations, δ13C values are close to values typical for algae (-31.4‰) and the range in values is relatively small (5.8‰). This variability is likely driven by seasonal (or inter-annual) variability in algae δ13C values. In two seasonally anoxic lakes with higher methane concentrations, average values were lower (-41.4 and -43.9‰, respectively) and the ranges much larger (10.7 and 20.0‰). We attribute this variability to seasonal variation in incorporation of methane-derived carbon. In one lake we identify two statistically distinct isotopic populations, which may reflect separate production peaks. The potentially large within-sample variability should be considered when interpreting small-amplitude, short-lived isotope excursions based on samples consisting of few ephippia. We show that measurements on single ephippia can be performed using laser ablation, which allows for refined assessments of past Daphnia diet and carbon cycling in lake food webs. Furthermore, our study provides a basis for similar measurements on other chitinous remains (e.g. from chironomids, bryozoans).

  3. U50: A New Metric for Measuring Assembly Output Based on Non-Overlapping, Target-Specific Contigs.

    PubMed

    Castro, Christina J; Ng, Terry Fei Fan

    2017-11-01

    Advances in next-generation sequencing technologies enable routine genome sequencing, generating millions of short reads. A crucial step for full genome analysis is the de novo assembly, and currently, performance of different assembly methods is measured by a metric called N50. However, the N50 value can produce skewed, inaccurate results when complex data are analyzed, especially for viral and microbial datasets. To provide a better assessment of assembly output, we developed a new metric called U50. The U50 identifies unique, target-specific contigs by using a reference genome as baseline, aiming to circumvent some limitations that are inherent to the N50 metric. Specifically, the U50 program removes overlapping sequence of multiple contigs by utilizing a mask array, so the performance of the assembly is only measured by unique contigs. We compared simulated and real datasets by using U50 and N50, and our results demonstrated that U50 has the following advantages over N50: (1) reducing erroneously large N50 values due to a poor assembly, (2) eliminating overinflated N50 values caused by large measurements from overlapping contigs, (3) eliminating diminished N50 values caused by an abundance of small contigs, and (4) allowing comparisons across different platforms or samples based on the new percentage-based metric UG50%. The use of the U50 metric allows for a more accurate measure of assembly performance by analyzing only the unique, non-overlapping contigs. In addition, most viral and microbial sequencing has high background noise (i.e., host and other non-targets), which contributes to a skewed, misrepresented N50 value; this is corrected by U50. Also, the UG50% can be used to compare assembly results from different samples or studies, cross-comparisons that cannot be performed with N50.
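    The two metrics can be contrasted in a few lines. N50 is the contig length at which the length-sorted contigs first cover half of the total assembly; the U50-style function below is a simplification of the published program (it assumes each contig aligns to the reference in a single interval) that masks reference positions so overlapping contigs are counted only once before taking the N50 of the unique pieces:

```python
def n50(contig_lengths):
    """Length L such that contigs of length >= L cover at least half
    of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

def u50_style(contig_intervals, reference_length):
    """U50-style sketch: mask the reference so overlapping contigs are
    counted once, then take the N50 of the unique covered pieces.

    Intervals are half-open (start, end) alignments to the reference;
    single-interval alignment per contig is an assumption of this sketch.
    """
    mask = [False] * reference_length
    unique_lengths = []
    for start, end in contig_intervals:
        unique = [i for i in range(start, end) if not mask[i]]
        for i in unique:
            mask[i] = True
        if unique:
            unique_lengths.append(len(unique))
    return n50(unique_lengths)
```

With heavily overlapping contigs, the masked version reports the assembly's unique coverage rather than double-counting the shared sequence.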

  4. Development, implementation, and characterization of a standalone embedded viscosity measurement system based on the impedance spectroscopy of a vibrating wire sensor

    NASA Astrophysics Data System (ADS)

    Santos, José; Janeiro, Fernando M.; Ramos, Pedro M.

    2015-10-01

    This paper presents an embedded liquid viscosity measurement system based on a vibrating wire sensor. Although multiple viscometers based on different working principles are commercially available, there is still a market demand for a dedicated measurement system capable of performing accurate, fast measurements and requiring little or no operator training for simple systems and solution monitoring. The developed embedded system is based on a vibrating wire sensor that works by measuring the impedance response of the sensor, which depends on the viscosity and density of the liquid in which the sensor is immersed. The core of the embedded system is a digital signal processor (DSP) which controls the waveform generation and acquisitions for the measurement of the impedance frequency response. The DSP also processes the acquired waveforms and estimates the liquid viscosity. The user can interact with the measurement system through a keypad and an LCD or through a computer with a USB connection for data logging and processing. The presented system is tested on a set of viscosity standards and the estimated values are compared with the standard manufacturer specified viscosity values. A stability study of the measurement system is also performed.

  5. Physician-owned Surgical Hospitals Outperform Other Hospitals in the Medicare Value-based Purchasing Program

    PubMed Central

    Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott

    2016-01-01

    Background The Hospital Value-Based Purchasing Program measures value of care provided by participating Medicare hospitals while creating financial incentives for quality improvement and fostering increased transparency. Limited information is available comparing hospital performance across healthcare business models. Study Design 2015 hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score, hospital case mix index, and differences after adjustment for differences in hospital case mix index. Results Of 3089 hospitals with Total Performance Scores (TPS), categories of representative healthcare business models included 104 Physician-owned Surgical Hospitals (POSH), 111 University HealthSystem Consortium (UHC), 14 US News & World Report Honor Roll (USNWR) Hospitals, 33 Kaiser Permanente, and 124 Pioneer Accountable Care Organization affiliated hospitals. Estimated mean TPS for POSH (64.4, 95% CI 61.83, 66.38) and Kaiser (60.79, 95% CI 56.56, 65.03) were significantly higher compared to all remaining hospitals, while UHC members (36.8, 95% CI 34.51, 39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included POSH (mean = 2.32, p < 0.0001), USNWR honorees (mean = 2.24, p = 0.0140), and UHC members (mean = 1.99, p < 0.0001), while Kaiser Permanente hospitals had a lower case mix value (mean = 1.54, p < 0.0001). Re-estimation of TPS after adjustment for differences in hospital case mix index did not change the original results. Conclusions The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. Closer inspection of high-value hospitals may guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program Hospitals. PMID:27502368
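
    The adjustment step described above (comparing mean total performance scores by business model before and after controlling for case mix index) can be sketched with ordinary least squares. The data in the test are invented for illustration; this is not the authors' model or dataset.

```python
import numpy as np

def group_effects(tps, group, cmi):
    """OLS estimates of business-model effects on total performance score
    (TPS), unadjusted and adjusted for hospital case mix index (CMI).
    Returns (unadjusted coefficients, adjusted coefficients); the first
    group level in sorted order serves as the reference category."""
    tps = np.asarray(tps, dtype=float)
    cmi = np.asarray(cmi, dtype=float)
    levels = sorted(set(group))
    # Dummy-code all group levels except the reference.
    dummies = [(np.array(group) == g).astype(float) for g in levels[1:]]
    X_unadj = np.column_stack([np.ones(len(tps))] + dummies)
    X_adj = np.column_stack([X_unadj, cmi])
    beta_u, *_ = np.linalg.lstsq(X_unadj, tps, rcond=None)
    beta_a, *_ = np.linalg.lstsq(X_adj, tps, rcond=None)
    return beta_u, beta_a
```

    When CMI differs systematically between groups, the adjusted group coefficient can differ noticeably from the raw difference in group means, which is the comparison the study reports.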

  6. Profit-Based Model Selection for Customer Retention Using Individual Customer Lifetime Values.

    PubMed

    Óskarsdóttir, María; Baesens, Bart; Vanthienen, Jan

    2018-03-01

    The goal of customer retention campaigns, by design, is to add value and enhance the operational efficiency of businesses. For organizations that strive to retain their customers in saturated, and sometimes fast moving, markets such as the telecommunication and banking industries, implementing customer churn prediction models that perform well and in accordance with the business goals is vital. The expected maximum profit (EMP) measure is tailored toward this problem by taking into account the costs and benefits of a retention campaign and estimating its worth for the organization. Unfortunately, the measure assumes a fixed and equal customer lifetime value (CLV) for all customers, which has been shown to not correspond well with reality. In this article, we extend the EMP measure to take into account the variability in the lifetime values of customers, thereby basing it on individual characteristics. We demonstrate how to incorporate the heterogeneity of CLVs when CLVs are known, when their prior distribution is known, and when neither is known. By taking into account individual CLVs, our proposed approach of measuring model performance gives novel insights when deciding on a customer retention campaign. The method depends on the characteristics of the customer base, is compliant with modern business analytics, and accommodates the data-driven culture that has manifested itself within organizations.
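
    The core idea (letting expected campaign profit depend on customer-specific CLVs rather than one fixed value) can be sketched as follows. This is a simplified expected-profit calculation under assumed parameters (acceptance probability gamma, incentive and contact costs), not the EMP measure itself.

```python
def campaign_profit(churn_prob, clv, targeted, gamma=0.3, incentive=10.0, contact_cost=1.0):
    """Expected profit of a retention campaign with individual customer
    lifetime values (CLVs). gamma is the assumed probability that a
    contacted would-be churner accepts the incentive and stays."""
    profit = 0.0
    for p, v, t in zip(churn_prob, clv, targeted):
        if t:
            # A retained churner is worth their individual CLV minus the
            # incentive; every contact costs contact_cost regardless of outcome.
            profit += p * gamma * (v - incentive) - contact_cost
    return profit
```

    Under a fixed-CLV assumption, every targeted customer contributes the same retention benefit; with individual CLVs, the same churn score can justify contacting one customer and skipping another.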

  7. What can the national quality forum tell us about performance measurement in anesthesiology?

    PubMed

    Hyder, Joseph A; Niconchuk, Jonathan; Glance, Laurent G; Neuman, Mark D; Cima, Robert R; Dutton, Richard P; Nguyen, Louis L; Fleisher, Lee A; Bader, Angela M

    2015-02-01

    Anesthesiologists face increasing pressure to demonstrate the value of the care they provide, whether locally or nationally through public reporting and payor requirements. In this article, we describe the current state of performance measurement in anesthesia care at the national level and highlight gaps and opportunities in performance measurement for anesthesiologists. We evaluated all endorsed performance measures in the National Quality Forum (NQF), the clearinghouse for all federal performance measures, and classified all measures as follows: (1) anesthesia-specific; (2) surgery-specific; (3) jointly attributable; or (4) other. We used NQF-provided descriptors to characterize measures in terms of (1) structure, process, outcome, or efficiency; (2) patients, disease, and events targeted; (3) procedural specialty; (4) reporting eligibility; (5) measure stewards; and (6) timing in the care stream. Of the 637 endorsed performance measures, few (6, 1.0%) were anesthesia-specific. An additional 39 measures (6.1%) were surgery-specific, and 67 others (10.5%) were jointly attributable. "Anesthesia-specific" measures addressed preoperative antibiotic timing (n = 4), normothermia (n = 1), and protocol use for the placement of central venous catheters (n = 1). Jointly attributable measures included outcome measures (n = 49/67, 73.1%), which were weighted toward mortality alone (n = 24) and cardiac surgery (n = 14). Other jointly attributable measures addressed orthopedic surgery (n = 4), general surgical oncologic resections (n = 12), or nonspecified surgeries (n = 15), but none specifically addressed anesthesia care outside the operating room such as for endoscopy. Only 4 measures were eligible for value-based purchasing. No named anesthesiology professional groups were among measure stewards, but surgical professional groups (n = 33/67, 47%) were frequent measure stewards.
Few NQF performance measures are specific to anesthesia practice, and none of these appears to demonstrate the value of anesthesia care or differentiate high-quality providers. To demonstrate their role in patient-centered, outcome-driven care, anesthesiologists may consider actively partnering in jointly attributable or team-based reporting. Future measures may incorporate surgical procedures not proportionally represented, as well as procedural and sedation care provided in nonoperating room settings.

  8. Eco-efficiency in extended supply chains: a case study of furniture production.

    PubMed

    Michelsen, Ottar; Fet, Annik Magerholm; Dahlsrud, Alexander

    2006-05-01

    This paper presents a methodology for understanding and measuring eco-efficiency in extended supply chains (ESCs). The extended supply chain includes all processes in the life cycle of a product, and eco-efficiency is measured as the relative environmental and value performance of one ESC compared with other ESCs. The paper is based on a case study of furniture production in Norway. Nine different environmental performance indicators are identified, based on suggestions from the World Business Council for Sustainable Development and on additional indicators shown to have significant impacts in the life cycle of the products. Value performance is measured as the inverse of life cycle cost. The eco-efficiency of six different chair models is calculated, and the relative values are shown graphically in XY diagrams. This provides information about the relative performance of the products, which is valuable in green procurement processes. The same method is also used for analysing changes in eco-efficiency when possible alterations in the ESC are introduced. It is shown that a small and realistic change of end-of-life treatment significantly changes the eco-efficiency of a product.
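
    The relative XY positioning described above can be sketched as follows, with value performance taken as inverse life cycle cost and each axis normalised by the mean across the compared supply chains. The impact scores and costs below are hypothetical, not the case-study data.

```python
def relative_performance(products):
    """Relative eco-efficiency coordinates for an XY plot:
    x = environmental impact, y = value performance (inverse life cycle
    cost), each divided by the mean over all compared supply chains.
    `products` maps name -> (environmental_impact_score, life_cycle_cost)."""
    env = {name: impact for name, (impact, _) in products.items()}
    val = {name: 1.0 / cost for name, (_, cost) in products.items()}
    env_mean = sum(env.values()) / len(env)
    val_mean = sum(val.values()) / len(val)
    return {name: (env[name] / env_mean, val[name] / val_mean) for name in products}
```

    Points above and to the left of (1, 1) are better than average on both axes, which is the kind of relative comparison the paper uses for the six chair models.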

  9. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    PubMed

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

    The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, thus degrading system performance. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communication systems. The SC based tools are used to predict key parameters of an FSO communication system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams, which is particularly important during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.

  10. Physician-Owned Surgical Hospitals Outperform Other Hospitals in Medicare Value-Based Purchasing Program.

    PubMed

    Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott

    2016-10-01

    The Hospital Value-Based Purchasing Program measures value of care provided by participating Medicare hospitals and creates financial incentives for quality improvement and fosters increased transparency. Limited information is available comparing hospital performance across health care business models. The 2015 Hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score, hospital case mix index, and differences after adjustment for differences in hospital case mix index. Of 3,089 hospitals with total performance scores, categories of representative health care business models included 104 physician-owned surgical hospitals, 111 University HealthSystem Consortium, 14 US News & World Report Honor Roll hospitals, 33 Kaiser Permanente, and 124 Pioneer accountable care organization affiliated hospitals. Estimated mean total performance scores for physician-owned surgical hospitals (64.4; 95% CI, 61.83-66.38) and Kaiser Permanente (60.79; 95% CI, 56.56-65.03) were significantly higher compared with all remaining hospitals, and University HealthSystem Consortium members (36.8; 95% CI, 34.51-39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included physician-owned surgical hospitals (mean 2.32; p < 0.0001), US News & World Report honorees (mean 2.24; p = 0.0140), and University HealthSystem Consortium members (mean 1.99; p < 0.0001), and Kaiser Permanente hospitals had lower case mix value (mean 1.54; p < 0.0001). Re-estimation of total performance scores did not change the original results after adjustment for differences in hospital case mix index. The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. 
Closer inspection of high-value hospitals can guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program Hospitals. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Surface micromachined MEMS deformable mirror based on hexagonal parallel-plate electrostatic actuator

    NASA Astrophysics Data System (ADS)

    Ma, Wenying; Ma, Changwei; Wang, Weimin

    2018-03-01

    Deformable mirrors (DMs) based on microelectromechanical system (MEMS) technology are increasingly being applied in adaptive optics (AO) systems for astronomical telescopes and human eyes. In this paper, a MEMS DM with hexagonal actuators is proposed and designed. The relationship between the structural design and the performance parameters, mainly actuator coupling, is carefully analyzed and calculated, and the optimum value of actuator coupling is obtained. A 7-element DM prototype is fabricated using the commercially available standard three-layer polysilicon surface-micromachining multi-user MEMS process (PolyMUMPs). Key performance characteristics, including surface figure and the voltage-displacement curve, are measured with a 3D white-light profiler. The measured performance is highly consistent with the theoretical values. The proposed DM will benefit the miniaturization of AO systems and lower their cost.

  12. Power and sensitivity of alternative fit indices in tests of measurement invariance.

    PubMed

    Meade, Adam W; Johnson, Emily C; Braddy, Phillip W

    2008-05-01

    Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known not to be invariant. The results indicate that AFIs are much less sensitive to sample size and more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general change-in-CFI value (.002) seemed to perform well in the analyses, condition-specific change-in-NCI values exhibited better performance than a single change-in-NCI value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.
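
    As a minimal sketch of the suggested reporting practice, a drop in CFI beyond the general cutoff flags a lack of invariance when the constrained (invariant-loadings) model is compared with the unconstrained model. The full procedure in the article also uses condition-specific tables of McDonald's NCI values, which this sketch omits.

```python
def lacks_invariance(cfi_constrained, cfi_unconstrained, delta_cfi_cutoff=0.002):
    """Return True when constraining the model lowers CFI by more than the
    cutoff, suggesting the constrained (invariant) model fits worse, i.e.
    measurement invariance does not hold."""
    delta_cfi = cfi_unconstrained - cfi_constrained
    return delta_cfi > delta_cfi_cutoff
```

    For example, a CFI drop from .955 to .950 (ΔCFI = .005) exceeds the .002 cutoff, while a drop to .954 (ΔCFI = .001) does not.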

  13. Current State of Value-Based Purchasing Programs.

    PubMed

    Chee, Tingyin T; Ryan, Andrew M; Wasfy, Jason H; Borden, William B

    2016-05-31

    The US healthcare system is rapidly moving toward rewarding value. Recent legislation, such as the Affordable Care Act and the Medicare Access and CHIP Reauthorization Act, solidified the role of value-based payment in Medicare. Many private insurers are following Medicare's lead. Much of the policy attention has been on programs such as accountable care organizations and bundled payments; yet, value-based purchasing (VBP) or pay-for-performance, defined as providers being paid fee-for-service with payment adjustments up or down based on value metrics, remains a core element of value payment in Medicare Access and CHIP Reauthorization Act and will likely remain so for the foreseeable future. This review article summarizes the current state of VBP programs and provides analysis of the strengths, weaknesses, and opportunities for the future. Multiple inpatient and outpatient VBP programs have been implemented and evaluated; the impact of those programs has been marginal. Opportunities to enhance the performance of VBP programs include improving the quality measurement science, strengthening both the size and design of incentives, reducing health disparities, establishing broad outcome measurement, choosing appropriate comparison targets, and determining the optimal role of VBP relative to alternative payment models. VBP programs will play a significant role in healthcare delivery for years to come, and they serve as an opportunity for providers to build the infrastructure needed for value-oriented care. © 2016 American Heart Association, Inc.

  14. Current State of Value-Based Purchasing Programs

    PubMed Central

    Chee, Tingyin T.; Ryan, Andrew M.; Wasfy, Jason H.; Borden, William B.

    2016-01-01

    The United States healthcare system is rapidly moving toward rewarding value. Recent legislation, such as the Affordable Care Act and the Medicare Access and CHIP Reauthorization Act (MACRA), solidified the role of value-based payment in Medicare. Many private insurers are following Medicare’s lead. Much of the policy attention has been on programs such as accountable care organizations and bundled payments; yet, value-based purchasing (VBP) or pay-for-performance, defined as providers being paid fee-for-service with payment adjustments up or down based on value metrics, remains a core element of value payment in MACRA and will likely remain so for the foreseeable future. This review article summarizes the current state of VBP programs and provides analysis of the strengths, weaknesses, and opportunities for the future. Multiple inpatient and outpatient VBP programs have been implemented and evaluated, with the impact of those programs being marginal. Opportunities to enhance the performance of VBP programs include improving the quality measurement science, strengthening both the size and design of incentives, reducing health disparities, establishing broad outcome measurement, choosing appropriate comparison targets, and determining the optimal role of VBP relative to alternative payment models. VBP programs will play a significant role in healthcare delivery for years to come, and they serve as an opportunity for providers to build the infrastructure needed for value-oriented care. PMID:27245648

  15. Do dichromats see colours in this way? Assessing simulation tools without colorimetric measurements.

    PubMed

    Lillo Jover, Julio A; Álvaro Llorente, Leticia; Moreira Villegas, Humberto; Melnikova, Anna

    2016-11-01

    Simulcheck evaluates Colour Simulation Tools (CSTs), which transform colours to mimic those seen by colour-vision deficients. Two CSTs (Variantor and Coblis) were used to determine whether the standard Simulcheck version (direct measurement based, DMB) can be substituted by another (RGB values based) that does not require sophisticated measurement instruments. Ten normal trichromats performed the two psychophysical tasks included in the Simulcheck method. The Pseudoachromatic Stimuli Identification task provided the h_uv (hue angle) values of the pseudoachromatic stimuli: colours seen as red or green by normal trichromats but as grey by colour-deficient people. The Minimum Achromatic Contrast task was used to compute the L_R (relative luminance) values of the pseudoachromatic stimuli. The Simulcheck DMB version showed that Variantor was accurate in simulating protanopia, but neither Variantor nor Coblis was accurate in simulating deuteranopia. The Simulcheck RGB version provided accurate h_uv values, so this variable can be adequately estimated when a colorimeter (an expensive and uncommon instrument) is not available. In contrast, the inaccuracy of the L_R estimations provided by the Simulcheck RGB version makes it advisable to compute this variable from measurements performed with a photometer, a cheap and readily available instrument.
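
    The h_uv quantity can be computed from CIE 1976 u'v' chromaticity coordinates relative to the white point: since the CIELUV u* and v* coordinates are both proportional to 13L*, the hue angle reduces to an atan2 of chromaticity differences. A sketch (the coordinates in the test are illustrative, not Simulcheck data):

```python
import math

def hue_angle_uv(u, v, u_n, v_n):
    """CIE 1976 hue angle h_uv in degrees (0-360) of a stimulus at
    chromaticity (u, v) relative to the white point (u_n, v_n)."""
    angle = math.degrees(math.atan2(v - v_n, u - u_n))
    return angle % 360.0
```

    A stimulus displaced from the white point purely along +u' has h_uv = 0 (reddish direction); one displaced purely along +v' has h_uv = 90.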

  16. Using the New Postacute Care Quality Measures to Demonstrate the Value of Occupational Therapy.

    PubMed

    Sandhu, Sharmila; Furniss, Jeremy; Metzler, Christina

    As the health care system continues to evolve toward one based on quality not quantity, demonstrating the value of occupational therapy has never been more important. Providing high-quality services, achieving optimal outcomes, and identifying and promoting occupational therapy's distinct value are the responsibilities of all practitioners. In relation to the Improving Medicare Post-Acute Care Transformation (IMPACT) Act of 2014, the Centers for Medicare and Medicaid Services (CMS) is implementing new functional items and related outcome performance measures across postacute care (PAC) settings. Practitioners can demonstrate the role and value of occupational therapy services through their participation in data collection and the interpretation of the resulting performance measures. In this column, we review the objectives of the IMPACT Act, introduce the new self-care and mobility items and outcome performance measures being implemented in PAC settings, and describe ways to use these new data to advocate for occupational therapy. We also discuss American Occupational Therapy Association initiatives to provide materials and guidance for occupational therapy practitioners to contribute to PAC data collection. Copyright © 2018 by the American Occupational Therapy Association, Inc.

  17. An advanced look at surgical performance under Medicare's hospital-inpatient value-based purchasing program: who is winning and who is losing?

    PubMed

    Dupree, James M; Neimeyer, Jennifer; McHugh, Megan

    2014-01-01

    The Centers for Medicare and Medicaid Services (CMS) is beginning to shift from paying providers based on volume to more explicitly rewarding quality of care. The hospital value-based purchasing (VBP) program is the first in a series of mandatory programs to financially reward and penalize US hospitals based on quality measure performance. Our objective was to identify the characteristics of hospitals that perform well (and those that perform poorly) on the surgical measures in CMS' hospital VBP program. Using 2008 to 2010 performance data from CMS' Hospital Compare website and the 2009 American Hospital Association annual survey, we examined surgical measure performance for all acute care general hospitals in the US. Outcomes were determined by a composite surgical performance score indicating the percentage of eligible surgical performance points that a hospital received. There were 3,030 hospitals included in our study. Composite surgical performance scores were 15.6% lower at public hospitals than at for-profit hospitals (p < 0.01). Additionally, there were significant differences in the routes by which hospitals achieved points, with smaller hospitals, for-profit hospitals, Magnet hospitals, and NSQIP hospitals all more likely to obtain points via the achievement route. The results of our study indicate that public hospitals perform worse on the surgical measures in the hospital VBP program. This study raises important questions about the impact that this new, mandatory program will have on public hospitals, which serve an important safety-net role and appear to be disadvantaged in the hospital VBP program. This issue should continue to be investigated as these mandatory quality programs are updated in future years. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Findings and Preliminary Recommendations from the Michigan State and Indiana University Research Study of Value-Added Models to Evaluate Teacher Performance

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.

    2013-01-01

    The push for accountability in public schooling has extended to the measurement of teacher performance, accelerated by federal efforts through Race to the Top. Currently, a large number of states and districts across the country are computing measures of teacher performance based on the standardized test scores of their students and using them in…

  19. The Value of Information: Approaches in Economics, Accounting, and Management Science.

    ERIC Educational Resources Information Center

    Repo, Aatto J.

    1989-01-01

    This review and analysis of research on the economics of information performed by economists, accounting researchers, and management scientists focuses on their approaches to describing and measuring the value of information. The discussion includes comparisons of research approaches based on cost effectiveness and on the value of information. (77…

  20. Operational experience with VAWT blades. [structural performance

    NASA Technical Reports Server (NTRS)

    Sullivan, W. N.

    1979-01-01

    The structural performance of 17 meter diameter wind turbine rotors is discussed. Test results for typical steady and vibratory stress measurements are summarized along with predicted values of stress based on a quasi-static finite element model.

  1. Quality measures and pediatric radiology: suggestions for the transition to value-based payment.

    PubMed

    Heller, Richard E; Coley, Brian D; Simoneaux, Stephen F; Podberesky, Daniel J; Hernanz-Schulman, Marta; Robertson, Richard L; Donnelly, Lane F

    2017-06-01

    Recent political and economic factors have contributed to a meaningful change in the way that quality in health care, and by extension value, are viewed. While quality is often evaluated on the basis of subjective criteria, pay-for-performance programs that link reimbursement to various measures of quality require use of objective and quantifiable measures. This evolution to value-based payment was accelerated by the 2015 passage of the Medicare Access and CHIP (Children's Health Insurance Program) Reauthorization Act (MACRA). While many of the drivers of these changes are rooted in federal policy and programs such as Medicare and aimed at adult patients, the practice of pediatrics and pediatric radiology will be increasingly impacted. This article addresses issues related to the use of quantitative measures to evaluate the quality of services provided by the pediatric radiology department or sub-specialty section, particularly as seen from the viewpoint of a payer that may be considering ways to link payment to performance. The paper concludes by suggesting a metric categorization strategy to frame future work on the subject.

  2. Identification of the numerical model of FEM in reference to measurements in situ

    NASA Astrophysics Data System (ADS)

    Jukowski, Michał; Bec, Jarosław; Błazik-Borowa, Ewa

    2018-01-01

    The paper deals with the verification of various numerical models in relation to the pilot-phase measurements of a rail bridge subjected to dynamic loading. Three types of FEM models were elaborated for this purpose. Static, modal and dynamic analyses were performed. The study consisted of measuring the acceleration values of the structural components of the object at the moment of the train passing. Based on this, FFT analysis was performed, the main natural frequencies of the bridge were determined, the structural damping ratio and the dynamic amplification factor (DAF) were calculated and compared with the standard values. Calculations were made using Autodesk Simulation Multiphysics (Algor).
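
    The frequency-extraction step (FFT of the measured accelerations to find the main natural frequencies) can be sketched as follows. This is a minimal illustration; the windowing, detrending, and peak refinement used in practice are omitted, and the signal in the test is synthetic.

```python
import numpy as np

def dominant_frequency(acceleration, sample_rate):
    """Estimate the dominant frequency (Hz) of an acceleration record
    from the peak of its FFT magnitude spectrum."""
    acceleration = np.asarray(acceleration, dtype=float)
    # Remove the mean so the DC bin does not mask the structural response.
    spectrum = np.abs(np.fft.rfft(acceleration - acceleration.mean()))
    freqs = np.fft.rfftfreq(len(acceleration), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]
```

    Applied to each structural component's record at the moment of train passage, the spectral peaks give candidate natural frequencies to compare against the FEM modal results.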

  3. 40 CFR 1065.205 - Performance specifications for measurement instruments.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are all determined with the same collected data, as described in § 1065.305, and based on absolute... meters to allow noise to be measured at the lowest calibrated value instead of zero flow rate. [79 FR...

  4. Evaluation and comparison of statistical methods for early temporal detection of outbreaks: A simulation-based study

    PubMed Central

    Le Strat, Yann

    2017-01-01

    The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
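
    The confusion-matrix measures compared across algorithms can be computed as below. This is a generic sketch of the standard definitions, not the R surveillance package code, and the counts in the test are invented.

```python
def detection_metrics(tp, fp, tn, fn):
    """Performance measures for an outbreak detection algorithm from
    confusion-matrix counts: sensitivity, specificity, positive and
    negative predictive values, and the F1-measure."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1}
```

    The trade-off the abstract reports (high specificity with low sensitivity, or the reverse) is visible directly in these quantities when the alarm threshold of a method is varied.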

  5. SU-F-T-305: Clinical Effects of Dosimetric Leaf Gap (DLG) Values Between Matched Varian Truebeam (TB) Linacs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mihailidis, D; Mallah, J; Zhu, D

    2016-06-15

    Purpose: The dosimetric leaf gap (DLG) is an important parameter to be measured for dynamic beam delivery on modern linacs, such as the Varian Truebeam (TB). The clinical effects of DLG values on the IMRT and/or VMAT commissioning of two "matched" TB linacs are presented. Methods and Materials: The DLG values on two TB linacs were measured for all energy modalities (filtered and FFF modes) as part of dynamic delivery mode commissioning (IMRT and/or VMAT). After the standard beam data were modeled in the Eclipse treatment planning system (TPS) and validated, IMRT validation was performed based on the TG119 benchmark [1], the IROC Head-Neck (H&N) phantom, and a sample of clinical cases, all measured on both linacs. Although a single set of data was entered in the TPS, a noticeable difference was observed between the DLG values of the two linacs. The TG119, IROC phantom, and selected patient plans were furnished with the DLG value of TB1 for both linacs, and delivery was performed on both TB linacs for comparison. Results: The DLG values of TB1 were first used for both linacs to perform the testing comparisons. The QA comparison of TG119 plans revealed a strong dependence of the results on the DLG value used for the linac for all energy modalities studied, especially when moving from 3%/3mm to 2%/2mm γ-analysis. Conclusion: DLG values have a definite influence on dynamic dose delivery, an influence that increases with plan complexity. We recommend that the measured DLG values be assigned to each of the "matched" linacs, even if a single set of beam data describes multiple linacs. The user should perform a detailed test of the dynamic delivery of each linac based on end-to-end benchmark suites such as TG119 and the IROC phantoms. [1] Ezzell G., et al., "IMRT commissioning: Multiple institution planning and dosimetry comparisons, a report from AAPM Task Group 119." Med. Phys. 36:5359-5373 (2009). Partly supported by CAMC Cancer Center and Alliance Oncology.
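
    The γ-analysis referred to above compares an evaluated dose distribution against a reference with joint dose-difference and distance-to-agreement tolerances (e.g. 3%/3mm). Below is a minimal 1-D sketch of a global gamma with dose tolerance taken relative to the reference maximum; clinical tools operate on 2-D/3-D distributions with interpolation, which this omits.

```python
import math

def gamma_index(ref_positions, ref_doses, eval_positions, eval_doses,
                dose_tol=0.03, dist_tol=3.0):
    """1-D gamma values at each reference point: the minimum combined
    dose/distance disagreement over all evaluated points, where a value
    <= 1 means the point passes the (dose_tol, dist_tol) criterion.
    Positions are in mm; dose_tol is a fraction of the reference maximum."""
    d_max = max(ref_doses)
    gammas = []
    for rp, rd in zip(ref_positions, ref_doses):
        g = min(math.sqrt(((ep - rp) / dist_tol) ** 2 +
                          ((ed - rd) / (dose_tol * d_max)) ** 2)
                for ep, ed in zip(eval_positions, eval_doses))
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of reference points with gamma <= 1."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

    Tightening the criterion from 3%/3mm to 2%/2mm shrinks both denominators, which raises every gamma value and lowers the pass rate; that is why the comparison above becomes more sensitive to the DLG value at 2%/2mm.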

  6. Mathematical values in the processing of Chinese numeral classifiers and measure words.

    PubMed

    Her, One-Soon; Chen, Ying-Chun; Yen, Nai-Shing

    2017-01-01

    A numeral classifier is required between a numeral and a noun in Chinese, and it comes in two varieties: the sortal classifier (C) and the measural classifier (M), also known as 'classifier' and 'measure word', respectively. Cs categorize objects based on semantic attributes, and both Cs and Ms denote quantity in terms of mathematical values. The aim of this study was to conduct a psycholinguistic experiment examining whether participants process C/Ms based on their mathematical values, using a semantic distance comparison task in which participants judged which of two C/M phrases was semantically closer to a target C/M. Results showed that participants performed more accurately and faster for C/Ms with fixed values than for those with variable values. These results demonstrate that mathematical values play an important role in the processing of C/Ms. This study may thus shed light on the influence of the linguistic system of C/Ms on magnitude cognition.

  7. Developing a GPS-based truck freight performance measure platform.

    DOT National Transportation Integrated Search

    2010-05-01

    Although trucks move the largest volume and value of goods in urban areas, relatively little is known about their travel patterns and how the roadway network performs for trucks. Global positioning systems (GPS) used by trucking companies to mana...

  8. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes

    PubMed Central

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    Objective To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). Materials and methods In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. Results A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. Conclusions The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. PMID:25324556
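    The allocation rules described above (labor by hours in the unit, medications by utilization, radiology by minutes) all amount to spreading a cost pool over encounters in proportion to a usage measure. A minimal sketch, with hypothetical encounter names and amounts:

```python
# Proportional cost allocation sketch (hypothetical data, not VDO's actual
# schema): a cost pool is split across encounters by their share of usage.
def allocate(total_cost, usage_by_encounter):
    total = sum(usage_by_encounter.values())
    return {enc: total_cost * u / total for enc, u in usage_by_encounter.items()}

hours = {"enc_1": 10.0, "enc_2": 30.0}  # hours each encounter spent in the unit
labor = allocate(12000.0, hours)        # unit labor cost pool for the period
```

The same function covers the other rules by swapping the usage measure, e.g. medication doses dispensed or radiology study minutes per encounter.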

  9. Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. 
We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
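    The Welch-Satterthwaite formula discussed above combines the standard uncertainties u_i and their degrees of freedom ν_i into effective degrees of freedom ν_eff = u_c⁴ / Σ(u_i⁴/ν_i), with u_c² = Σ u_i². A minimal sketch with illustrative numbers:

```python
def welch_satterthwaite(u, dof):
    """Effective degrees of freedom nu_eff = u_c^4 / sum(u_i^4 / nu_i),
    where u_c^2 = sum(u_i^2) is the combined variance (ISO-GUM)."""
    uc2 = sum(ui ** 2 for ui in u)
    return uc2 ** 2 / sum(ui ** 4 / vi for ui, vi in zip(u, dof))

# Two equal Type A contributions, each evaluated from 6 observations (5 dof):
nu_eff = welch_satterthwaite([0.5, 0.5], [5, 5])  # -> 10.0
```

The proposed Bayesian alternative sidesteps this computation entirely by using an approximate normal distribution for the value of the measurand.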

  10. Executive Function Subcomponents and their Relations to Everyday Functioning in Healthy Older Adults

    PubMed Central

    McAlister, Courtney; Schmitter-Edgecombe, Maureen

    2016-01-01

    Everyday functioning and its executive functioning cognitive correlates (i.e., switching, inhibition, and updating) were investigated in healthy older adults (HOAs) using multiple methods of functional status. In addition to whether computerized experimental tasks would better dissociate these subcomponents than neuropsychological measures of executive functioning, we were also interested in the contributions of both experimental and neuropsychological measures of executive function subcomponents to functional abilities. Seventy HOAs (45 young-old and 25 old-old) and 70 younger adults completed executive function and neuropsychological tests. In addition to self- and informant questionnaires of functional abilities, HOAs completed two performance-based measures. An aging effect was found on all executive function measures. Old-old older adults and their informants did not report more functional difficulties but demonstrated more difficulties on performance-based measures relative to young-old participants. For the HOAs, after controlling for age and education, the neuropsychological measures of executive functioning, but not experimental measures, explained a significant amount of variance in the informant-report and both performance-based measures. Updating measures differentially predicted performance-based measures, while switching was important for questionnaire and performance-based measures. The contribution of executive functioning to functional status when measured with experimental measures specifically designed to isolate the executive subcomponent was not as strong as hypothesized. Further research examining the value of isolating executive function subcomponents in neuropsychological assessment and the prediction of functional abilities in older adults is warranted. PMID:27206842

  11. Executive function subcomponents and their relations to everyday functioning in healthy older adults.

    PubMed

    McAlister, Courtney; Schmitter-Edgecombe, Maureen

    2016-10-01

    Everyday functioning and its executive functioning cognitive correlates (i.e., switching, inhibition, and updating) were investigated in healthy older adults (HOAs) using multiple methods of functional status. In addition to whether computerized experimental tasks would better dissociate these subcomponents than neuropsychological measures of executive functioning, we were also interested in the contributions of both experimental and neuropsychological measures of executive function subcomponents to functional abilities. Seventy HOAs (45 young-old and 25 old-old) and 70 younger adults completed executive function and neuropsychological tests. In addition to self- and informant questionnaires of functional abilities, HOAs completed two performance-based measures. An aging effect was found on all executive function measures. Old-old older adults and their informants did not report more functional difficulties but demonstrated more difficulties on performance-based measures than did young-old participants. For the HOAs, after controlling for age and education, the neuropsychological measures of executive functioning, but not experimental measures, explained a significant amount of variance in the informant-report and both performance-based measures. Updating measures differentially predicted performance-based measures, while switching was important for questionnaire and performance-based measures. The contribution of executive functioning to functional status when measured with experimental measures specifically designed to isolate the executive subcomponent was not as strong as hypothesized. Further research examining the value of isolating executive function subcomponents in neuropsychological assessment and the prediction of functional abilities in older adults is warranted.

  12. Measurements of the Received Signal Level and Service Coverage Area at the IEEE 802.11 Access Point in the Building

    NASA Astrophysics Data System (ADS)

    Gunantara, N.; Sudiarta, P. K.; Prasetya, AAN A. I.; Dharma, A.; Gde Antara, I. N.

    2018-04-01

    An access point (AP) is part of a wireless local area network (WLAN) and communicates with users/clients using WiFi. The AP is used to transmit data to and receive data from users/clients, and its ability to serve them depends on many factors, particularly when the AP operates inside a building. In this study, an AP was installed at each of two points inside the building, and its received signal level (RSL) and service coverage area were measured. One AP's performance was measured at 26 measurement points and the other's at 20; while one AP was being measured, the other was switched off. Based on the measurement results, the highest received signal level was about -47 dBm at a distance of 3.2 m, while the lowest was about -79 dBm at 9.21 m, where the path was obstructed by two walls. As for the service coverage area, the quality of service degrades in areas far from the AP because the transmitted signal is weakened by distance and by wall losses.
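    The distance and wall-attenuation effects described above are commonly summarized with a log-distance path-loss model. The reference level, path-loss exponent, and per-wall loss below are hypothetical placeholders, not values fitted from this study:

```python
import math

def rsl_dbm(p0_dbm, d_m, d0_m=1.0, n=3.0, wall_loss_db=0.0):
    """Log-distance sketch: RSL falls by 10*n*log10(d/d0) from the level p0
    measured at reference distance d0, minus wall penetration losses."""
    return p0_dbm - 10.0 * n * math.log10(d_m / d0_m) - wall_loss_db

near = rsl_dbm(-32.0, 3.2)                      # unobstructed path
far = rsl_dbm(-32.0, 9.21, wall_loss_db=12.0)   # path crossing two walls
```

With indoor exponents typically in the 2-4 range, the model reproduces the qualitative pattern in the abstract: level falls with distance and drops sharply behind walls.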

  13. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and EVT method outperform many GARCH-type models at high-risk levels.
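    For contrast with the model-based VaR above, the plain historical-simulation baseline (no MFV or EVT machinery) can be sketched in a few lines; the return series here is synthetic, for illustration only:

```python
import numpy as np

def historical_var(returns, level=0.99):
    """Historical-simulation VaR: the loss threshold that daily returns
    fall below with probability (1 - level), reported as a positive number."""
    return -np.quantile(returns, 1.0 - level)

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.02, size=1000)  # synthetic daily returns
var99 = historical_var(returns, 0.99)
var95 = historical_var(returns, 0.95)
```

Parametric approaches like MFV-EVT aim to improve on this baseline precisely in the far tail (high confidence levels), where empirical quantiles rest on very few observations.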

  14. Sensitivity analysis of bridge health index to element failure and element conditions.

    DOT National Transportation Integrated Search

    2009-11-01

    Bridge Health Index (BHI) is a bridge performance measure based on the condition of the bridge elements. It is computed as the ratio of the remaining value of the bridge structure to the initial value of the structure. Since it is expressed as a perc...
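    The BHI definition in this abstract (remaining value over initial value, as a percentage) reduces to a condition-weighted sum over elements. The element values and condition fractions below are hypothetical:

```python
def bridge_health_index(element_values, element_conditions):
    """BHI sketch: remaining element value relative to initial value, as a
    percentage. element_conditions holds the fraction of each element's
    value that remains (1.0 = like new, 0.0 = fully deteriorated)."""
    initial = sum(element_values)
    remaining = sum(v * c for v, c in zip(element_values, element_conditions))
    return 100.0 * remaining / initial

# Two elements: one like new, one at half its value.
bhi = bridge_health_index([100.0, 50.0], [1.0, 0.5])
```

This structure is what makes the sensitivity analysis in the report natural: perturbing a single element's condition shifts the BHI in proportion to that element's share of the initial value.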

  15. Independent Analysis of Real-Time, Measured Performance Data From Microcogenerative Fuel Cell Systems Installed in Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, Heather E.; Colella, Whitney G.

    2015-06-01

    Pacific Northwest National Laboratory (PNNL) is working with industry to independently monitor up to 15 distinct 5 kW-electric (kWe) combined heat and power (CHP) high temperature (HT) proton exchange membrane (PEM) fuel cell systems (FCSs) installed in light commercial buildings. This research paper discusses an evaluation of the first six months of measured performance data acquired at a 1 s sampling rate from real-time monitoring equipment attached to the FCSs at building sites. Engineering performance parameters are independently evaluated. Based on an analysis of the first few months of measured operating data, FCS performance is consistent with manufacturer-stated performance. Initial data indicate that the FCSs have relatively stable performance and a long-term average production of about 4.57 kWe of power. This value is consistent with, but slightly below, the manufacturer's stated rated electric power output of 5 kWe. The measured system net electric efficiency has averaged 33.7%, based on the higher heating value (HHV) of natural gas fuel. This value, also, is consistent with, but slightly below, the manufacturer's stated rated electric efficiency of 36%. The FCSs provide low-grade hot water to the building at a measured average temperature of about 48.4 degrees C, lower than the manufacturer's stated maximum hot water delivery temperature of 65 degrees C. The uptime of the systems is also evaluated. System availability can be defined as the quotient of total operating time and time since commissioning. The average values for system availability vary between 96.1 and 97.3%, depending on the FCS evaluated in the field. Performance at rated value for electrical efficiency (PRVeff) can be defined as the quotient of the system time operating at or above the rated electric efficiency and the time since commissioning. The PRVeff varies between 5.6% and 31.6%, depending on the FCS field unit evaluated. 
Performance at rated value for electrical power (PRVp) can be defined as the quotient of the system time operating at or above the rated electric power and the time since commissioning. PRVp varies between 6.5% and 16.2%. Performance at rated value for electrical efficiency and power (PRVt) can be defined as the quotient of the system time operating at or above both the rated electric efficiency and the electric power output compared to the time since commissioning. PRVt varies between 0.2% and 1.4%. Optimization to determine the manufacturer rating required to achieve PRVt greater than 80% has been performed based on the collected data. For example, for FCS Unit 130 to achieve a PRVt of 95%, it would have to be down-rated to an electrical power output of 3.2 kWe and an electrical efficiency of 29%. The use of PRV as an assessment metric for FCSs has been developed and reported for the first time in this paper. For FCS Unit 130, a maximum decline in electric power output of approximately 18% was observed over a 500 h period in Jan. 2012.
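    The availability and PRV metrics defined above are all quotients of an operating-time count over the time since commissioning. A minimal sketch with hypothetical hour counts:

```python
def availability(hours_operating, hours_since_commissioning):
    """System availability: total operating time over time since
    commissioning, as a percentage."""
    return 100.0 * hours_operating / hours_since_commissioning

def prv(hours_at_or_above_rating, hours_since_commissioning):
    """Performance at rated value: fraction of time since commissioning the
    unit spent at or above the rating in question (electric power for PRVp,
    efficiency for PRVeff, or both for PRVt), as a percentage."""
    return 100.0 * hours_at_or_above_rating / hours_since_commissioning

# Hypothetical counts for one field unit over roughly six months:
avail = availability(4200.0, 4380.0)
prv_t = prv(44.0, 4380.0)   # hours meeting both power and efficiency ratings
```

Because PRVt requires both ratings to be met simultaneously, it is bounded above by each of PRVp and PRVeff, which matches the much smaller PRVt range reported in the abstract.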

  16. Comparison of Hybrid K-Edge Densitometer (HKED) Performance Operating with the Canberra Lynx MCA and the Canberra ICB-NIM Electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McElroy, Robert Dennis

    From 1991 until 2008, Canberra Hybrid K-Edge Densitometer (HKED) systems were provided with ICB-NIM (Integrated Control Bus - Nuclear Instrument Module) acquisition electronics. Newer electronics modules, such as the Lynx, were not supported under the VMS-based operating system. The Lynx module was provided as the standard acquisition electronics following the release of the Windows-based CHKED software. This report compares the electronics dead-time, gain shifts, detector resolution, and measurement performance of the HKED system operated with the two types of acquisition modules. The comparison was performed using measurements obtained with the ORNL HKED system. The original intent of this study was to take advantage of both the timing and energy outputs from the HPGe detector to acquire data with both sets of electronics in parallel. Although this approach has been applied successfully with other systems, in this case we found the timing output produced a significant amount of noise, such that a comparison between the electronics would be invalid, so the comparative measurements were performed sequentially. The ICB-NIM data were acquired over the course of 12 months with 255 measurements, while the Lynx data were acquired over a period of 10 months with 75 measurements. To simplify the comparison, all data used in this study were acquired using the Canberra CHKED (V1.0) software package. The performance analysis was based primarily on the peak locations, peak widths, and concentration values reported by the CHKED software. The raw spectra from the XRF measurements were also examined to extract additional 109Cd peak location and width data for the hybrid measurements (the standard hybrid report template does not report these values).

  17. On the (In)Consistency of Citizen and Municipal Level Indicators of Social Capital and Local Government Performance

    ERIC Educational Resources Information Center

    Kampen, Jarl K.

    2010-01-01

    We study the empirical consistency of survey-based (micro-level) indicators of social capital and local government performance on the one hand, and municipality-based (aggregate-level) measures of these two concepts on the other. Knowledge about the behavior of these indicators is helpful for evaluating the value of studies carried out in isolated…

  18. Recurrence quantity analysis based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bian, Songhan; Shang, Pengjian

    2017-05-01

    The recurrence plot (RP) has become a powerful tool in many different sciences over the last three decades. To quantify the complexity and structure of an RP, recurrence quantification analysis (RQA) has been developed based on measures of recurrence density, diagonal lines, vertical lines, and horizontal lines. This paper studies the RP based on singular value decomposition, which offers a new perspective on RP analysis. The principal singular value proportion (PSVP) is proposed as a new RQA measure: a larger PSVP indicates higher complexity of a system, while a smaller PSVP reflects a regular and stable system. Considering the advantage of this method in detecting the complexity and periodicity of systems, several simulated and real-data experiments are chosen to examine the performance of this new RQA measure.
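    The proposed PSVP is the share of the largest singular value in the singular-value spectrum of the recurrence matrix. A minimal sketch, with the recurrence threshold and test signal chosen arbitrarily:

```python
import numpy as np

def psvp(R):
    """Principal singular value proportion: the largest singular value of the
    recurrence matrix R divided by the sum of all its singular values."""
    s = np.linalg.svd(R, compute_uv=False)
    return s[0] / s.sum()

# Build a simple recurrence matrix from a 1D signal: R[i, j] = 1 when the
# states at times i and j are within a fixed threshold of each other.
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 100))
R = (np.abs(x[:, None] - x[None, :]) < 0.2).astype(float)
prop = psvp(R)
```

By construction the proportion lies in (0, 1]; a highly structured RP concentrates energy in few singular values, while a noisy RP spreads it out.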

  19. A Whole-Tumor Histogram Analysis of Apparent Diffusion Coefficient Maps for Differentiating Thymic Carcinoma from Lymphoma.

    PubMed

    Zhang, Wei; Zhou, Yue; Xu, Xiao-Quan; Kong, Ling-Yan; Xu, Hai; Yu, Tong-Fu; Shi, Hai-Bin; Feng, Qing

    2018-01-01

    To assess the performance of a whole-tumor histogram analysis of apparent diffusion coefficient (ADC) maps in differentiating thymic carcinoma from lymphoma, and compare it with that of a commonly used hot-spot region-of-interest (ROI)-based ADC measurement. Diffusion-weighted imaging data of 15 patients with thymic carcinoma and 13 patients with lymphoma were retrospectively collected and processed with a mono-exponential model. ADC measurements were performed using a histogram-based and a hot-spot-ROI-based approach. In the histogram-based approach, the following parameters were generated: mean ADC (ADCmean), median ADC (ADCmedian), 10th and 90th percentiles of ADC (ADC10 and ADC90), kurtosis, and skewness. The difference in ADCs between thymic carcinoma and lymphoma was compared using a t test. Receiver operating characteristic (ROC) analyses were conducted to determine and compare the differentiating performance of the ADCs. Lymphoma demonstrated significantly lower ADCmean, ADCmedian, ADC10, ADC90, and hot-spot-ROI-based mean ADC than thymic carcinoma (all p values < 0.05). No differences were found in kurtosis (p = 0.412) or skewness (p = 0.273). ADC10 demonstrated optimal differentiating performance (cut-off value, 0.403 × 10^-3 mm^2/s; area under the ROC curve [AUC], 0.977; sensitivity, 92.3%; specificity, 93.3%), followed by ADCmean, ADCmedian, ADC90, and the hot-spot-ROI-based mean ADC. The AUC of ADC10 was significantly higher than that of the hot-spot-ROI-based ADC (0.977 vs. 0.797, p = 0.036). Compared with the commonly used hot-spot-ROI-based ADC measurement, a histogram analysis of ADC maps can improve the differentiation between thymic carcinoma and lymphoma.
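    The whole-tumor histogram features used above (mean, median, 10th/90th percentile ADC, skewness, kurtosis) are straightforward to compute from the voxel values inside the tumor mask. A minimal sketch with made-up voxel values:

```python
import numpy as np

def adc_histogram_features(adc_voxels):
    """Whole-tumor histogram features: mean, median, 10th and 90th percentile
    ADC, plus skewness and excess kurtosis (Fisher convention)."""
    a = np.asarray(adc_voxels, dtype=float)
    mu, sd = a.mean(), a.std()
    return {
        "ADCmean": mu,
        "ADCmedian": np.median(a),
        "ADC10": np.percentile(a, 10),
        "ADC90": np.percentile(a, 90),
        "skewness": ((a - mu) ** 3).mean() / sd ** 3,
        "kurtosis": ((a - mu) ** 4).mean() / sd ** 4 - 3.0,
    }

feats = adc_histogram_features([1.0, 2.0, 3.0, 4.0, 5.0])  # illustrative values
```

A low percentile such as ADC10 emphasizes the most restricted-diffusion voxels, which is one plausible reason it outperformed the hot-spot ROI mean in this study.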

  20. Achieving Value in Primary Care: The Primary Care Value Model.

    PubMed

    Rollow, William; Cucchiara, Peter

    2016-03-01

    The patient-centered medical home (PCMH) model provides a compelling vision for primary care transformation, but studies of its impact have used insufficiently patient-centered metrics with inconsistent results. We propose a framework for defining patient-centered value and a new model for value-based primary care transformation: the primary care value model (PCVM). We advocate for use of patient-centered value when measuring the impact of primary care transformation, recognition, and performance-based payment; for financial support and research and development to better define primary care value-creating activities and their implementation; and for use of the model to support primary care organizations in transformation. © 2016 Annals of Family Medicine, Inc.

  1. Deodorants, value, and performance.

    PubMed

    Newcomer, L N

    1997-11-01

    For the health-care market, like the deodorant market, the message is clear: Add value or your product will not be competitive. For physicians of all specialties, the best way to add value is to measure and improve performance. Performance measurement is critical to improvement in health care. Without measurement, there can be no improvement in quality. Without improvement in quality, there is no added value. Oncologists can take at least two actions to add value for their health plans: (1) measure practice performance and demonstrate a quality improvement; and (2) become the personal-care physician for cancer patients.

  2. Performance-based tests versus behavioral ratings in the assessment of executive functioning in preschoolers: associations with ADHD symptoms and reading achievement.

    PubMed

    Miranda, Ana; Colomer, Carla; Mercader, Jessica; Fernández, M Inmaculada; Presentación, M Jesús

    2015-01-01

    The early assessment of the executive processes using ecologically valid instruments is essential for identifying deficits and planning actions to deal with possible adverse consequences. The present study has two different objectives. The first objective is to analyze the relationship between preschoolers' performance on tests of Working Memory and Inhibition and parents' and teachers' ratings of these executive functions (EFs) using the Behavior Rating Inventory of Executive Function (BRIEF). The second objective consists of studying the predictive value of the different EF measures (performance-based test and rating scales) on Inattention and Hyperactivity/Impulsivity behaviors and on indicators of word reading performance. The participants in the study were 209 children in the last year of preschool, their teachers and their families. Performance-based tests of Working Memory and Inhibition were administered, as well as word reading measures (accuracy and speed). The parents and teachers filled out rating scales of the EF and typical behaviors of attention deficit hyperactivity disorder (ADHD) symptomatology. Moderate correlation values were found between the different EF assessments procedures, although the results varied depending on the different domains. Metacognition Index from the BRIEF presented stronger correlations with verbal working memory tests than with inhibition tests. Both the rating scales and the performance-based tests were significant predictors of Inattention and Hyperactivity/Impulsivity behaviors and the reading achievement measures. However, the BRIEF explained a greater percentage of variance in the case of the ADHD symptomatology, while the performance-based tests explained reading achievement to a greater degree. The implications of the findings for research and clinical practice are discussed.

  3. Performance-based tests versus behavioral ratings in the assessment of executive functioning in preschoolers: associations with ADHD symptoms and reading achievement

    PubMed Central

    Miranda, Ana; Colomer, Carla; Mercader, Jessica; Fernández, M. Inmaculada; Presentación, M. Jesús

    2015-01-01

    The early assessment of the executive processes using ecologically valid instruments is essential for identifying deficits and planning actions to deal with possible adverse consequences. The present study has two different objectives. The first objective is to analyze the relationship between preschoolers’ performance on tests of Working Memory and Inhibition and parents’ and teachers’ ratings of these executive functions (EFs) using the Behavior Rating Inventory of Executive Function (BRIEF). The second objective consists of studying the predictive value of the different EF measures (performance-based test and rating scales) on Inattention and Hyperactivity/Impulsivity behaviors and on indicators of word reading performance. The participants in the study were 209 children in the last year of preschool, their teachers and their families. Performance-based tests of Working Memory and Inhibition were administered, as well as word reading measures (accuracy and speed). The parents and teachers filled out rating scales of the EF and typical behaviors of attention deficit hyperactivity disorder (ADHD) symptomatology. Moderate correlation values were found between the different EF assessments procedures, although the results varied depending on the different domains. Metacognition Index from the BRIEF presented stronger correlations with verbal working memory tests than with inhibition tests. Both the rating scales and the performance-based tests were significant predictors of Inattention and Hyperactivity/Impulsivity behaviors and the reading achievement measures. However, the BRIEF explained a greater percentage of variance in the case of the ADHD symptomatology, while the performance-based tests explained reading achievement to a greater degree. The implications of the findings for research and clinical practice are discussed. PMID:25972833

  4. Limitations of experiments performed in artificially made OECD standard soils for predicting cadmium, lead and zinc toxicity towards organisms living in natural soils.

    PubMed

    Sydow, Mateusz; Chrzanowski, Łukasz; Cedergreen, Nina; Owsianiak, Mikołaj

    2017-08-01

    Development of comparative toxicity potentials of cationic metals in soils, for applications in hazard ranking and toxic impact assessment, is currently hampered by the limited availability of experimental effect data. To compensate for this deficiency, data retrieved from experiments carried out in standardized artificial soils, like OECD soils, could potentially be tapped as a source of effect data. It is, however, unknown whether such data are applicable to natural soils, where the variability in pore-water concentrations of dissolved base cations is large and where mass-transfer limitations of metal uptake can occur. Here, free ion activity models (FIAMs) and empirical regression models (ERMs, with pH as a predictor) were derived via speciation from total-metal EC50 values (the concentration with effects in 50% of individuals) measured in artificial OECD soils for ecotoxicological endpoints of terrestrial earthworms, potworms, and springtails. The models were validated by predicting total-metal-based EC50 values using backward speciation on an independent set of natural soils with missing information about the ionic composition of pore water, as retrieved from a literature review. ERMs performed better than FIAMs. Pearson's r for log10-transformed total-metal-based EC50 values (ERM) ranged from 0.25 to 0.74, suggesting a general correlation between predicted and measured values. Yet, the root-mean-square error (RMSE) ranged from 0.16 to 0.87 and was either smaller than or comparable with the variability of measured EC50 values, suggesting modest performance. This modest performance was mainly due to the omission of pore-water concentrations of base cations during model development and validation, as verified by comparisons with predictions of published terrestrial biotic ligand models. 
Thus, the usefulness of data from artificial OECD soils for global-scale assessment of terrestrial ecotoxic impacts of Cd, Pb and Zn in soils is limited due to the relatively small variability of pore-water concentrations of dissolved base cations in OECD soils, preventing their inclusion in the development of predictive models. Our findings stress the importance of considering differences in the ionic composition of soil pore water when characterizing the terrestrial ecotoxicity of cationic metals in natural soils.
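    An ERM of the kind validated above can be sketched as an ordinary least-squares fit of log10(EC50) against soil pH. The four (pH, log10 EC50) pairs below are invented purely for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical ERM sketch: ordinary least-squares fit of log10(EC50) on soil pH.
pH = np.array([4.5, 5.5, 6.5, 7.5])
log_ec50 = np.array([1.2, 1.6, 2.1, 2.4])
slope, intercept = np.polyfit(pH, log_ec50, 1)
pred_at_ph6 = slope * 6.0 + intercept  # predicted log10(EC50) at pH 6.0
```

The study's point is that a single predictor like pH cannot capture variation in base-cation concentrations, which is exactly what a term-by-term biotic ligand model adds.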

  5. UAV Swarm Tactics: An Agent-Based Simulation and Markov Process Analysis

    DTIC Science & Technology

    2013-06-01

    CRN, Common Random Numbers; CSV, Comma Separated Values; DoE, Design of Experiment; GLM, Generalized Linear Model; HVT, High Value Target; JAR, Java ARchive; JMF, Java Media Framework; JRE, Java Runtime Environment; Mason, Multi-Agent Simulator Of Networks; MOE, Measure Of Effectiveness; MOP, Measures Of Performance. "…with every set several times, and to write a CSV file with the results. Rather than scripting the agent behavior deterministically, the agents should…"

  6. Anthropometric profile of elite acrobatic gymnasts and prediction of role performance.

    PubMed

    Taboada-Iglesias, Yaiza; Gutiérrez-Sánchez, Águeda; Vernetta Santana, Mercedes

    2016-04-01

    This study aimed to determine the anthropometric profile of acrobatic gymnasts, differentiating on the basis of their role. The sample consisted of 150 gymnasts (129 women and 21 men) from throughout Spain. The anthropometric measurements were taken according to the International Society for the Advancement of Kinanthropometry (ISAK) procedures. Morphological measurements, proportionality, and somatotype were analyzed in both groups. A comparative analysis between groups and a prediction model were used to analyze the specific profile of each role. All morphological measurements showed significant differences (P<0.05) between tops and bases, the latter presenting higher values. The endomorphy of the bases was higher than that of the tops, whose ectomorphy scores were higher. Bases have an endo-mesomorphic somatotype and tops a balanced mesomorphic one; there are no differences in mesomorphy between tops and bases. BMI was significantly higher in the bases (BMI=20.28 kg/m2). Proportionality differences between the roles were found. Both roles presented negative values for almost all variables studied, except for the trochlear condyle of the humerus, the bicondyle of the femur and the wrist bistyloid breadth in tops, and the wrist bistyloid breadth, relaxed upper arm girth and maximum calf girth in bases. The best prediction model included thigh girth as the covariate best explaining role performance. There are differences between the two roles, bases being gymnasts of larger size than tops; however, they present no differences in the muscular component, as might be expected.

  7. High-Performance Sensors Based on Resistance Fluctuations of Single-Layer-Graphene Transistors.

    PubMed

    Amin, Kazi Rafsanjani; Bid, Aveek

    2015-09-09

    One of the most interesting predicted applications of graphene-monolayer-based devices is as high-quality sensors. In this article, we demonstrate, through systematic experiments, a chemical vapor sensor based on the measurement of low-frequency resistance fluctuations of single-layer-graphene field-effect-transistor devices. The sensor has extremely high sensitivity, very high specificity, high fidelity, and fast response times. The performance of the device using this scheme of measurement (which uses resistance fluctuations as the detection parameter) is more than 2 orders of magnitude better than that of a detection scheme in which changes in the average value of the resistance are monitored. We propose a number-density-fluctuation-based model to explain the superior characteristics of the noise-measurement-based detection scheme presented in this article.
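
    A toy numerical sketch of the abstract's central point, namely that a fluctuation (noise) detection parameter can respond far more strongly than the mean resistance. The telegraph-noise model and every number below are illustrative assumptions, not data from the article:

```python
import random
import statistics

rng = random.Random(7)  # fixed seed so the sketch is reproducible

def resistance_trace(mean_r, n=2000, telegraph_amp=0.0):
    """Toy trace: Gaussian background noise plus optional two-level
    (telegraph) switching meant to mimic adsorption events."""
    state, out = 1.0, []
    for _ in range(n):
        if rng.random() < 0.02:  # occasional level switch
            state = -state
        out.append(mean_r + rng.gauss(0.0, 0.05) + telegraph_amp * state)
    return out

clean = resistance_trace(100.0)
exposed = resistance_trace(100.3, telegraph_amp=1.0)  # tiny mean shift

# Mean-based parameter: relative change in average resistance.
mean_shift = abs(statistics.fmean(exposed) - statistics.fmean(clean)) / 100.0
# Noise-based parameter: ratio of resistance-fluctuation power.
noise_ratio = statistics.pvariance(exposed) / statistics.pvariance(clean)
print(mean_shift, noise_ratio)
```

In this toy model the fluctuation power changes by orders of magnitude while the mean barely moves, illustrating why a noise-based detection parameter can be the more sensitive readout.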

  8. Developing physician pay arrangements: the cash and care equation.

    PubMed

    Levitch, J H

    1998-11-01

    Developing physician compensation packages that help a healthcare organization meet its business objectives while satisfying physician pay expectations requires new ways of linking pay to physician performance. Such compensation arrangements specifically should include pay tied to defined performance standards, compensation linked to group performance, performance incentives based on realistic, achievable goals, work performance measured by common criteria, and similar pay ensured for similar work. Final pay arrangements also should include items that are sometimes overlooked, such as fully delineated job responsibilities, performance measures aligned correctly with performance areas, and the value of benefits considered in the cash compensation levels.

  9. Magnetic Resonance Imaging More Accurately Classifies Steatosis and Fibrosis in Patients With Nonalcoholic Fatty Liver Disease Than Transient Elastography.

    PubMed

    Imajo, Kento; Kessoku, Takaomi; Honda, Yasushi; Tomeno, Wataru; Ogawa, Yuji; Mawatari, Hironori; Fujita, Koji; Yoneda, Masato; Taguri, Masataka; Hyogo, Hideyuki; Sumida, Yoshio; Ono, Masafumi; Eguchi, Yuichiro; Inoue, Tomio; Yamanaka, Takeharu; Wada, Koichiro; Saito, Satoru; Nakajima, Atsushi

    2016-03-01

    Noninvasive methods have been evaluated for the assessment of liver fibrosis and steatosis in patients with nonalcoholic fatty liver disease (NAFLD). We compared the ability of transient elastography (TE) with the M-probe and magnetic resonance elastography (MRE) to assess liver fibrosis. Findings from magnetic resonance imaging (MRI)-based proton density fat fraction (PDFF) measurements were compared with those from TE-based controlled attenuation parameter (CAP) measurements to assess steatosis. We performed a cross-sectional study of 142 patients with NAFLD (identified by liver biopsy; mean body mass index, 28.1 kg/m(2)) in Japan from July 2013 through April 2015. Our study also included 10 comparable subjects without NAFLD (controls). All study subjects were evaluated by TE (including CAP measurements) and by MRI using the MRE and PDFF techniques. TE identified patients with fibrosis stage ≥2 with an area under the receiver operating characteristic (AUROC) curve value of 0.82 (95% confidence interval [CI]: 0.74-0.89), whereas MRE identified these patients with an AUROC curve value of 0.91 (95% CI: 0.86-0.96; P = .001). TE-based CAP measurements identified patients with hepatic steatosis grade ≥2 with an AUROC curve value of 0.73 (95% CI: 0.64-0.81), and PDFF methods identified them with an AUROC curve value of 0.90 (95% CI: 0.82-0.97; P < .001). Measurement of serum keratin 18 fragments or alanine aminotransferase did not add value to TE or MRI for identifying nonalcoholic steatohepatitis. MRE and PDFF methods have higher diagnostic performance in noninvasive detection of liver fibrosis and steatosis in patients with NAFLD than TE and CAP methods. MRI-based noninvasive assessment of liver fibrosis and steatosis is a potential alternative to liver biopsy in clinical practice. UMIN Clinical Trials Registry No. UMIN000012757. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
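
    The comparisons above hinge on the area under the ROC curve, which has a simple rank interpretation that can be computed directly. The stiffness readings below are hypothetical, chosen only to illustrate the calculation (they are not data from the study):

```python
def auroc(scores_pos, scores_neg):
    """AUROC = probability that a random positive case outranks a random
    negative one (ties count one half): the Mann-Whitney statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical stiffness readings: fibrosis stage >= 2 vs stage < 2.
diseased = [4.8, 5.6, 6.9, 8.1, 9.4]
healthy = [2.1, 2.9, 3.4, 4.9, 5.1]
print(auroc(diseased, healthy))  # 0.92
```

An AUROC of 0.5 means the measurement cannot separate the two groups at all, and 1.0 means perfect separation, which is why 0.90 (PDFF) beats 0.73 (CAP) decisively.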

  10. [The influence of intellectual capital in performance evaluation: a case-study in the hospital sector].

    PubMed

    Bonacim, Carlos Alberto Grespan; Araújo, Adriana Maria Procópio de

    2010-06-01

    This paper contributes to public institutions by adapting a performance evaluation tool originally designed for private companies. The objective is to demonstrate how the impact of an educational activity might be measured in the economic value added to society by a public university hospital. Apart from the introductory and methodological aspects and the final remarks, the paper is divided into four parts. First, the hospital sector is described, specifically in the context of public university hospitals. Then the definition, nature, and measurement of intellectual capital are presented, followed by a discussion of the main economic performance evaluation models. Finally, an adapted model is presented under the value-based management approach, considering adjustments to the return and the respective investment measures and showing the impact of intellectual capital management and educational activity on the economic results of these institutions. The study was based on bibliographical research, using a comparative method in the descriptive modality. Finally, the paper highlights the importance of accountability to society regarding the use of public resources and how this study can contribute to it.

  11. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes.

    PubMed

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.
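
    A minimal sketch of the time-based allocation rule the framework describes for unit labor costs, spreading a cost pool over encounters in proportion to patient-hours (the encounter IDs and dollar figures are hypothetical):

```python
def allocate_labor(unit_labor_cost, hours_by_encounter):
    """Spread a unit's labor cost over encounters by patient-hours."""
    total_hours = sum(hours_by_encounter.values())
    return {enc: unit_labor_cost * h / total_hours
            for enc, h in hours_by_encounter.items()}

# Hypothetical month: $6,000 of unit labor across three encounters.
hours = {"enc-001": 12.0, "enc-002": 36.0, "enc-003": 12.0}
costs = allocate_labor(6000.0, hours)
print(costs)  # enc-002 used 60% of the hours, so it carries $3,600
```

The same proportional pattern applies to the other drivers the abstract mentions: medication costs follow utilization, and radiology costs follow study minutes.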

  12. The Type of Culture at a High Performance Schools and Low Performance School in the State of Kedah

    ERIC Educational Resources Information Center

    Daud, Yaakob; Raman, Arumugam; Don, Yahya; O. F., Mohd Sofian; Hussin, Fauzi

    2015-01-01

    This research aims to identify the type of culture at a High Performance School (HPS) and Low Performance School (LPS) in the state of Kedah. The research instrument used to measure the type of organizational culture was adapted from Organizational Culture Assessment Instrument (Cameron & Quinn, 2006) based on Competing Values Framework Quinn…

  13. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery.

    PubMed

    Garas, George; Cingolani, Isabella; Panzarasa, Pietro; Darzi, Ara; Athanasiou, Thanos

    2017-01-01

    Existing surgical innovation frameworks share a unifying limitation: their qualitative nature. A rigorous approach to measuring surgical innovation is needed, one that extends beyond simple publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described; it illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel, quantitative network-based framework for modeling and visualizing innovation diffusion cascades in surgery and for measuring the virality and value of innovations. Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974-2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real-world big data (National Inpatient Sample, NIS®). Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) correlated closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), and 0.624 (p = 0.05), respectively), which in turn matches the ranking based on real-world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Network analysis offers unique new opportunities for understanding, modeling, and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties.
The novel surgical innovation metrics developed may prove valuable especially in guiding policy makers, funding bodies, surgeons, and healthcare providers in the current climate of competing national priorities for investment.
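
    The validation step relies on Spearman's rank correlation between specialty rankings. A self-contained implementation, assuming average ranks for ties (the input rankings below are illustrative, not the study's data):

```python
def ranks(xs):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Illustrative: ranking specialties by structural virality vs innovation value.
virality = [5, 3, 1, 4, 2]
value = [4, 3, 1, 5, 2]
print(spearman(virality, value))  # 0.9
```

A coefficient near 1 means the two rankings almost agree, which is the sense in which the network metrics track innovation value in the study.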

  14. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery

    PubMed Central

    Cingolani, Isabella; Panzarasa, Pietro; Darzi, Ara; Athanasiou, Thanos

    2017-01-01

    Background Existing surgical innovation frameworks share a unifying limitation: their qualitative nature. A rigorous approach to measuring surgical innovation is needed, one that extends beyond simple publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described; it illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel, quantitative network-based framework for modeling and visualizing innovation diffusion cascades in surgery and for measuring the virality and value of innovations. Materials and methods Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974–2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real-world big data (National Inpatient Sample–NIS®). Results Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) correlated closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), and 0.624 (p = 0.05), respectively), which in turn matches the ranking based on real-world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Conclusion Network analysis offers unique new opportunities for understanding, modeling, and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties.
The novel surgical innovation metrics developed may prove valuable especially in guiding policy makers, funding bodies, surgeons, and healthcare providers in the current climate of competing national priorities for investment. PMID:28841648

  15. Exposure-based screening for Nipah virus encephalitis, Bangladesh.

    PubMed

    Sazzad, Hossain M S; Luby, Stephen P; Ströher, Ute; Daszak, Peter; Sultana, Sharmin; Afroj, Sayma; Rahman, Mahmudur; Gurley, Emily S

    2015-02-01

    We measured the performance of exposure screening questions to identify Nipah virus encephalitis in hospitalized encephalitis patients during the 2012-13 Nipah virus season in Bangladesh. The sensitivity (93%), specificity (82%), positive predictive value (37%), and negative predictive value (99%) results suggested that screening questions could more quickly identify persons with Nipah virus encephalitis.
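
    The gap between the high sensitivity/specificity and the modest positive predictive value is a prevalence effect. A sketch of the standard Bayes calculation; the 10% prevalence figure is an assumption for illustration, not a number from the study:

```python
def predictive_values(sens, spec, prevalence):
    """Bayes' rule: predictive values depend on prevalence,
    not just on the test's sensitivity and specificity."""
    tp = sens * prevalence
    fp = (1.0 - spec) * (1.0 - prevalence)
    tn = spec * (1.0 - prevalence)
    fn = (1.0 - sens) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Sensitivity and specificity from the abstract; the 10% prevalence
# among screened encephalitis patients is an assumed illustration.
ppv, npv = predictive_values(0.93, 0.82, 0.10)
print(round(ppv, 2), round(npv, 2))  # close to the reported 37% and 99%
```

Because Nipah encephalitis is relatively rare even among hospitalized encephalitis patients, false positives dominate the positive calls, dragging the PPV down while the NPV stays near 1.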

  16. Laboratory Characterization of Gray Masonry Concrete

    DTIC Science & Technology

    2007-08-01

    Based on the appropriate values of posttest water content, wet density, and an assumed grain density of 2.61 Mg/m3, values of dry density, porosity...velocity measurements were performed on each specimen. The TXC tests exhibited a continuous increase in maximum principal stress difference with...14 Figure 3. Spring-arm lateral deformeter mounted on test

  17. Modeling the Performance of Direct-Detection Doppler Lidar Systems in Real Atmospheres

    NASA Technical Reports Server (NTRS)

    McGill, Matthew J.; Hart, William D.; McKay, Jack A.; Spinhirne, James D.

    1999-01-01

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems has assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar systems: the double-edge and the multi-channel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only about 10-20% compared to nighttime performance, provided a proper solar filter is included in the instrument design.

  18. Modeling the performance of direct-detection Doppler lidar systems including cloud and solar background variability.

    PubMed

    McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D

    1999-10-20

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.

  19. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequencies at which the transmitted and received signals are in phase are recorded, and the distance is calculated from these values. This method achieves much higher theoretical accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation arises in the measuring optical path when optical elements are imperfectly manufactured or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the polarizing beam splitter (PBS), together with its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design requirements, element tolerances and an error-correction method are proposed; a ranging system was built and ranging experiments were performed. Experimental results show that, with the proposed tolerances, the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
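
    The distance-recovery step can be illustrated with the textbook relation for in-phase modulation frequencies: if two consecutive in-phase frequencies differ by Δf, the round trip spans exactly one additional modulation wavelength, so D = c / (2 Δf). A sketch under that assumed relation (the paper's exact formulation may differ, and the frequencies below are hypothetical):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_in_phase(f1_hz, f2_hz):
    """Consecutive in-phase modulation frequencies f1 < f2 imply the
    round trip grows by one modulation wavelength: 2*D = c / (f2 - f1)."""
    return C / (2.0 * (f2_hz - f1_hz))

# Hypothetical sweep: in-phase points 10 kHz apart near 100 MHz
# place the target roughly 15 km away.
print(distance_from_in_phase(100.00e6, 100.01e6))
```

Because only frequencies are counted, never phase angles, the measurement inherits the accuracy of frequency counting, which is the advantage the abstract claims over phase-difference ranging.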

  20. Fluid-flow-rate metrology: laboratory uncertainties and traceabilities

    NASA Astrophysics Data System (ADS)

    Mattingly, G. E.

    1991-03-01

    Increased concerns for improved fluid flowrate measurement are driving the fluid metering community (meter manufacturers and users alike) to search for better verification and documentation of their fluid measurements. These concerns affect both our domestic and international marketplaces; they permeate our technologies: aerospace, chemical processing, automotive, bioengineering, etc. They involve public health and safety, and they impact our national defense. These concerns are based upon the rising value of fluid resources and products and the importance of critical material accountability. These values directly impact the accuracy needs of fluid buyers and sellers in custody transfers. They impact the designers and operators of chemical process systems, where control and productivity optimization depend critically upon measurement precision. Public health and safety depend upon the quality of numerous pollutant measurements, both liquid and gaseous. The performance testing of engines, both automotive and aircraft, is critically based upon accurate fuel measurements of both liquid and oxidizer streams. Fluid flowrate measurements are established differently from their counterparts in length and mass measurement systems, because the latter have the benefit of "identity" standards. For rate measurement systems, the metrology is based upon "derived standards". These use facilities and transfer standards which are designed, built, characterized, and used to constitute basic measurement capabilities and to quantify performance, i.e., accuracy and precision. Because "identity standards" do not exist for flow measurements, facsimiles or equivalents must

  1. Retooling Predictive Relations for non-volatile PM by Comparison to Measurements

    NASA Astrophysics Data System (ADS)

    Vander Wal, R. L.; Abrahamson, J. P.

    2015-12-01

    Non-volatile particulate matter (nvPM) emissions from jet aircraft at cruise altitude are of particular interest for climate and atmospheric processes but are difficult to measure and are normally approximated. To provide such inventory estimates, the present approach is to use measured, ground-based values with scaling to cruise (engine operating) conditions. Several points are raised by this approach. First is which ground-based values to use. Empirical and semi-empirical approaches, such as the revised first-order approximation (FOA3) and formation-oxidation (FOX) methods, each with embedded assumptions, are available to calculate a ground-based black carbon concentration, CBC. Second is the scaling relation, which can depend upon the ratios of fuel-air equivalence, pressure, and combustor flame temperature. We are using measured ground-based values to evaluate the accuracy of present methods, towards developing alternative methods for CBC by smoke number or via a semi-empirical kinetic method for a specific engine, the CFM56-2C, representative of a rich-dome style combustor and one of the most prevalent engine families in commercial use. Applying scaling relations to measured ground-based values and comparing with measurements at cruise evaluates the accuracy of the current scaling formalism. In partnership with GE Aviation, performing engine cycle deck calculations enables critical comparison between estimated or predicted thermodynamic parameters and true (engine) operational values for the CFM56-2C engine. Such specific comparisons allow tracing differences between predictive estimates for, and measurements of, nvPM to their origin, as either divergence of input parameters or in the functional form of the predictive relations. Such insights will lead to the development of new predictive tools for jet aircraft nvPM emissions. Such validated relations can then be extended to alternative fuels with confidence in operational thermodynamic values and functional form. 
Comparisons will then be made between these new predictive relationships and measurements of nvPM from alternative fuels using ground and cruise data - as collected during NASA-led AAFEX and ACCESS field campaigns, respectively.

  2. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l₀-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate white or colored Gaussian noise, the new method first obtains a low-complexity data matrix based on high-order cumulants. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l₀-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, and thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which solves the MMV problem and performs well for both white and colored Gaussian noise. The proposed joint algorithm is about two orders of magnitude faster than l₁-norm-minimization-based methods, such as l₁-SVD (singular value decomposition), RV (real-valued) l₁-SVD, and RV l₁-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
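
    The core of the smoothed l₀-norm family is a Gaussian surrogate for the count of nonzero entries, which is minimized while σ is gradually decreased. A minimal single-vector sketch of the surrogate itself (the paper's joint MMV variant extends this idea to multiple vectors):

```python
import math

def smoothed_l0(x, sigma):
    """Gaussian surrogate for the l0 pseudo-norm: each term tends to 1
    when |x_i| >> sigma and to 0 when x_i is near zero."""
    return sum(1.0 - math.exp(-xi * xi / (2.0 * sigma * sigma)) for xi in x)

x = [0.0, 0.0, 3.0, 0.0, -2.5, 0.0]
for sigma in (1.0, 0.1, 0.01):
    # As sigma shrinks, the surrogate approaches the true l0 value, 2.
    print(sigma, smoothed_l0(x, sigma))
```

Unlike the l₀ count itself, this surrogate is differentiable, so gradient-based reconstruction (as in the paper) becomes possible, which is where the speed advantage over l₁-minimization methods comes from.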

  3. Performance assessment of Vita Easy Shade spectrophotometer on colour measurement of aesthetic dental materials.

    PubMed

    AlGhazali, N; Burnside, G; Smith, R W; Preston, A J; Jarad, F D

    2011-12-01

    Four different shades were used to produce 20 samples of resin-based composite and 20 samples of porcelain to evaluate the performance of an intraoral test spectrophotometer against a reference spectrophotometer. The absolute CIELAB colour coordinates measured with the two spectrophotometers were significantly different (p < 0.001). However, a high correlation was found (p < 0.001) despite the low concordance observed. The colour difference ΔE* values calculated between different shades also differed significantly between the two spectrophotometers (p < 0.05). Therefore, the Easy Shade can be used in dental practice and dental research, with some limitations.
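
    The ΔE* statistic referred to above is, in the classic CIE76 formulation, simply the Euclidean distance between two CIELAB coordinates. A sketch with hypothetical instrument readings (not values from the study):

```python
def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# Hypothetical readings of one shade tab on two instruments (L*, a*, b*).
reference = (72.3, 1.8, 18.5)
intraoral = (70.9, 2.2, 17.1)
print(round(delta_e_ab(reference, intraoral), 2))  # 2.02
```

Dental colour studies commonly treat a ΔE* of roughly 2 to 3 as the threshold of clinical perceptibility, which gives the significance tests in the abstract their practical meaning.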

  4. Experimental realization of generalized qubit measurements based on quantum walks

    NASA Astrophysics Data System (ADS)

    Zhao, Yuan-yuan; Yu, Neng-kun; Kurzyński, Paweł; Xiang, Guo-yong; Li, Chuan-Feng; Guo, Guang-Can

    2015-04-01

    We report an experimental implementation of a single-qubit generalized measurement scenario, the positive-operator valued measure (POVM), based on a quantum walk model. The qubit is encoded in a single-photon polarization. The photon performs a quantum walk on an array of optical elements, where the polarization-dependent translation is performed via birefringent beam displacers and a change of the polarization is implemented with the help of wave plates. We implement: (i) trine POVM, i.e., the POVM elements uniformly distributed on an equatorial plane of the Bloch sphere; (ii) symmetric-informationally-complete (SIC) POVM; and (iii) unambiguous discrimination of two nonorthogonal qubit states.
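
    The trine POVM mentioned in (i) can be written down and checked for completeness in a few lines. This numeric sketch uses the standard construction from three symmetric equatorial states; it is a textbook form, not code from the experiment:

```python
import cmath

def trine_povm():
    """Three rank-1 elements (2/3)|psi_k><psi_k| with
    |psi_k> = (|0> + e^{2*pi*i*k/3} |1>) / sqrt(2), k = 0, 1, 2."""
    elements = []
    for k in range(3):
        phase = cmath.exp(2j * cmath.pi * k / 3)
        psi = [1 / 2 ** 0.5, phase / 2 ** 0.5]
        elements.append(
            [[(2 / 3) * a * b.conjugate() for b in psi] for a in psi])
    return elements

# Completeness: the three elements must sum to the 2x2 identity.
total = [[sum(e[i][j] for e in trine_povm()) for j in range(2)]
         for i in range(2)]
print(total)
```

The factor 2/3 is what makes the three sub-normalized projectors a valid POVM: each element alone is not a projective measurement, but together they resolve the identity.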

  5. A framework to measure the value of public health services.

    PubMed

    Jacobson, Peter D; Neumann, Peter J

    2009-10-01

    To develop a framework that public health practitioners could use to measure the value of public health services. Primary data were collected from August 2006 through March 2007. We interviewed public health practitioners (n=46) in four states, leaders of national public health organizations, and academic researchers. Using a semi-structured interview protocol, we conducted a series of qualitative interviews to define the component parts of value for public health services and to identify the methodologies used to measure value and the data collected. The primary form of analysis is descriptive, synthesizing information across respondents as to how they measure the value of their services. Our interviews did not reveal a consensus on how to measure value or a specific framework for doing so. Nonetheless, the interviews identified some potential strategies, such as cost accounting and performance-based contracting mechanisms. The interviews also noted implementation barriers, including limits to staff capacity and data availability. We developed a framework that considers four component elements of measuring value: external factors that must be taken into account (e.g., mandates); key internal actions that a local health department must take (e.g., staff assessment); using appropriate quantitative measures; and communicating value to elected officials and the public.

  6. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time-scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time-scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time-scale network traffic measurement data very accurately.
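
    A minimal sketch of a single complex-valued neuron with a split activation, the basic building block of a CVNN. The weights below are illustrative placeholders (in the study they would be fitted by the ABC algorithm):

```python
import math

def cvnn_neuron(inputs, weights, bias):
    """One complex-valued neuron: complex-weighted sum followed by a
    split activation (a real tanh applied to each component)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return complex(math.tanh(z.real), math.tanh(z.imag))

# Toy one-step-ahead forecast from two past traffic samples encoded as
# complex numbers; these weights are placeholders, not ABC-fitted values.
past = [0.4 + 0.1j, 0.6 + 0.2j]
w = [0.8 - 0.2j, 0.3 + 0.1j]
print(cvnn_neuron(past, w, 0.05 + 0.0j))
```

Complex weights let a single parameter encode both a gain and a phase shift, which is one motivation for CVNNs on oscillatory signals such as traffic traces.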

  7. Antioxidant Capacity: Experimental Determination by EPR Spectroscopy and Mathematical Modeling.

    PubMed

    Polak, Justyna; Bartoszek, Mariola; Chorążewski, Mirosław

    2015-07-22

    A new method of determining antioxidant capacity based on a mathematical model is presented in this paper. The model was fitted to 1000 data points of electron paramagnetic resonance (EPR) spectroscopy measurements of various food product samples such as tea, wine, juice, and herbs with Trolox equivalent antioxidant capacity (TEAC) values from 20 to 2000 μmol TE/100 mL. The proposed mathematical equation allows for a determination of TEAC of food products based on a single EPR spectroscopy measurement. The model was tested on the basis of 80 EPR spectroscopy measurements of herbs, tea, coffee, and juice samples. The proposed model works for both strong and weak antioxidants (TEAC values from 21 to 2347 μmol TE/100 mL). The determination coefficient between TEAC values obtained experimentally and TEAC values calculated with the proposed mathematical equation was found to be R² = 0.98. Therefore, the proposed new method of TEAC determination based on a mathematical model is a good alternative to the standard EPR method due to its being fast, accurate, inexpensive, and simple to perform.
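
    The reported agreement can be illustrated by computing the coefficient of determination directly. The paired TEAC values below are hypothetical, chosen only to show the calculation:

```python
def r_squared(observed, predicted):
    """Coefficient of determination between measurements and model output."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical paired TEAC values (micromol TE/100 mL): assay vs model.
measured = [21, 150, 480, 900, 1500, 2300]
modeled = [30, 140, 500, 870, 1550, 2250]
print(round(r_squared(measured, modeled), 3))
```

An R² near 1 means the model explains almost all of the variance in the measured TEAC values, which is the sense of the reported 0.98.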

  8. Performance measurement in surgery through the National Quality Forum.

    PubMed

    Hyder, Joseph A; Roy, Nathalie; Wakeam, Elliot; Hernandez, Roland; Kim, Simon P; Bader, Angela M; Cima, Robert R; Nguyen, Louis L

    2014-11-01

    Performance measurement has become central to surgical practice. We systematically reviewed all endorsed performance measures from the National Quality Forum, the national clearinghouse for performance measures in health care, to identify measures relevant to surgical practice, describe measure stewardship and measure types, and identify gaps in measurement. Performance measures current to June 2014 were categorized by denominator statement as assessing either surgical practice specifically or a mixed medical and surgical population. Measures were further classified by surgical specialty, Donabedian measure type, patients, diseases and events targeted, reporting eligibility, and measure steward. Of 637 measures, 123 assessed surgical performance specifically and 123 assessed surgical performance in aggregate. Physician societies (51 of 123, 41.5%) were more common than government agencies (32 of 123, 26.0%) among stewards of surgical measures, in particular the Society of Thoracic Surgeons (n = 32). Outcome measures rather than process measures were common among surgical measures (62 of 123, 50.4%) compared with aggregate medical/surgical measures (46 of 123, 37.4%). Among outcome measures, death alone was the most commonly specified outcome (24 of 62, 38.7%). Only one surgical measure addressed patient-centered care, and only one measure addressed hospital readmission. We found seven current surgical measures eligible for value-based purchasing. Surgical society stewards and outcome measure types, particularly for cardiac surgery, were well represented in the National Quality Forum. Measures addressing patient-centered outcomes and the value of surgical decision-making were not well represented and may be suitable targets for measure innovation. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Performance, physiological, and oculometer evaluation of VTOL landing displays

    NASA Technical Reports Server (NTRS)

    North, R. A.; Stackhouse, S. P.; Graffunder, K.

    1979-01-01

    A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. moving base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.

  10. Language disturbance and functioning in first episode psychosis.

    PubMed

    Roche, Eric; Segurado, Ricardo; Renwick, Laoise; McClenaghan, Aisling; Sexton, Sarah; Frawley, Timothy; Chan, Carol K; Bonar, Maurice; Clarke, Mary

    2016-01-30

    Language disturbance has a central role in the presentation of psychotic disorders; however, its relationship with functioning requires further clarification, particularly in first episode psychosis (FEP). Both language disturbance and functioning can be evaluated with clinician-rated and performance-based measures. We aimed to investigate the concurrent association between clinician-rated and performance-based measures of language disturbance and functioning in FEP. We assessed 108 individuals presenting to an Early Intervention in Psychosis Service in Ireland. Formal thought disorder (FTD) dimensions and bizarre idiosyncratic thinking (BIT) were rated with structured assessment tools. Functioning was evaluated with a performance-based instrument, a clinician-rated measure and indicators of real-world functioning. The disorganisation dimension of FTD was significantly associated with clinician-rated measures of occupational and social functioning (Beta=-0.19, P<0.05 and Beta=-0.31, P<0.01, respectively). BIT was significantly associated with the performance-based measure of functioning (Beta=-0.22, P<0.05). Language disturbance was of less value in predicting real-world measures of functioning. Clinician-rated and performance-based assessments of language disturbance are complementary and each has differential associations with functioning. Communication disorders should be considered as a potential target for intervention in FEP, although further evaluation of the longitudinal relationship between language disturbance and functioning should be undertaken. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Outcomes-Based Funding in Historical and Comparative Context. Lumina Issue Papers

    ERIC Educational Resources Information Center

    Hearn, James C.

    2015-01-01

    With the advent of outcomes-based funding policies, state policymakers are increasingly committed to basing public college and university funding on how institutions perform on valued measures such as program progress and degree completion. This rising emphasis is considered here in the historical context of three earlier state funding approaches:…

  12. The temporal changes in saturated hydraulic conductivity of forest soils

    NASA Astrophysics Data System (ADS)

    Kornél Szegedi, Balázs

    2015-04-01

    I investigated the temporal variability of the infiltration capacity of forest soils as they recompact after cultivation. I performed the measurements in the Botanical Garden of Sopron between 15.09.2014 and 15.10.2014, on 50 × 50 cm plots cleared of vegetation, where I measured the bulk density and the soil hydraulic conductivity with a Tension Disk Infiltrometer (TDI) in 3 repetitions each. For the bulk density measurements, I took undisturbed 160 cm3 samples from the upper 5 cm layer of the cleared soil surface. I then loosened the top 10-15 cm layer of the soil with a spade, and after this cultivation I again measured the bulk density and the hydraulic conductivity in 3 repetitions. Thereafter, I measured the hydraulic conductivity (Ksat, using the TDI) and the bulk density (on undisturbed samples) on a weekly basis in the study area. I plotted the measured hydraulic conductivity and bulk density values as a function of cumulative rainfall and analysed them with simple graphical and statistical methods. Based on the change in the measured bulk density values, recompaction of the soil was fast and smooth, and there was a steady downward trend in hydraulic conductivity in parallel with the compaction. Cultivation increased the hydraulic conductivity nearly fourfold compared with the original value, but it decreased to half of that within 1 week. The recompaction rate then declined; based on literature data, roughly 3-4 months are enough for the soil hydraulic conductivity and bulk density to return to their pre-cultivation values. This publication has been supported by the AGRARKLIMA.2 VKSZ_12-1-2013-0034 project.

  13. Online physician ratings fail to predict actual performance on measures of quality, value, and peer review.

    PubMed

    Daskivich, Timothy J; Houman, Justin; Fuller, Garth; Black, Jeanne T; Kim, Hyung L; Spiegel, Brennan

    2018-04-01

    Patients use online consumer ratings to identify high-performing physicians, but it is unclear if ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance. We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores. Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04, 0.04), primary care physician scores (β-coefficient range, -0.01, 0.3), and administrator scores (β-coefficient range, -0.2, 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his/her score on another in 5 of 10 comparisons. Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance. Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.
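
    The quartile comparison reported above can be reproduced in a few lines. The sketch below uses invented scores and ratings (the study's data are not reproduced here), and the rank-based quartile assignment is an illustrative choice, not the authors' exact method.

```python
# Hypothetical sketch: what share of bottom-quartile performers would be
# flagged by bottom-quartile consumer ratings. All numbers are invented.

def quartile(values, v):
    """Return the quartile (1-4) that v falls into within values (1 = lowest)."""
    s = sorted(values)
    rank = sum(1 for x in s if x < v)   # count of strictly smaller values
    return min(4, rank * 4 // len(s) + 1)

performance = [55, 60, 62, 70, 71, 75, 80, 82, 85, 90, 92, 95]  # invented scores
ratings     = [4.1, 3.9, 4.5, 4.0, 3.2, 4.8, 4.4, 3.0, 4.6, 4.9, 4.2, 4.7]

# Physicians in the lowest performance quartile...
low_perf = [i for i, p in enumerate(performance) if quartile(performance, p) == 1]
# ...whose consumer rating is also in the lowest quartile.
flagged = [i for i in low_perf if quartile(ratings, ratings[i]) == 1]
share = len(flagged) / len(low_perf)
```

    With this toy data only one of the three low performers carries a bottom-quartile rating, mirroring the study's finding that consumer ratings identify few low performers.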

  14. Indoor radon activity concentration measurements in the great historical museums of University of Naples, Italy.

    PubMed

    Quarto, Maria; Pugliese, Mariagabriella; Loffredo, Filomena; La Verde, Giuseppe; Roca, Vincenzo

    2016-01-01

    Indoor radon activity concentrations were measured in seven museums of the University of Naples, very old buildings of great historical value. The measurements were performed using a time-integrated technique based on LR-115 solid-state nuclear track detectors. The annual average concentrations were found to range from 40 up to 1935 Bq m⁻³, and in 26% of the measurement sites the values were higher than 500 Bq m⁻³, the limit value set by Italian legislation for workplaces. Moreover, we analysed the seasonal variations of radon concentrations, observing higher averages in the cold season than in the warm one. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Development of a Monte Carlo Simulation for APD-Based PET Detectors Using a Continuous Scintillating Crystal

    NASA Astrophysics Data System (ADS)

    Clowes, P.; Mccallum, S.; Welch, A.

    2006-10-01

    We are currently developing a multilayer avalanche photodiode (APD)-based detector for use in positron emission tomography (PET), which utilizes thin continuous crystals. In this paper, we developed a Monte Carlo-based simulation to aid in the design of such detectors. We measured the performance of a detector comprising a single thin continuous crystal (3.1 mm × 9.5 mm × 9.5 mm) of lutetium yttrium ortho-silicate (LYSO) and an APD array of 4 × 4 elements, each element 1.6 mm² on a 2.3 mm pitch. We showed that a spatial resolution of better than 2.12 mm is achievable throughout the crystal provided that we adopt a Statistics-Based Positioning (SBP) algorithm. We then used Monte Carlo simulation to model the behavior of the detector. The accuracy of the Monte Carlo simulation was verified by comparing measured and simulated parent datasets (PDS) for the SBP algorithm. These datasets consisted of data for point sources at 49 positions uniformly distributed over the detector area. We also calculated the noise in the detector circuit and verified this value by measurement. The noise value was included in the simulation. We show that the performance of the simulation closely matches the measured performance. The simulations were extended to investigate the effect of different noise levels on positioning accuracy. This paper showed that if modest improvements could be made in the circuit noise then positioning accuracy would be greatly improved. In summary, we have developed a model that can be used to simulate the performance of a variety of APD-based continuous crystal PET detectors.
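
    The SBP step can be illustrated with a toy lookup. This is a sketch under the assumption that the parent dataset (PDS) stores one mean APD response vector per calibrated source position; the 3-position PDS, the event data, and the simple squared-difference metric are all invented for illustration (a real SBP implementation typically uses a likelihood-based match against all 49 calibrated positions).

```python
# Minimal statistics-based positioning (SBP) sketch: pick the PDS position
# whose stored mean response best matches the measured APD event.
import math

def sbp_position(event, pds):
    """Return the PDS position minimizing the squared-difference score."""
    best_pos, best_score = None, math.inf
    for pos, template in pds.items():
        score = sum((e - t) ** 2 for e, t in zip(event, template))
        if score < best_score:
            best_pos, best_score = pos, score
    return best_pos

# Toy PDS: 3 positions x 4 APD channels (the real detector used 49 point-source
# positions over a 4 x 4 APD array). Values are invented.
pds = {
    (0, 0): [9.0, 3.0, 1.0, 0.5],
    (1, 0): [3.0, 9.0, 0.5, 1.0],
    (0, 1): [1.0, 0.5, 9.0, 3.0],
}
event = [8.5, 3.2, 1.1, 0.4]   # noisy event resembling position (0, 0)
```

    Circuit noise enters by perturbing `event`; larger noise widens the spread of scores and degrades positioning accuracy, which is the effect the simulation study varies.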

  16. A Value-Added Estimate of Higher Education Quality of US States

    ERIC Educational Resources Information Center

    Zhang, Lei

    2009-01-01

    States differ substantially in higher education policies. Little is known about the effects of state policies on the performance of public colleges and universities, largely because no clear measures of college quality exist. In this paper, I estimate the average quality of public colleges of US states based on the value-added to individuals'…

  17. English Value-Added Measures: Examining the Limitations of School Performance Measurement

    ERIC Educational Resources Information Center

    Perry, Thomas

    2016-01-01

    Value-added "Progress" measures are to be introduced for all English schools in 2016 as "headline" measures of school performance. This move comes despite research highlighting high levels of instability in value-added measures and concerns about the omission of contextual variables in the planned measure. This article studies…

  18. Value-Added Measures of Education Performance: Clearing Away the Smoke and Mirrors. Policy Brief 10-4

    ERIC Educational Resources Information Center

    Harris, Douglas N.

    2010-01-01

    In this policy brief, the author explores the problems with attainment measures when it comes to evaluating performance at the school level, and explores the best uses of value-added measures. These value-added measures, the author writes, are useful for sorting out-of-school influences from school influences or from teacher performance, giving…

  19. Value-Based Assessment of Radiology Reporting Using Radiologist-Referring Physician Two-Way Feedback System-a Design Thinking-Based Approach.

    PubMed

    Shaikh, Faiq; Hendrata, Kenneth; Kolowitz, Brian; Awan, Omer; Shrestha, Rasu; Deible, Christopher

    2017-06-01

    In the era of value-based healthcare, many aspects of medical care are being measured and assessed to improve quality and reduce costs. Radiology adds enormously to health care costs and is under pressure to adopt a more efficient system that incorporates essential metrics to assess its value and impact on outcomes. Most current systems tie radiologists' incentives and evaluations to RVU-based productivity metrics and peer-review-based quality metrics. In a new potential model, a radiologist's performance will have to increasingly depend on a number of parameters that define "value," beginning with peer review metrics that include referrer satisfaction and feedback from radiologists to the referring physician that evaluates the potency and validity of clinical information provided for a given study. These new dimensions of value measurement will directly impact the cascade of further medical management. We share our continued experience with this project that had two components: RESP (Referrer Evaluation System Pilot) and FRACI (Feedback from Radiologist Addressing Confounding Issues), which were introduced to the clinical radiology workflow in order to capture referrer-based and radiologist-based feedback on radiology reporting. We also share our insight into the principles of design thinking as applied in its planning and execution.

  20. SAGES quality initiative: an introduction.

    PubMed

    Lidor, Anne; Telem, Dana; Bower, Curtis; Sinha, Prashant; Orlando, Rocco; Romanelli, John

    2017-08-01

    The Medicare program has transitioned to paying healthcare providers based on the quality of care delivered, not on the quantity. In May 2015, SAGES held its first ever Quality Summit. The goal of this meeting was to provide us with the information necessary to put together a strategic plan for our Society over the next 3-5 years, and to participate actively on a national level to help develop valid measures of quality of surgery. The transition to value-based medicine requires that providers are now measured and reimbursed based on the quality of services they provide rather than the quantity of patients in their care. As of 2014, quality measures must cover 3 of the 6 available National Quality domains. Physician quality reporting system measures are created via a rigorous process which is initiated by the proposal of the quality measure and subsequent validation. Commercial, non-profit, and governmental agencies have now been engaged in the measurement of hospital performance through structural measures, process measures, and increasingly with outcomes measures. This more recent focus on outcomes measures has been linked to hospital payments through the Value-Based Purchasing program. Outcomes measures of quality drive CMS' new program, MACRA, using two formats: merit-based incentive programs and alternative payment models. But the quality of information now available is highly variable and difficult for the average consumer to use. Quality metrics serve to guide efforts to improve performance and for consumer education. Professional organizations such as SAGES play a central role in defining the agenda for improving quality, outcomes, and safety. The mission of SAGES is to improve the quality of patient care through education, research, innovation, and leadership, principally in gastrointestinal and endoscopic surgery.

  1. Characterization of measurements in quantum communication. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chan, V. W. S.

    1975-01-01

    A characterization of quantum measurements by operator-valued measures is presented. The generalized measurements include simultaneous approximate measurement of noncommuting observables. This characterization is suitable for solving problems in quantum communication. Two realizations of such measurements are discussed. The first is by adjoining an apparatus to the system under observation and performing a measurement corresponding to a self-adjoint operator in the tensor-product Hilbert space of the system and apparatus spaces. The second realization is by performing, on the system alone, sequential measurements that correspond to self-adjoint operators, basing the choice of each measurement on the outcomes of previous measurements. Simultaneous generalized measurements are found to be equivalent to a single finer grain generalized measurement, and hence it is sufficient to consider the set of single measurements. An alternative characterization of generalized measurement is proposed. It is shown to be equivalent to the characterization by operator-valued measures, but it is potentially more suitable for the treatment of estimation problems. Finally, a study of the interaction between the information-carrying system and a measurement apparatus provides clues for the physical realizations of abstractly characterized quantum measurements.

  2. Sustainability of quality improvement following removal of pay-for-performance incentives.

    PubMed

    Benzer, Justin K; Young, Gary J; Burgess, James F; Baker, Errol; Mohr, David C; Charns, Martin P; Kaboli, Peter J

    2014-01-01

    Although pay-for-performance (P4P) has become a central strategy for improving quality in US healthcare, questions persist about the effectiveness of these programs. A key question is whether quality improvement that occurs as a result of P4P programs is sustainable, particularly if incentives are removed. To investigate sustainability of performance levels following removal of performance-based incentives. Observational cohort study that capitalized on a P4P program within the Veterans Health Administration (VA) that included adoption and subsequent removal of performance-based incentives for selected inpatient quality measures. The study sample comprised 128 acute care VA hospitals where performance was assessed between 2004 and 2010. VA system managers set annual performance goals in consultation with clinical leaders, and report performance scores to medical centers on a quarterly basis. These scores inform performance-based incentives for facilities and their managers. Bonuses are distributed based on the attainment of these performance goals. Seven quality of care measures for acute coronary syndrome, heart failure, and pneumonia linked to performance-based incentives. Significant improvements in performance were observed for six of seven quality of care measures following adoption of performance-based incentives and were maintained up to the removal of the incentive; subsequently, the observed performance levels were sustained. This is a quasi-experimental study without a comparison group; causal conclusions are limited. The maintenance of performance levels after removal of a performance-based incentive has implications for the implementation of Medicare's value-based purchasing initiative and other P4P programs. Additional research is needed to better understand human and system-level factors that mediate sustainability of performance-based incentives.

  3. The performance of single and multi-collector ICP-MS instruments for fast and reliable 34S/32S isotope ratio measurements.

    PubMed

    Hanousek, Ondrej; Brunner, Marion; Pröfrock, Daniel; Irrgeher, Johanna; Prohaska, Thomas

    2016-11-14

    The performance and validation characteristics of different single collector inductively coupled plasma mass spectrometers based on different technical principles (ICP-SFMS, ICP-QMS in reaction and collision modes, and ICP-MS/MS) were evaluated in comparison to the performance of MC ICP-MS for fast and reliable S isotope ratio measurements. The validation included the determination of LOD, BEC, measurement repeatability, within-lab reproducibility and deviation from certified values as well as a study on instrumental isotopic fractionation (IIF) and the calculation of the combined standard measurement uncertainty. Different approaches of correction for IIF applying external intra-elemental IIF correction (aka standard-sample bracketing) using certified S reference materials and internal inter-elemental IIF (aka internal standardization) correction using Si isotope ratios in MC ICP-MS are explained and compared. The resulting combined standard uncertainties of examined ICP-QMS systems were not better than 0.3-0.5% (u_c,rel), which is in general insufficient to differentiate natural S isotope variations. Although the performance of the single collector ICP-SFMS is better (single measurement u_c,rel = 0.08%), the measurement reproducibility (>0.2%) is the major limit of this system and leaves room for improvement. MC ICP-MS operated in the edge mass resolution mode, applying bracketing for correction of IIF, provided isotope ratio values with the highest quality (relative combined measurement uncertainty: 0.02%; deviation from the certified value: <0.002%).
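
    The standard-sample bracketing correction described here has a simple arithmetic core: the sample's measured ratio is divided by the mean instrumental bias of the two standard runs that bracket it. A minimal sketch with invented ratio values; the "certified" number below is illustrative, not a real reference-material certificate.

```python
# Sketch of external standard-sample bracketing for instrumental isotopic
# fractionation (IIF) correction of a 34S/32S ratio. All values invented.

def bracketing_correct(r_std_before, r_std_after, r_sample, r_certified):
    """Divide the sample ratio by the mean measured/certified bias
    of the two bracketing standard runs."""
    bias = (r_std_before + r_std_after) / 2 / r_certified
    return r_sample / bias

r_corr = bracketing_correct(
    r_std_before=0.04430,   # standard measured before the sample
    r_std_after=0.04434,    # standard measured after the sample
    r_sample=0.04450,       # measured sample ratio
    r_certified=0.04416,    # certified ratio of the standard (invented)
)
```

    Because the instrument here reads the standard slightly high, the corrected sample ratio comes out below the raw measured value.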

  4. Magnetic field penetration in niobium- and vanadium-based Josephson junctions

    NASA Astrophysics Data System (ADS)

    Cucolo, A. M.; Pace, S.; Vaglio, R.; di Chiara, A.; Peluso, G.; Russo, M.

    1983-02-01

    Measurements on the temperature dependence of the magnetic field penetration in Nb-NbxOy-Pb and V-VxOy-Pb Josephson junctions have been performed. Results on the zero-temperature penetration depth in niobium films are far above the bulk values although consistent with other measurements on junctions reported in the literature. For vanadium junctions anomalously large penetration depth values are obtained at low temperatures. Nevertheless, the temperature dependence is in reasonable agreement with the local dirty limit model.

  5. Air Pollution Potential from Electroplating Operations.

    ERIC Educational Resources Information Center

    Diamond, Philip

    Measurements were made of emission rates from electroplating operations considered to have maximum air pollution potential. Sampling was performed at McClellan and additional data from a previous survey at Hill Air Force Base was used. Values obtained were extremely low. Based on existing Federal standards, no collectors are specifically required…

  6. Strategic Reporting Tool: Balanced Scorecards in Higher Education

    ERIC Educational Resources Information Center

    Lyddon, Jan W.; McComb, Bruce E.

    2008-01-01

    In this toolbox article, the authors describe the recommended steps for creating a community college balanced scorecard that measures and reports on key performance indicators based on targets and signal values to end-users, college constituents and external stakeholders. Based on extensive experience in the field, the authors provide a…

  7. Performance-based quality assurance/quality control (QA/QC) acceptance procedures for in-place soil testing phase 3.

    DOT National Transportation Integrated Search

    2015-01-01

    One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. : As design criteria transition from empirical to mechanistic-empirical, soil test methods and equip...

  8. Value-Added and Other Methods for Measuring School Performance: An Analysis of Performance Measurement Strategies in Teacher Incentive Fund Proposals. Research Brief

    ERIC Educational Resources Information Center

    National Center on Performance Incentives, 2008

    2008-01-01

    In "Value-Added and Other Methods for Measuring School Performance: An Analysis of Performance Measurement Strategies in Teacher Incentive Fund Proposals"--a paper presented at the February 2008 National Center on Performance Incentives research to policy conference--Robert Meyer and Michael Christian examine select performance-pay plans…

  9. An Interlaboratory Evaluation of Drift Tube Ion Mobility–Mass Spectrometry Collision Cross Section Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stow, Sarah M.; Causon, Tim J.; Zheng, Xueyun

    Collision cross section (CCS) measurements resulting from ion mobility-mass spectrometry (IM-MS) experiments provide a promising orthogonal dimension of structural information in MS-based analytical separations. As with any molecular identifier, interlaboratory standardization must precede broad range integration into analytical workflows. In this study, we present a reference drift tube ion mobility mass spectrometer (DTIM-MS) where improvements on the measurement accuracy of experimental parameters influencing IM separations provide standardized drift tube, nitrogen CCS values (DTCCSN2) for over 120 unique ion species with the lowest measurement uncertainty to date. The reproducibility of these DTCCSN2 values is evaluated across three additional laboratories on a commercially available DTIM-MS instrument. The traditional stepped field CCS method performs with a relative standard deviation (RSD) of 0.29% for all ion species across the three additional laboratories. The calibrated single field CCS method, which is compatible with a wide range of chromatographic inlet systems, performs with an average, absolute bias of 0.54% to the standardized stepped field DTCCSN2 values on the reference system. The low RSD and biases observed in this interlaboratory study illustrate the potential of DTIM-MS for providing a molecular identifier for a broad range of discovery-based analyses.
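
    The two figures of merit quoted above, relative standard deviation across laboratories and absolute percent bias against the reference value, are straightforward to compute. A minimal sketch with invented CCS values:

```python
# Interlaboratory figures of merit, computed on invented CCS values:
# RSD of one ion's CCS across laboratories, and absolute percent bias of
# a single-field value against the reference stepped-field value.
import statistics

def rsd_percent(values):
    """Relative standard deviation (sample SD over mean), in percent."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def abs_bias_percent(measured, reference):
    """Absolute percent deviation of a measured value from a reference."""
    return 100 * abs(measured - reference) / reference

ccs_by_lab = [204.5, 205.1, 204.8]      # invented DTCCSN2 values, in A^2
rsd = rsd_percent(ccs_by_lab)           # spread across laboratories
bias = abs_bias_percent(205.9, 204.8)   # single-field value vs. reference
```

    With these invented numbers the RSD is about 0.15% and the bias about 0.54%, the same order of magnitude as the interlaboratory figures reported in the abstract.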

  10. The goal of value-based medicine analyses: comparability. The case for neovascular macular degeneration.

    PubMed

    Brown, Gary C; Brown, Melissa M; Brown, Heidi C; Kindermann, Sylvia; Sharma, Sanjay

    2007-01-01

    To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. 
Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers a 7.8% to 10.7% value gain for the treatment of classic subfoveal choroidal neovascularization. Intravitreal ranibizumab therapy confers greater than a 15% value gain for the treatment of subfoveal occult and minimally classic subfoveal choroidal neovascularization. The majority of cost-utility studies performed on interventions for neovascular macular degeneration are value-based medicine studies and thus are comparable. Value-based analyses of neovascular ARMD monotherapies demonstrate the power of value-based medicine to improve quality of care and concurrently maximize the efficacy of healthcare resource use in public policy. The comparability of value-based medicine cost-utility analyses has important implications for overall practice standards and public policy. The adoption of value-based medicine standards can greatly facilitate the goal of higher-quality care and maximize the best use of healthcare funds.

  11. THE GOAL OF VALUE-BASED MEDICINE ANALYSES: COMPARABILITY. THE CASE FOR NEOVASCULAR MACULAR DEGENERATION

    PubMed Central

    Brown, Gary C.; Brown, Melissa M.; Brown, Heidi C.; Kindermann, Sylvia; Sharma, Sanjay

    2007-01-01

    Purpose To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). Methods A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Results Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. 
Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers a 7.8% to 10.7% value gain for the treatment of classic subfoveal choroidal neovascularization. Intravitreal ranibizumab therapy confers greater than a 15% value gain for the treatment of subfoveal occult and minimally classic subfoveal choroidal neovascularization. Conclusions The majority of cost-utility studies performed on interventions for neovascular macular degeneration are value-based medicine studies and thus are comparable. Value-based analyses of neovascular ARMD monotherapies demonstrate the power of value-based medicine to improve quality of care and concurrently maximize the efficacy of healthcare resource use in public policy. The comparability of value-based medicine cost-utility analyses has important implications for overall practice standards and public policy. The adoption of value-based medicine standards can greatly facilitate the goal of higher-quality care and maximize the best use of healthcare funds. PMID:18427606

  12. Optimization of motion control laws for tether crawler or elevator systems

    NASA Technical Reports Server (NTRS)

    Swenson, Frank R.; Von Tiesenhausen, Georg

    1988-01-01

    Based on the proposal of a motion control law by Lorenzini (1987), a method is developed for optimizing motion control laws for tether crawler or elevator systems in terms of the performance measures of travel time, the smoothness of acceleration and deceleration, and the maximum values of velocity and acceleration. The Lorenzini motion control law, based on powers of the hyperbolic tangent function, is modified by the addition of a constant-velocity section, and this modified function is then optimized by parameter selections to minimize the peak acceleration value for a selected travel time or to minimize travel time for the selected peak values of velocity and acceleration. It is shown that the addition of a constant-velocity segment permits further optimization of the motion control law performance.
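A hedged sketch of a motion profile in the spirit of the tanh-based law described above; the exact Lorenzini (1987) form is not reproduced here, and the ramp shape, exponent, and timing parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical crawler velocity profile: a smooth tanh**n ramp-up, a
# constant-velocity cruise segment, and a mirror-image ramp-down. The
# exponent n and timing constants are illustrative, not Lorenzini's.

def velocity_profile(t, v_max, t_ramp, t_cruise, n=3):
    """Velocity (m/s) at times t (s) for a total travel of 2*t_ramp + t_cruise."""
    t = np.asarray(t, dtype=float)
    t_total = 2 * t_ramp + t_cruise
    up = np.tanh(3.0 * t / t_ramp) ** n                 # smooth acceleration
    down = np.tanh(3.0 * (t_total - t) / t_ramp) ** n   # smooth deceleration
    return v_max * np.minimum(1.0, np.minimum(up, down))

t = np.linspace(0.0, 240.0, 1001)   # 60 s ramps around a 120 s cruise
v = velocity_profile(t, v_max=5.0, t_ramp=60.0, t_cruise=120.0)
peak_accel = np.max(np.abs(np.gradient(v, t)))   # bounded by the smooth ramps
```

Shortening `t_cruise` while raising `v_max` (or vice versa) is the trade the abstract describes: minimizing peak acceleration for a fixed travel time, or travel time for fixed velocity/acceleration limits.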

  13. The use of artificial neural networks and multiple linear regression to predict rate of medical waste generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari

    2009-11-15

    Prediction of the amount of hospital waste production will be helpful in the storage, transportation, and disposal stages of hospital waste management. Based on this, two predictive models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation, both in total and by type (sharp, infectious, and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE, and R², were used to evaluate the models. MLR, as a conventional model, obtained poor prediction performance measure values; however, MLR did identify hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance measure values, especially an R² of 0.99, confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
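The MLR baseline and the three performance measures can be sketched with synthetic data; the feature names and values below are invented, not the study's hospital database, and MAR is read here as mean absolute residual.

```python
import numpy as np

# Ordinary-least-squares MLR on toy hospital data, with the three
# performance measures named above computed from the residuals.

rng = np.random.default_rng(0)
n = 50                                   # 50 hospitals, as in the study
beds = rng.uniform(50, 500, n)           # hospital capacity (assumed feature)
occupancy = rng.uniform(0.4, 0.95, n)    # bed occupancy rate (assumed feature)
waste = 0.006 * beds * occupancy + rng.normal(0, 0.1, n)   # toy target

X = np.column_stack([np.ones(n), beds, occupancy])
coef, *_ = np.linalg.lstsq(X, waste, rcond=None)   # MLR fit
pred = X @ coef

mar = np.mean(np.abs(waste - pred))                # mean absolute residual
rmse = np.sqrt(np.mean((waste - pred) ** 2))       # root mean square error
r2 = 1 - np.sum((waste - pred) ** 2) / np.sum((waste - waste.mean()) ** 2)
```

An ANN would replace the linear `X @ coef` map with a non-linear one, which is the advantage the abstract attributes to it.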

  14. Measurement issues in the evaluation of chronic disease self-management programs.

    PubMed

    Nolte, Sandra; Elsworth, Gerald R; Newman, Stanton; Osborne, Richard H

    2013-09-01

    To provide an in-depth analysis of outcome measures used in the evaluation of chronic disease self-management programs consistent with the Stanford curricula. Based on a systematic review on self-management programs, effect sizes derived from reported outcome measures are categorized according to the quality of life appraisal model developed by Schwartz and Rapkin which classifies outcomes from performance-based measures (e.g., clinical outcomes) to evaluation-based measures (e.g., emotional well-being). The majority of outcomes assessed in self-management trials are based on evaluation-based methods. Overall, effects on knowledge--the only performance-based measure observed in selected trials--are generally medium to large. In contrast, substantially more inconsistent results are found for both perception- and evaluation-based measures that mostly range between nil and small positive effects. Effectiveness of self-management interventions and resulting recommendations for health policy makers are most frequently derived from highly variable evaluation-based measures, that is, types of outcomes that potentially carry a substantial amount of measurement error and/or bias such as response shift. Therefore, decisions regarding the value and efficacy of chronic disease self-management programs need to be interpreted with care. More research, especially qualitative studies, is needed to unravel cognitive processes and the role of response shift bias in the measurement of change.

  15. Method and apparatus for in-situ detection and isolation of aircraft engine faults

    NASA Technical Reports Server (NTRS)

    Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)

    2007-01-01

    A method for performing a fault estimation based on residuals of detected signals includes: determining an operating regime based on a plurality of parameters; extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals; calculating the magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value; extracting an average (mean) direction and a fault level mapping for each of a plurality of fault types, based on the operating regime; calculating the projection of the measurement vector onto the average direction of each of the plurality of fault types; determining the fault type based on which projection is maximum; and mapping the projection to a continuous-valued fault level using a lookup table.
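A minimal sketch of the detection-and-isolation steps, under assumed regime noise levels, fault directions, and level maps; none of these values come from the patent.

```python
import numpy as np

# Illustrative regime table: per-regime residual noise, unit mean fault
# directions, and projection -> fault-level lookup tables (all assumed).
noise_std = {"cruise": np.array([0.5, 0.8, 0.3])}
fault_dirs = {
    "sensor_bias": np.array([1.0, 0.0, 0.0]),
    "efficiency_loss": np.array([0.0, 0.6, 0.8]),
}
level_map = {name: (np.array([0.0, 5.0, 10.0]), np.array([0.0, 0.5, 1.0]))
             for name in fault_dirs}

def isolate(residuals, regime, threshold=3.0):
    z = residuals / noise_std[regime]      # scale by regime noise std
    if np.linalg.norm(z) < threshold:      # magnitude vs decision threshold
        return None, 0.0                   # no fault detected
    proj = {name: float(z @ d) for name, d in fault_dirs.items()}
    fault = max(proj, key=proj.get)        # fault type with maximum projection
    xp, fp = level_map[fault]
    return fault, float(np.interp(proj[fault], xp, fp))   # lookup-table level

fault, level = isolate(np.array([3.0, 0.2, -0.1]), "cruise")
```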

  16. Comparison of Bobath based and movement science based treatment for stroke: a randomised controlled trial.

    PubMed

    van Vliet, P M; Lincoln, N B; Foxall, A

    2005-04-01

    Bobath based (BB) and movement science based (MSB) physiotherapy interventions are widely used for patients after stroke. There is little evidence to suggest which is most effective. This single-blind randomised controlled trial evaluated the effect of these treatments on movement abilities and functional independence. A total of 120 patients admitted to a stroke rehabilitation ward were randomised into two treatment groups to receive either BB or MSB treatment. Primary outcome measures were the Rivermead Motor Assessment and the Motor Assessment Scale. Secondary measures assessed functional independence, walking speed, arm function, muscle tone, and sensation. Measures were performed by a blinded assessor at baseline, and then at 1, 3, and 6 months after baseline. Analysis of serial measurements was performed to compare outcomes between the groups by calculating the area under the curve (AUC) and inserting AUC values into Mann-Whitney U tests. Comparison between groups showed no significant difference for any outcome measures. Significance values for the Rivermead Motor Assessment ranged from p = 0.23 to p = 0.97 and for the Motor Assessment Scale from p = 0.29 to p = 0.87. There were no significant differences in movement abilities or functional independence between patients receiving a BB or an MSB intervention. Therefore the study did not show that one approach was more effective than the other in the treatment of stroke patients.
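The serial-measurement analysis (per-patient trapezoidal AUC, then a Mann-Whitney U comparison of the AUC values between groups) can be sketched as follows; the scores and time points are invented for illustration.

```python
import numpy as np

def auc(times, scores):
    """Trapezoidal area under serial measurements."""
    t, y = np.asarray(times, float), np.asarray(scores, float)
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a vs b (ties count one half)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    gt = (a[:, None] > b[None, :]).sum()
    eq = (a[:, None] == b[None, :]).sum()
    return float(gt + 0.5 * eq)

t = [0, 1, 3, 6]                                        # months after baseline
bb_auc = [auc(t, s) for s in ([20, 25, 30, 33], [15, 18, 22, 25])]
msb_auc = [auc(t, s) for s in ([19, 26, 31, 32], [16, 17, 23, 26])]
u = mann_whitney_u(bb_auc, msb_auc)
```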

  17. Comparison of Bobath based and movement science based treatment for stroke: a randomised controlled trial

    PubMed Central

    van Vliet, P M; Lincoln, N; Foxall, A

    2005-01-01

    Objectives: Bobath based (BB) and movement science based (MSB) physiotherapy interventions are widely used for patients after stroke. There is little evidence to suggest which is most effective. This single-blind randomised controlled trial evaluated the effect of these treatments on movement abilities and functional independence. Methods: A total of 120 patients admitted to a stroke rehabilitation ward were randomised into two treatment groups to receive either BB or MSB treatment. Primary outcome measures were the Rivermead Motor Assessment and the Motor Assessment Scale. Secondary measures assessed functional independence, walking speed, arm function, muscle tone, and sensation. Measures were performed by a blinded assessor at baseline, and then at 1, 3, and 6 months after baseline. Analysis of serial measurements was performed to compare outcomes between the groups by calculating the area under the curve (AUC) and inserting AUC values into Mann-Whitney U tests. Results: Comparison between groups showed no significant difference for any outcome measures. Significance values for the Rivermead Motor Assessment ranged from p = 0.23 to p = 0.97 and for the Motor Assessment Scale from p = 0.29 to p = 0.87. Conclusions: There were no significant differences in movement abilities or functional independence between patients receiving a BB or an MSB intervention. Therefore the study did not show that one approach was more effective than the other in the treatment of stroke patients. PMID:15774435

  18. Infrastructure performance of irrigation canal to irrigation efficiency of irrigation area of Candi Limo in Mojokerto District

    NASA Astrophysics Data System (ADS)

    Kisnanto, S.; Hadiani, R. R. R.; Ikhsan, C.

    2018-03-01

    Performance is a measure of an infrastructure's success in delivering the benefits corresponding to its design. Discharge (debit) efficiency is the ratio of outflow discharge to inflow discharge. Irrigation canal performance is one aspect of the overall performance of an irrigation area: the higher the canal performance, the better the canal is able to deliver its planned benefits, so the relationship between canal performance and discharge efficiency needs to be examined. The problem observed in the field is that the performance value of an irrigation canal is not always proportional to its discharge efficiency. This study was therefore conducted to describe the relationship between canal performance and canal discharge efficiency. The study was carried out at the Candi Limo Irrigation Area in Mojokerto District, under the authority of Pemerintahan Provinsi Jawa Timur. The primary and secondary canals were surveyed to obtain data, and the physical condition of the primary and secondary canals also forms part of the study material. Primary and secondary canal performance was assessed from the physical condition in the field, while inflow and outflow discharge measurements provided the data for calculating discharge efficiency. The instruments used included a current meter for discharge measurement (as a fallback where measurement structures in the field were damaged), a measuring tape, and a camera. Permen PU No. 32 is used to determine the canal performance values, while the efficiency analysis calculates the ratio of outflow to inflow discharge. Data processing consists of measuring and calculating canal performance, calculating canal discharge efficiency, and plotting the relationship between performance and discharge efficiency for each canal. 
The expected results are that the performance values of the primary canal and of secondary canals 1 through 5 each lie in the range 0 to 100%, with discharge efficiency values likewise in the range 0 to 100% for each canal. The relationship between canal performance and discharge efficiency may be directly or inversely proportional, and its magnitude may vary; this tendency is presented as graphs of the relationship between performance and discharge efficiency for each canal segment studied.

  19. Prognostic value of electroencephalography (EEG) for brain injury after cardiopulmonary resuscitation.

    PubMed

    Feng, Guibo; Jiang, Guohui; Li, Zhiwei; Wang, Xuefeng

    2016-06-01

    Cardiac arrest (CA) patients can experience neurological sequelae or even death after successful cardiopulmonary resuscitation (CPR) due to cerebral hypoxia- and ischemia-reperfusion-mediated brain injury. Thus, it is important to perform early prognostic evaluations in CA patients. Electroencephalography (EEG) is an important tool for determining the prognosis of hypoxic-ischemic encephalopathy due to its real-time measurement of brain function. Based on EEG, burst suppression, a burst suppression ratio >0.239, periodic discharges, status epilepticus, stimulus-induced rhythmic, periodic or ictal discharges, non-reactive EEG, and the BIS value based on quantitative EEG may be associated with the prognosis of CA after successful CPR. As measures of neural network integrity, the values of small-world characteristics of the neural network derived from EEG patterns have potential applications.
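A burst suppression ratio of the kind referenced above (with its >0.239 cutoff) can be illustrated on a synthetic trace; the sampling rate, amplitude threshold, and minimum suppression duration are assumptions, not clinical parameters from the review.

```python
import numpy as np

# Synthetic 60 s EEG trace (microvolts): "burst" activity with one
# embedded 20 s suppression episode. All signal parameters are assumed.
fs = 100                                    # Hz (assumed sampling rate)
rng = np.random.default_rng(1)
eeg = rng.normal(0, 30, 60 * fs)            # high-amplitude background
eeg[2000:4000] = rng.normal(0, 2, 2000)     # low-amplitude suppression

def burst_suppression_ratio(x, fs, amp_uv=5.0, min_s=0.5):
    """Fraction of the record in sustained low-amplitude suppression."""
    suppressed = np.abs(x) < amp_uv
    w = int(min_s * fs)                     # require >= min_s of suppression
    frac = np.convolve(suppressed, np.ones(w) / w, mode="same")
    return float(np.mean(frac > 0.9))

bsr = burst_suppression_ratio(eeg, fs)
poor_prognosis_marker = bsr > 0.239         # the cutoff cited in the abstract
```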

  20. Analysis of the Seismic Performance of Isolated Buildings according to Life-Cycle Cost

    PubMed Central

    Dang, Yu; Han, Jian-ping; Li, Yong-tao

    2015-01-01

    This paper proposes an indicator of seismic performance based on life-cycle cost of a building. It is expressed as a ratio of lifetime damage loss to life-cycle cost and determines the seismic performance of isolated buildings. Major factors are considered, including uncertainty in hazard demand and structural capacity, initial costs, and expected loss during earthquakes. Thus, a high indicator value indicates poor building seismic performance. Moreover, random vibration analysis is conducted to measure structural reliability and evaluate the expected loss and life-cycle cost of isolated buildings. The expected loss of an actual, seven-story isolated hospital building is only 37% of that of a fixed-base building. Furthermore, the indicator of the structural seismic performance of the isolated building is much lower in value than that of the structural seismic performance of the fixed-base building. Therefore, isolated buildings are safer and less risky than fixed-base buildings. The indicator based on life-cycle cost assists owners and engineers in making investment decisions in consideration of structural design, construction, and expected loss. It also helps optimize the balance between building reliability and building investment. PMID:25653677
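The proposed indicator, a ratio of expected lifetime damage loss to life-cycle cost, can be sketched with a toy discounted-loss calculation; the costs, annual losses, horizon, and discount rate below are invented, and the paper's random-vibration loss analysis is not reproduced.

```python
# Toy life-cycle-cost indicator: high value = poor seismic performance.
# The isolated building pays a higher initial cost but has a smaller
# expected annual earthquake loss (all figures illustrative).

def lifecycle_indicator(initial_cost, annual_loss, years=50, rate=0.03):
    # present value of expected annual earthquake losses
    pv_loss = sum(annual_loss / (1 + rate) ** t for t in range(1, years + 1))
    life_cycle_cost = initial_cost + pv_loss
    return pv_loss / life_cycle_cost

fixed_base = lifecycle_indicator(initial_cost=100.0, annual_loss=1.0)
isolated = lifecycle_indicator(initial_cost=108.0, annual_loss=0.37)
```

In this toy case the isolated building's indicator comes out well below the fixed-base one, mirroring the qualitative conclusion of the abstract.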

  1. Analysis of the seismic performance of isolated buildings according to life-cycle cost.

    PubMed

    Dang, Yu; Han, Jian-Ping; Li, Yong-Tao

    2015-01-01

    This paper proposes an indicator of seismic performance based on life-cycle cost of a building. It is expressed as a ratio of lifetime damage loss to life-cycle cost and determines the seismic performance of isolated buildings. Major factors are considered, including uncertainty in hazard demand and structural capacity, initial costs, and expected loss during earthquakes. Thus, a high indicator value indicates poor building seismic performance. Moreover, random vibration analysis is conducted to measure structural reliability and evaluate the expected loss and life-cycle cost of isolated buildings. The expected loss of an actual, seven-story isolated hospital building is only 37% of that of a fixed-base building. Furthermore, the indicator of the structural seismic performance of the isolated building is much lower in value than that of the structural seismic performance of the fixed-base building. Therefore, isolated buildings are safer and less risky than fixed-base buildings. The indicator based on life-cycle cost assists owners and engineers in making investment decisions in consideration of structural design, construction, and expected loss. It also helps optimize the balance between building reliability and building investment.

  2. The influence of crystal habit on the prediction of dry powder inhalation formulation performance using the cohesive-adhesive force balance approach.

    PubMed

    Hooton, Jennifer C; Jones, Matthew D; Harris, Haggis; Shur, Jagdeep; Price, Robert

    2008-09-01

    The aim of this investigation was to study the influence of the crystal habit of active pharmaceutical ingredients on the cohesive-adhesive force balance within model dry powder inhaler (DPI) formulations and the corresponding effect on DPI formulation performance. The cohesive-adhesive balance (CAB) approach to colloid probe atomic force microscopy (AFM) was employed to determine the cohesive and adhesive interactions of micronized budesonide particles against the {102} and {002} faces of budesonide single crystals and crystalline substrates of different sugars (cyclodextrin, lactose, trehalose, raffinose, and xylitol), respectively. These data were used to measure the relative level of cohesion and adhesion via CAB and the possible influence on the in vitro performance of a carrier-based DPI formulation. Varying the crystal habit of the drug had a significant effect on the cohesive measurement of micronized budesonide probes, with the cohesive values on the {102} faces being approximately twice those on the {002} crystal faces. However, although different CAB values were measured with the sugars depending on the crystal face chosen for the cohesion-based measurement, the rank order of the CAB values was not directly affected. For these data sets, the CAB gradient indicated that a decrease in the dominance of the adhesive forces led to a concomitant increase in fine particle delivery, reaching a plateau as the cohesive forces became dominant. The study suggested that the crystal habit of the primary drug crystals influences the cohesive interactions and the resulting force balance measurements of colloid probe CAB analysis.

  3. Cut-off of anthropometry measurement and nutritional status among elderly outpatient in Indonesia: multi-centre study.

    PubMed

    Setiati, Siti; Istanti, Rahmi; Andayani, Rejeki; Kuswardhani, R A Tuty; Aryana, I G P Suka; Putu, I Dewa; Apandi, M; Ichwani, Jusri; Soewoto, Sumarmi; Dinda, Rose; Mustika, Syifa

    2010-10-01

    To obtain cut-off values of anthropometric measurements and the nutritional status of the elderly in Indonesia. A multicentre cross-sectional study was performed at 9 hospitals in Indonesia. The data collected comprised sample characteristics, anthropometric measurements (weight; height; triceps, biceps, subscapular, and suprailiac skinfolds; and circumference of the hip, waist, arm, calf, and thigh), albumin value, MNA score, and the Barthel ADL Index. A total of 702 subjects were enrolled. The average serum albumin value was 4.28 g/dl, with 98% of subjects having normal serum albumin (> 3.5 g/dl). The mean MNA score and BMI were 23.07 and 22.54, respectively. Most subjects (56.70%) were at risk of malnutrition based on the MNA score, and 45.01% had normal nutritional status based on body mass index. Average values of several anthropometric measures (weight, stature, and body mass index; subscapular and suprailiac skinfolds; thigh, calf, mid-arm, and waist circumferences) in various age groups were obtained for both women and men. Cut-off values of various anthropometric indicators were also analyzed in this study, with the MNA as the gold standard. This study showed age-related anthropometric measurement differences in both men and women aged 60 years and older.

  4. Documenting Student Learning in Music Performance: A Framework

    ERIC Educational Resources Information Center

    Wesolowski, Brian

    2014-01-01

    A fundamental aim of the Race to the Top agenda is to assess the effectiveness of teachers based on value-added growth measurement models of student achievement. However, in nontested grades and subject areas, such as music, alternative assessment types are being considered, including district-, school-, or teacher-developed measures. This article…

  5. A fuzzy set approach for reliability calculation of valve controlling electric actuators

    NASA Astrophysics Data System (ADS)

    Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.

    2017-02-01

    The oil and gas equipment and electric actuators in particular frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate the ambiguity, reliability model parameters could be defined as fuzzy numbers. We suggest a technique that allows constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuators failure data in accordance with the amount of work, completed before the failure, instead of failure time. Also, this paper provides a computation example of fuzzy-valued reliability and hazard rate functions, assuming Kumaraswamy complementary Weibull geometric distribution as a lifetime (reliability) model for electric actuators.
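An alpha-cut sketch of fuzzy-valued reliability as a function of completed work; for brevity it uses a plain Weibull model with a triangular fuzzy scale parameter rather than the paper's Kumaraswamy complementary Weibull geometric distribution, and all parameter values are illustrative.

```python
import math

def weibull_reliability(w, scale, shape):
    """Reliability after completing an amount of work w (not time)."""
    return math.exp(-((w / scale) ** shape))

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

fuzzy_scale = (800.0, 1000.0, 1300.0)   # fuzzy characteristic work (assumed)
shape = 1.8                             # crisp shape parameter (assumed)
w = 600.0                               # work completed before now

lo_scale, hi_scale = alpha_cut(fuzzy_scale, 0.5)
# Reliability is increasing in the scale parameter, so the alpha-cut of
# R(w) is obtained by evaluating at the interval endpoints.
r_interval = (weibull_reliability(w, lo_scale, shape),
              weibull_reliability(w, hi_scale, shape))
```

Sweeping alpha from 0 to 1 traces out the full fuzzy-valued reliability at each work level, the interval collapsing to a crisp value at alpha = 1.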

  6. Supply chain value creation methodology under BSC approach

    NASA Astrophysics Data System (ADS)

    Golrizgashti, Seyedehfatemeh

    2014-06-01

    The objective of this paper is proposing a developed balanced scorecard approach to measure supply chain performance with the aim of creating more value in manufacturing and business operations. The most important metrics have been selected based on experts' opinion acquired by in-depth interviews focused on creating more value for stakeholders. Using factor analysis method, a survey research has been used to categorize selected metrics into balanced scorecard perspectives. The result identifies the intensity of correlation between perspectives and cause-and-effect chains among them using statistical method based on a real case study in home appliance manufacturing industries.

  7. Estimation of PV energy production based on satellite data

    NASA Astrophysics Data System (ADS)

    Mazurek, G.

    2015-09-01

    Photovoltaic (PV) technology is an attractive source of power for systems without a connection to the power grid. Because of seasonal variations in solar radiation, the design of such a power system requires careful analysis in order to provide the required reliability. In this paper we present the results of three-year measurements of an experimental PV system located in Poland and based on a polycrystalline silicon module. Irradiation values calculated from ground measurements have been compared with data from solar radiation databases derived from satellite observations. Good agreement between the two data sources has been shown, especially during summer. When satellite data from the same time period are available, yearly and monthly production of PV energy can be calculated with 2% and 5% accuracy, respectively. However, monthly production during winter seems to be overestimated, especially in January. The results of this work may be helpful in forecasting the performance of similar PV systems in Central Europe and allow more precise forecasts of PV system performance than those based only on tables of long-term averaged values.
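The energy estimate and dataset comparison can be sketched as follows; the irradiation figures, module area, efficiency, and performance ratio are illustrative assumptions, not the measured Polish data.

```python
# Toy monthly PV energy estimate from irradiation (kWh/m^2), comparing
# ground-measured and satellite-derived inputs. All values are invented.

area_m2, efficiency, perf_ratio = 1.6, 0.15, 0.8

ground_kwh_m2 = {"Jan": 20, "Apr": 110, "Jul": 160, "Oct": 60}     # measured
satellite_kwh_m2 = {"Jan": 24, "Apr": 112, "Jul": 158, "Oct": 62}  # database

def energy_kwh(irr):
    # system losses folded into a single performance ratio
    return {m: h * area_m2 * efficiency * perf_ratio for m, h in irr.items()}

e_ground = energy_kwh(ground_kwh_m2)
e_sat = energy_kwh(satellite_kwh_m2)
errors = {m: 100 * (e_sat[m] - e_ground[m]) / e_ground[m] for m in e_ground}
# winter months (e.g. January) show the largest relative overestimation
```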

  8. Neutron field measurement at the Experimental Advanced Superconducting Tokamak using a Bonner sphere spectrometer

    NASA Astrophysics Data System (ADS)

    Hu, Zhimeng; Zhong, Guoqiang; Ge, Lijian; Du, Tengfei; Peng, Xingyu; Chen, Zhongjing; Xie, Xufei; Yuan, Xi; Zhang, Yimo; Sun, Jiaqi; Fan, Tieshuan; Zhou, Ruijie; Xiao, Min; Li, Kai; Hu, Liqun; Chen, Jun; Zhang, Hui; Gorini, Giuseppe; Nocente, Massimo; Tardocchi, Marco; Li, Xiangqing; Chen, Jinxiang; Zhang, Guohui

    2018-07-01

    The neutron field measurement was performed in the Experimental Advanced Superconducting Tokamak (EAST) experimental hall using a Bonner sphere spectrometer (BSS) based on a 3He thermal neutron counter. The measured spectra and the corresponding integrated neutron fluence and dose values deduced from the spectra at two exposed positions were compared to the calculated results obtained by a general Monte Carlo code MCNP5, and good agreements were found. The applicability of a homemade dose survey meter installed at EAST was also verified with the comparison of the ambient dose equivalent H*(10) values measured by the meter and BSS.

  9. Work Measurement as a Generalized Quantum Measurement

    NASA Astrophysics Data System (ADS)

    Roncaglia, Augusto J.; Cerisola, Federico; Paz, Juan Pablo

    2014-12-01

    We present a new method to measure the work w performed on a driven quantum system and to sample its probability distribution P(w). The method is based on a simple fact that remained unnoticed until now: work on a quantum system can be measured by performing a generalized quantum measurement at a single time. Such a measurement, technically a positive operator valued measure (POVM), reduces to an ordinary projective measurement on an enlarged system. This observation not only demystifies work measurement but also suggests a new quantum algorithm to efficiently sample the distribution P(w). This can be used, in combination with fluctuation theorems, to estimate free energies of quantum states on a quantum computer.
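For intuition, the work statistics that the single-time generalized measurement reproduces can be computed for a driven qubit with the standard two-projective-measurement protocol; the Hamiltonian, drive, and temperature below are illustrative choices, not the paper's examples.

```python
import numpy as np

# Two-point work distribution for a driven qubit: measure energy, apply
# a unitary drive, measure energy again. P(w) collects the transition
# probabilities weighted by the initial thermal populations.

H0 = np.diag([-1.0, 1.0])
E = np.diag(H0)                          # energy eigenvalues (-1, +1)
beta = 1.0
p = np.exp(-beta * E)
p = p / p.sum()                          # thermal occupation probabilities

theta = 0.3 * np.pi                      # drive mixing the two levels
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

dist = {}                                # work value -> probability
for n in range(2):                       # first energy measurement
    for m in range(2):                   # second energy measurement
        w = E[m] - E[n]
        dist[w] = dist.get(w, 0.0) + p[n] * abs(U[m, n]) ** 2

mean_work = sum(w * q for w, q in dist.items())
jarzynski = sum(np.exp(-beta * w) * q for w, q in dist.items())  # = 1, dF = 0
```

Since the initial and final Hamiltonians coincide here, the Jarzynski average equals 1 exactly, the fluctuation-theorem check the abstract alludes to.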

  10. Study of Vis/NIR spectroscopy measurement on acidity of yogurt

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli

    2006-09-01

    A fast measurement of the pH of yogurt using Vis/NIR spectroscopy was established in order to assess the acidity of yogurt rapidly. Twenty-seven samples selected from five different brands of yogurt were measured by Vis/NIR spectroscopy, and the pH of the yogurt at the positions scanned was measured with a pH meter. A mathematical model between pH and the Vis/NIR spectral measurements was established and developed based on partial least squares (PLS) using Unscrambler V9.2. Then 25 unknown samples from the 5 brands were predicted with this model. The results show that the correlation coefficient of the pH PLS model is more than 0.890, with a standard error of calibration (SEC) of 0.037 and a standard error of prediction (SEP) of 0.043. In predicting the pH of the 25 yogurt samples from the 5 brands, the correlation coefficient between predicted and measured values is more than 0.918. These results indicate good to excellent prediction performance. The Vis/NIR spectroscopy technique had significantly greater accuracy for determining pH values. It was concluded that the Vis/NIR spectroscopy measurement technique can be used to measure the pH of yogurt quickly and accurately, establishing a new method for the measurement of yogurt pH.
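A minimal PLS1 (NIPALS) calibration sketch of the kind described above, on synthetic noiseless rank-3 "spectra" standing in for the Vis/NIR measurements (so three latent variables fit exactly); none of the numbers come from the study.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 via NIPALS; returns regression coefficients for centred data."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w = w / np.linalg.norm(w)          # weight vector
        t = X @ w                          # score vector
        tt = t @ t
        p_ = X.T @ t / tt                  # X loading
        q_ = (y @ t) / tt                  # y loading
        X = X - np.outer(t, p_)            # deflate X
        y = y - q_ * t                     # deflate y
        W.append(w); P.append(p_); q.append(q_)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(2)
scores = rng.normal(size=(27, 3))              # 27 samples, 3 latent factors
loadings = rng.normal(size=(3, 60))            # 60 "wavelengths"
X = scores @ loadings                          # noiseless rank-3 spectra
y = scores @ np.array([1.0, -0.5, 0.3]) + 4.2  # pH around a 4.2 baseline

coef = pls1_fit(X, y, n_comp=3)
pred = (X - X.mean(axis=0)) @ coef + y.mean()
r = float(np.corrcoef(pred, y)[0, 1])          # calibration correlation
```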

  11. An Illumination-Adaptive Colorimetric Measurement Using Color Image Sensor

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Lee, Jong-Hyub; Sohng, Kyu-Ik

    An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method to obtain the chromaticity of colored scenes and illumination through a 3×3 camera transfer matrix under a given illuminant. Camera RGB outputs, sensor status values, and the photoelectric characteristic are used to obtain the chromaticity. Experimental results show that the proposed method achieves valid measurement performance.
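Deriving the 3×3 characterization matrix by least squares can be sketched as follows; the training patches and "ground truth" matrix are synthetic, whereas a real calibration would use measured camera RGB and reference XYZ values for a colour chart.

```python
import numpy as np

# Fit M minimizing ||rgb @ M.T - xyz||_F over synthetic calibration
# patches. M_true and the noise level are illustrative assumptions.

rng = np.random.default_rng(3)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])

rgb = rng.uniform(0, 1, size=(24, 3))                 # 24 calibration patches
xyz = rgb @ M_true.T + rng.normal(0, 1e-3, (24, 3))   # "measured" tristimulus

M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # least-squares solution
M = M.T                                         # 3x3 RGB -> XYZ matrix

xyz_pred = rgb @ M.T
rmse = float(np.sqrt(np.mean((xyz_pred - xyz) ** 2)))
```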

  12. Hail detection algorithm for the Global Precipitation Measuring mission core satellite sensors

    NASA Astrophysics Data System (ADS)

    Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic

    2017-04-01

    By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting the hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku-band reflectivity in the mixed-phase layer performs the best out of all considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the polarization-corrected temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination involving the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
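The Critical Success Index used as the verification measure is hits / (hits + misses + false alarms); the contingency counts below are invented to reproduce a CSI near the 45% quoted for the Ku-band proxy.

```python
# CSI from a 2x2 contingency table of detections vs ground truth.
# Correct rejections do not enter the score. Counts are illustrative.

def critical_success_index(hits, misses, false_alarms):
    return hits / (hits + misses + false_alarms)

csi = critical_success_index(hits=450, misses=300, false_alarms=250)
```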

  13. Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25

    ERIC Educational Resources Information Center

    Kane, Michael T.

    2017-01-01

    By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…

  14. The performance of single and multi-collector ICP-MS instruments for fast and reliable 34S/32S isotope ratio measurements†

    PubMed Central

    Pröfrock, Daniel; Irrgeher, Johanna; Prohaska, Thomas

    2016-01-01

    The performance and validation characteristics of different single collector inductively coupled plasma mass spectrometers based on different technical principles (ICP-SFMS, ICP-QMS in reaction and collision modes, and ICP-MS/MS) were evaluated in comparison to the performance of MC ICP-MS for fast and reliable S isotope ratio measurements. The validation included the determination of LOD, BEC, measurement repeatability, within-lab reproducibility and deviation from certified values as well as a study on instrumental isotopic fractionation (IIF) and the calculation of the combined standard measurement uncertainty. Different approaches of correction for IIF applying external intra-elemental IIF correction (aka standard-sample bracketing) using certified S reference materials and internal inter-elemental IIF (aka internal standardization) correction using Si isotope ratios in MC ICP-MS are explained and compared. The resulting combined standard uncertainties of examined ICP-QMS systems were not better than 0.3–0.5% (uc,rel), which is in general insufficient to differentiate natural S isotope variations. Although the performance of the single collector ICP-SFMS is better (single measurement uc,rel = 0.08%), the measurement reproducibility (>0.2%) is the major limit of this system and leaves room for improvement. MC ICP-MS operated in the edge mass resolution mode, applying bracketing for correction of IIF, provided isotope ratio values with the highest quality (relative combined measurement uncertainty: 0.02%; deviation from the certified value: <0.002%). PMID:27812369

  15. Brachial artery stiffness estimation using ARTSENS.

    PubMed

    Kiran, V Raj; Nabeel, P M; Joseph, Jayaraj; Sivaprakasam, Mohanasankar

    2017-07-01

    Stiffening of central and peripheral arteries prominently affects hemodynamics, increasing the risk of coronary heart disease, chronic kidney disease, and end-stage renal disease. Several commercially available non-invasive technologies evaluate arterial stiffness, but they are expensive, demand dedicated expertise, and fall short for mass screening. Considering this, we have developed ARTSENS®, a highly compact and portable image-free ultrasound device for the evaluation of arterial stiffness. The capability of the device to perform accurate measurements of carotid artery stiffness has been validated through extensive in-vivo studies. In this paper we demonstrate the feasibility of using ARTSENS® for measuring brachial artery stiffness. An inter-operator repeatability study was conducted based on in-vivo experiments on 9 young healthy subjects. The study included measurement of distension, end-diastolic lumen diameter, arterial compliance, and stiffness index, performed on both the carotid artery and the brachial artery by two operators in succession. The degree of agreement between the measurements made by the operators was investigated using Bland-Altman plots and paired t-tests. The measurements fell within the limits of agreement. No statistically significant difference (p-values from the paired t-test for end-diastolic diameter, distension, stiffness index, and arterial compliance were 0.36, 0.24, 0.47, and 0.11, respectively) was seen for the brachial artery measurements performed by the two operators. The correlation between the measurements made by the operators was highly significant (r = 0.86, p-value = 0.003).
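The agreement analysis (Bland-Altman bias and 95% limits of agreement, plus a paired t statistic) can be sketched with invented operator readings; the values below are illustrative, not the study's measurements.

```python
import math

# Bland-Altman agreement between two operators on n = 9 subjects.
# op1/op2 are hypothetical end-diastolic diameter readings in mm.

op1 = [5.1, 4.8, 5.5, 4.9, 5.2, 5.0, 4.7, 5.3, 5.1]
op2 = [5.0, 4.9, 5.4, 5.0, 5.1, 5.0, 4.8, 5.2, 5.2]

d = [a - b for a, b in zip(op1, op2)]           # paired differences
n = len(d)
bias = sum(d) / n                               # mean difference
sd = math.sqrt(sum((x - bias) ** 2 for x in d) / (n - 1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)      # 95% limits of agreement
t_paired = bias / (sd / math.sqrt(n))           # compare to t(n-1) critical value
```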

  16. Bio-electrical impedance spectroscopy: alternatives for the conventional hand-to-foot measurements.

    PubMed

    Cox-Reijven, P L M; Van Kreel, B; Soeters, P B

    2002-04-01

    Bio-impedance spectroscopy (BIS) is a very attractive method for measuring body composition. The standard method measures impedance from hand to foot. However, in patients a hand or foot is not always accessible, and in these cases alternative methods would be helpful. The objective of this study was to compare BIS measurements from hand to foot (HF) with foot to foot (FF) and hand to hand (HH) measurements as alternatives. The aims were, firstly, to assess the relationship between resistance (R) values measured by the different methods; secondly, to study the influence of body geometry on this relationship; and lastly, to assess the predictive capacity of the methods for measuring body fluid volumes. In 53 subjects with different degrees of obesity (mean BMI = 38; SD = 9 kg/m(2)) three BIS measurements were performed from HF, HH and FF with a Xitron 4000B machine. Resistances of extracellular (Recw) and intracellular water (Ricw) were extrapolated by fitting the data to a Cole-Cole plot. Total body water (TBW) and extracellular water (ECW) were measured by deuterium and bromide dilution, respectively. Intracellular water (ICW) was calculated as TBW-ECW. Anthropometric measurements, including length and circumference of limbs and trunk, were performed as measures of body geometry. The Recw, Ricw and R50 values of HF measurements could be accurately described as a function of the Recw, Ricw and R50 values of HH or FF measurements. The relative circumference of arms and legs and the length of the trunk influenced the relationship between R values of the three different measurements. The degree of overweight did not affect this relationship. The precision of the predictions of TBW, ECW and ICW based on R values of the HH measurements was comparable with that of the traditional HF measurements, while the FF measurements gave slightly less accurate results. Under circumstances where total body BIS measurements cannot be performed, FF or HH measurements may be used as alternatives. 
However, for clinical use the effect of changes in fluid distribution on the accuracy of these methods needs to be studied further. Copyright 2002 Elsevier Science Ltd. All rights reserved.
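
    The quantities named in this abstract are related by the Cole model: at zero frequency current flows only through extracellular water (so R0 = Recw), while at infinite frequency the extracellular and intracellular paths conduct in parallel. A hedged sketch of those relationships, with illustrative numbers rather than study data:

```python
# Minimal sketch of the Cole-model resistance relationships used in BIS.
# The study fitted full impedance spectra to a Cole-Cole plot; here we
# just show how Recw and Ricw follow from the limiting resistances.

def compartment_resistances(r0, rinf):
    """R0 (zero-frequency) equals Recw; Rinf (infinite-frequency) is the
    parallel combination Rinf = Recw*Ricw / (Recw + Ricw), hence
    Ricw = R0*Rinf / (R0 - Rinf)."""
    recw = r0
    ricw = (r0 * rinf) / (r0 - rinf)
    return recw, ricw

recw, ricw = compartment_resistances(r0=600.0, rinf=400.0)  # ohms, illustrative

# Body water compartments combine the two dilution measurements, as in
# the study: ICW = TBW - ECW (litres, illustrative values)
tbw, ecw = 42.0, 17.0
icw = tbw - ecw
```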

  17. Public magnetic field exposure based on internal current density for electric low voltage systems.

    PubMed

    Keikko, Tommi; Seesvuori, Reino; Hyvönen, Martti; Valkealahti, Seppo

    2009-04-01

    A measurement concept utilizing a new magnetic field exposure metering system has been developed for indoor substations where voltage is transformed from a medium voltage of 10 or 20 kV to a low voltage of 400 V. The new metering system follows the guidelines published by the International Commission on Non-Ionizing Radiation Protection. It can be used to measure magnetic field values, total harmonic distortion of the magnetic field, magnetic field exposure ratios for the public and workers, load current values, and total harmonic distortion of the load current. This paper demonstrates how exposure to non-sinusoidal magnetic fields and magnetic flux density exposure values can be compared directly with limit values for internal current densities in the human body. Further, we present how the magnetic field and magnetic field exposure behave in the vicinity of magnetic field sources within the indoor substation and in the neighborhood. Measured magnetic fields around the substation components were used to develop a measurement concept by which long-term measurements in the substations were performed. The long-term measurements revealed interesting and partly unexpected dependencies between the measured quantities, which have been further analyzed. The principle of this paper is to replace a demanding exposure measurement with measurements of basic quantities, such as the 50 Hz fundamental magnetic field component, which can be estimated from the load currents for certain classes of substation layout.
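
    For non-sinusoidal fields, ICNIRP-style compliance is commonly evaluated with a multiple-frequency summation rule: the ratio of the measured field to the frequency-dependent reference level is summed over the spectral components, and the sum must not exceed 1. A hedged sketch; the reference-level numbers below are placeholders, not values quoted from ICNIRP or from this paper.

```python
# Illustrative multiple-frequency exposure-ratio summation. The
# reference levels are placeholders; consult the applicable ICNIRP
# guideline tables for real limits.

def exposure_ratio(harmonics):
    """harmonics: list of (measured B, reference level) pairs in the same
    unit (e.g. microtesla), one per spectral component such as the 50 Hz
    fundamental and its harmonics. Returns the summed exposure ratio."""
    return sum(b / b_limit for b, b_limit in harmonics)

ratio = exposure_ratio([
    (40.0, 100.0),  # 50 Hz fundamental vs an illustrative public limit
    (5.0, 100.0),   # 150 Hz harmonic
    (2.0, 100.0),   # 250 Hz harmonic
])
compliant = ratio <= 1.0
```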

  18. Current measurement by Faraday effect on GEPOPU

    NASA Astrophysics Data System (ADS)

    Correa, N.; Chuaqui, H.; Wyndham, E.; Veloso, F.; Valenzuela, J.; Favre, M.; Bhuyan, H.

    2014-05-01

    The design and calibration of an optical current sensor using BK7 glass is presented. The current sensor is based on polarization rotation by the Faraday effect. GEPOPU is a pulsed power generator (120 ns double transit time, 1.5 Ohm impedance, coaxial geometry) on which Z-pinch experiments are performed. The measurements were performed at the Optics and Plasma Physics Laboratory of Pontificia Universidad Catolica de Chile. The Verdet constant for two different optical materials was obtained using a He-Ne laser. The values obtained are within the experimental error bars of measurements published in the literature (less than 15% difference). Two different sensor geometries were tried, and we present preliminary results for one of them. The values obtained for the current agree, within the measurement error, with those obtained by means of a Spice simulation of the generator. The signal traces obtained are completely noise-free.
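
    The underlying Faraday-effect relation is simple: the polarization rotation angle is theta = V * B * L, where V is the Verdet constant of the glass, B the magnetic flux density along the beam, and L the optical path length; inverting it recovers B (and hence the current) from the measured rotation. The numbers below are only examples, not this sensor's calibration.

```python
# Illustrative Faraday rotation relation: theta = V * B * L.
# The Verdet constant and field values are example figures, not the
# calibrated parameters of the GEPOPU sensor.

def rotation_angle(verdet_rad_per_T_m, b_tesla, path_m):
    """Polarization rotation (radians) for light traversing path_m of
    material with the given Verdet constant in field b_tesla."""
    return verdet_rad_per_T_m * b_tesla * path_m

theta = rotation_angle(verdet_rad_per_T_m=4.0, b_tesla=0.5, path_m=0.05)

# Inverting the relation recovers the field from the measured rotation;
# the current then follows from the geometry linking B to I.
b_recovered = theta / (4.0 * 0.05)
```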

  19. Reference values of fractional excretion of exhaled nitric oxide among non-smokers and current smokers.

    PubMed

    Torén, Kjell; Murgia, Nicola; Schiöler, Linus; Bake, Björn; Olin, Anna-Carin

    2017-08-25

    Fractional exhaled nitric oxide (FENO) is used to assess airway inflammation, diagnose asthma, and monitor adherence to prescribed therapy. Reliable and accurate reference values for FENO are needed for both non-smoking and currently smoking adults in the clinical setting. The present study was performed to establish adult FENO reference values among never-smokers, former smokers and current smokers. FENO was measured in 5265 subjects aged 25-75 years in a general-population study, using a chemiluminescence (NIOX™) analyser according to the guidelines of the American Thoracic Society and the European Respiratory Society. Atopy was defined by the presence of immunoglobulin E (IgE) antibodies to common inhalant allergens (measured using the Phadiatop® test). Spirometry without bronchodilation was performed and forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1) and the ratio of FEV1 to FVC were obtained. After excluding subjects with asthma, chronic bronchitis, spirometric airway obstruction or a current cold, 3378 subjects remained. Equations for predicting FENO values were modelled using nonparametric regression models. FENO levels were similar in never-smokers and former smokers, and these two groups were therefore merged into a group termed "non-smokers". Reference equations, including the 5th and 95th percentiles, were generated for female and male non-smokers, based on age, height and atopy. Regression models for current smokers were unstable. Hence, the proposed reference values for current smokers are based on the univariate distribution of FENO and fixed cut-off limits. Reference values for FENO among respiratory-healthy non-smokers should be stratified by gender and based on individual reference values. For current smokers, separate cut-off limits are proposed.

  20. CCQM-K90, formaldehyde in nitrogen, 2 μmol mol-1 Final report

    NASA Astrophysics Data System (ADS)

    Viallon, Joëlle; Flores, Edgar; Idrees, Faraz; Moussay, Philippe; Wielgosz, Robert Ian; Kim, D.; Kim, Y. D.; Lee, S.; Persijn, S.; Konopelko, L. A.; Kustikov, Y. A.; Malginov, A. V.; Chubchenko, I. K.; Klimov, A. Y.; Efremova, O. V.; Zhou, Z.; Possolo, A.; Shimosaka, T.; Brewer, P.; Macé, T.; Ferracci, Valerio; Brown, Richard J. C.; Aoki, Nobuyuki

    2017-01-01

    The CCQM-K90 comparison is designed to evaluate the level of comparability of national metrology institute (NMI) or designated institute (DI) measurement capabilities for formaldehyde in nitrogen at a nominal mole fraction of 2 μmol mol-1. The comparison was organised by the BIPM using a suite of gas mixtures prepared by a producer of specialty calibration gases. The BIPM assigned the formaldehyde mole fraction in the mixtures by comparison with primary mixtures generated dynamically by permeation coupled with continuous weighing in a magnetic suspension balance. The BIPM developed two dynamic sources of formaldehyde in nitrogen that provide two independent values of the formaldehyde mole fraction: the first based on diffusion of trioxane followed by thermal conversion to formaldehyde, the second based on permeation of formaldehyde from paraformaldehyde contained in a permeation tube. Two independent analytical methods, based on cavity ring-down spectroscopy (CRDS) and Fourier transform infrared spectroscopy (FTIR), were used for the assignment procedure. Each participating institute was provided with one transfer standard and assigned a value to the formaldehyde mole fraction in the standard based on its own measurement capabilities. The stability of the formaldehyde mole fraction in the transfer standards was deduced from repeated measurements performed at the BIPM before and after the measurements performed at participating institutes. In addition, five control standards were kept at the BIPM for regular measurements during the course of the comparison. Temporal trends that approximately describe the linear decrease of the amount-of-substance fraction of formaldehyde in nitrogen in the transfer standards over time were estimated by two different mathematical treatments, the outcomes of which were proposed to participants. 
The two treatments also differed in the way measurement uncertainties arising from measurements performed at the BIPM were propagated to the uncertainty of the trend parameters, as well as in how the dispersion of the dates when measurements were made by the participants was taken into account. Upon decision of the participants, the Key Comparison Reference Values were assigned by the BIPM using the largest uncertainty for measurements performed at the BIPM, unweighted linear regression to calculate the trend parameters, and no allowance for the dispersion of the dates of measurements made by the participants. Each transfer standard was assigned its own reference value and associated expanded uncertainty. An expression for the degree of equivalence between each participating institute and the KCRV was calculated from the comparison results and measurement uncertainties submitted by participating laboratories. Results of the alternative mathematical treatment are presented in an annex to this report. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
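
    The chosen treatment models each transfer standard's slow loss of formaldehyde as an unweighted straight-line trend over time. That is ordinary least squares, which can be sketched directly; the data below are synthetic, not comparison results.

```python
# Minimal sketch of the unweighted linear regression used to model the
# decrease of formaldehyde mole fraction in a transfer standard over
# time. The measurement values are synthetic and for illustration only.

def fit_line(ts, ys):
    """Ordinary (unweighted) least squares; returns (intercept, slope)."""
    n = len(ts)
    tm = sum(ts) / n
    ym = sum(ys) / n
    num = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
    den = sum((t - tm) ** 2 for t in ts)
    slope = num / den
    return ym - slope * tm, slope

# mole fraction (umol/mol) measured on days 0, 30, ..., 120 (synthetic)
days = [0, 30, 60, 90, 120]
x_ch2o = [2.000, 1.994, 1.988, 1.982, 1.976]
intercept, slope = fit_line(days, x_ch2o)

# the fitted trend predicts the mole fraction on the day a participant
# measured the standard, e.g. day 75
x_at_day_75 = intercept + slope * 75
```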

  1. [Quantitative evaluation of Gd-EOB-DTPA uptake in phantom study for liver MRI].

    PubMed

    Hayashi, Norio; Miyati, Tosiaki; Koda, Wataru; Suzuki, Masayuki; Sanada, Shigeru; Ohno, Naoki; Hamaguchi, Takashi; Matsuura, Yukihiro; Kawahara, Kazuhiro; Yamamoto, Tomoyuki; Matsui, Osamu

    2010-05-20

    Gd-EOB-DTPA is a new liver-specific MRI contrast medium. In the hepatobiliary phase, the contrast medium is taken up by normal liver tissue, so normal liver shows high intensity, tumor/liver contrast becomes high, and diagnostic ability improves. To indicate the degree of uptake of the contrast medium, the enhancement ratio (ER) is calculated as (signal intensity (SI) after injection - SI before injection) / SI before injection. However, because there is no linearity between contrast medium concentration and SI, the ER is not correctly estimated by this method. We discuss a method of measuring the ER based on SI and T(1) values using a phantom. We used a column phantom, with an internal diameter of 3 cm, filled with diluted Gd-EOB-DTPA solution. Measurement of the T(1) value by the IR method was also performed. The ER measuring method consists of the following three components: 1) measurement of ER based on differences in 1/T(1) values using the variable flip angle (FA) method, 2) measurement of differences in SI, and 3) measurement of differences in 1/T(1) values using the IR method. ER values calculated by these three methods were compared. In measurements made using the variable FA method and the IR method, linearity was found between contrast medium concentration and ER. On the other hand, linearity was not found between contrast medium concentration and SI. For calculation of the ER using Gd-EOB-DTPA, a more correct ER is obtained by measuring the T(1) value using the variable FA method.
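
    The two enhancement-ratio definitions contrasted in this abstract differ only in the quantity whose relative change is taken: signal intensity (SI), which is not linear in concentration, versus the relaxation rate R1 = 1/T1, which is. A sketch with illustrative numbers (not the phantom data):

```python
# Sketch of the two enhancement-ratio (ER) definitions discussed:
# one from signal intensity and one from the relaxation rate R1 = 1/T1.
# All numbers are illustrative, not measurements from the phantom study.

def er_from_si(si_pre, si_post):
    """ER from signal intensity: (SI_post - SI_pre) / SI_pre."""
    return (si_post - si_pre) / si_pre

def er_from_t1(t1_pre, t1_post):
    """ER from relaxation rates, which (unlike SI) are linear in
    contrast-medium concentration: (R1_post - R1_pre) / R1_pre."""
    r1_pre, r1_post = 1.0 / t1_pre, 1.0 / t1_post
    return (r1_post - r1_pre) / r1_pre

er_si = er_from_si(si_pre=100.0, si_post=180.0)
er_r1 = er_from_t1(t1_pre=800.0, t1_post=400.0)  # T1 in ms, halved by uptake
```

Because SI saturates at high concentration, `er_si` underestimates uptake there, which is why the study favours the T1-based (variable FA) estimate.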

  2. Evaluating Prospective Teachers: Testing the Predictive Validity of the EdTPA

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Cowan, James; Theobald, Roddy

    2017-01-01

    We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…

  3. Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA. Working Paper 157

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Cowan, James; Theobald, Roddy

    2016-01-01

    We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…

  4. FPGA-Based X-Ray Detection and Measurement for an X-Ray Polarimeter

    NASA Technical Reports Server (NTRS)

    Gregory, Kyle; Hill, Joanne; Black, Kevin; Baumgartner, Wayne

    2013-01-01

    This technology enables detection and measurement of x-rays in an x-ray polarimeter using a field-programmable gate array (FPGA). The technology was developed for the Gravitational and Extreme Magnetism Small Explorer (GEMS) mission. It performs precision energy and timing measurements, as well as rejection of non-x-ray events. It enables the GEMS polarimeter to detect precisely when an event has taken place so that additional measurements can be made. The technology also enables this function to be performed in an FPGA using limited resources so that mass and power can be minimized while reliability for a space application is maximized and precise real-time operation is achieved. This design requires a low-noise, charge-sensitive preamplifier; a high-speed analog-to-digital converter (ADC); and an x-ray detector with a cathode terminal. It functions by computing a sum of differences for time-samples whose difference exceeds a programmable threshold. A state machine advances through states as a programmable number of consecutive samples exceeds or fails to exceed this threshold. The pulse height is recorded as the accumulated sum. The track length is also measured, based on the time from the start to the end of accumulation. For track lengths longer than a certain length, the algorithm estimates the barycenter of the charge deposit by comparing the accumulator value at the midpoint to the final accumulator value. The design also employs a number of techniques for rejecting background events. This innovation enables the function to be performed in space, where it can operate autonomously with a rapid response time. This implementation combines the advantages of computing-system-based approaches with those of pure analog approaches. The result is an implementation that is highly reliable, performs in real time, rejects background events, and consumes minimal power.
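
    The detection scheme described (threshold on sample-to-sample differences, a counter of consecutive quiet samples acting as the state machine, pulse height recorded as the accumulated sum) can be illustrated in software. This is a hedged reconstruction for illustration only, not the GEMS firmware; the threshold, counts, and trace are synthetic.

```python
# Software sketch of the sum-of-differences event detector described in
# the abstract. Parameters and the sample trace are hypothetical.

def detect_pulse(samples, threshold, stop_count=2):
    """Accumulate sample-to-sample differences that exceed `threshold`.
    Accumulation starts at the first supra-threshold difference and ends
    after `stop_count` consecutive sub-threshold differences.
    Returns (pulse height, start index, end index)."""
    acc = 0
    below = 0          # consecutive sub-threshold differences seen
    start = end = None
    for i in range(1, len(samples)):
        diff = samples[i] - samples[i - 1]
        if diff > threshold:
            if start is None:
                start = i - 1          # pulse begins
            acc += diff                # sum of supra-threshold differences
            below = 0
        elif start is not None:
            below += 1
            if below >= stop_count:    # enough quiet samples: pulse over
                end = i - stop_count
                break
    return acc, start, end

# a synthetic ramp pulse on a flat baseline
trace = [0, 0, 0, 5, 12, 20, 30, 31, 31, 31, 31]
height, start, end = detect_pulse(trace, threshold=3)
```

On this trace the accumulated sum (30) approximates the true step height (31), and `end - start` gives the track length in samples; a midpoint-versus-final accumulator comparison, as in the abstract, would estimate the charge barycenter.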

  5. Clinical tooth preparations and associated measuring methods: a systematic review.

    PubMed

    Tiu, Janine; Al-Amleh, Basil; Waddell, J Neil; Duncan, Warwick J

    2015-03-01

    The geometries of tooth preparations are important features that aid in the retention and resistance of cemented complete crowns. The clinically relevant values and the methods used to measure these are not clear. The purpose of this systematic review was to retrieve, organize, and critically appraise studies measuring clinical tooth preparation parameters, specifically the methodology used to measure the preparation geometry. A database search was performed in Scopus, PubMed, and ScienceDirect with an additional hand search on December 5, 2013. The articles were screened for inclusion and exclusion criteria and information regarding the total occlusal convergence (TOC) angle, margin design, and associated measuring methods were extracted. The values and associated measuring methods were tabulated. A total of 1006 publications were initially retrieved. After removing duplicates and filtering by using exclusion and inclusion criteria, 983 articles were excluded. Twenty-three articles reported clinical tooth preparation values. Twenty articles reported the TOC, 4 articles reported margin designs, 4 articles reported margin angles, and 3 articles reported the abutment height of preparations. A variety of methods were used to measure these parameters. TOC values seem to be the most important preparation parameter. Recommended TOC values have increased over the past 4 decades from an unachievable 2- to 5-degree taper to a more realistic 10 to 22 degrees. Recommended values are more likely to be achieved under experimental conditions if crown preparations are performed outside of the mouth. We recommend that a standardized measurement method based on the cross sections of crown preparations and standardized reporting be developed for future studies analyzing preparation geometry. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  6. An efficient diagnosis system for Parkinson's disease using kernel-based extreme learning machine with subtractive clustering features weighting approach.

    PubMed

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua

    2014-01-01

    A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting (SCFW) and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of PD. In the proposed method, SCFW is used as a data preprocessing tool that aims to decrease the variance in features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated on the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistic. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far via a 10-fold cross-validation scheme, with a classification accuracy of 99.49%, a sensitivity of 100%, a specificity of 99.39%, an AUC of 99.69%, an f-measure of 0.9964, and a kappa value of 0.9867. Promisingly, the proposed method might serve as a new candidate among powerful methods for the diagnosis of PD.

  7. An Efficient Diagnosis System for Parkinson's Disease Using Kernel-Based Extreme Learning Machine with Subtractive Clustering Features Weighting Approach

    PubMed Central

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua

    2014-01-01

    A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting (SCFW) and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of PD. In the proposed method, SCFW is used as a data preprocessing tool that aims to decrease the variance in features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated on the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistic. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far via a 10-fold cross-validation scheme, with a classification accuracy of 99.49%, a sensitivity of 100%, a specificity of 99.39%, an AUC of 99.69%, an f-measure of 0.9964, and a kappa value of 0.9867. Promisingly, the proposed method might serve as a new candidate among powerful methods for the diagnosis of PD. PMID:25484912

  8. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference-case cost-utility analysis of epiretinal membrane (ERM) surgery using the current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in the better-seeing eye and ERM surgery in the worse-seeing eye. The models applied long-term published data, primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye, compared with observation, resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, varying utility values gave $6,245 to $3,746/QALY gained, varying medical costs gave $3,510 to $5,850/QALY gained, and an increased ERM recurrence rate raised the figure to $5,524/QALY. ERM surgery in the worse-seeing eye, compared with observation, resulted in a mean gain of 0.27 discounted QALYs per patient treated. The cost per QALY was $16,146, with a range of $20,183 to $12,110 based on sensitivity analyses. Varying utility values gave $21,520 to $12,916/QALY, and an increased ERM recurrence rate raised the figure to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
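
    The headline figure in a cost-utility analysis is simple arithmetic: dollars per QALY is the discounted incremental cost divided by the discounted QALYs gained. A hedged back-of-the-envelope check; the cost figure below is illustrative, chosen so the quotient reproduces the reported $4,680/QALY for the better-seeing eye, and is not a number stated in the study.

```python
# Back-of-the-envelope cost-utility arithmetic. The incremental cost is
# an assumed, illustrative value; only the QALY gain and the resulting
# $/QALY match figures reported in the abstract.

def cost_per_qaly(incremental_cost, qalys_gained):
    """Dollars spent per quality-adjusted life-year gained."""
    return incremental_cost / qalys_gained

dollars_per_qaly = cost_per_qaly(incremental_cost=3533.40, qalys_gained=0.755)
```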

  9. Application of OMI NO2 for Regional Air Quality Model Evaluation

    NASA Astrophysics Data System (ADS)

    Holloway, T.; Bickford, E.; Oberman, J.; Scotty, E.; Clifton, O. E.

    2012-12-01

    To support the application of satellite data for air quality analysis, we examine how column NO2 measurements from the Ozone Monitoring Instrument (OMI) aboard the NASA Aura satellite relate to ground-based and model estimates of NO2 and related species. Daily variability, monthly mean values, and spatial gradients in OMI NO2 from the Royal Netherlands Meteorological Institute (KNMI) are compared to ground-based measurements of NO2 from the EPA Air Quality System (AQS) database. Satellite data are gridded to two resolutions typical of regional air quality models - 36 km x 36 km over the continental U.S., and 12 km x 12 km over the Upper Midwestern U.S. Gridding is performed using the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS), publicly available software for gridding satellite data to model grids. Comparing daily OMI retrievals (13:45 local daytime overpass) with ground-based measurements (13:00), we find January and July 2007 correlation coefficients (r-values) are generally positive, with higher values in winter (January) than summer (July) for most sites. Incidences of anti-correlation or low correlation are evaluated with model simulations from the U.S. EPA Community Multiscale Air Quality Model version 4.7 (CMAQ). OMI NO2 is also used to evaluate CMAQ output, and to compare performance metrics for CMAQ relative to AQS measurements. We compare simulated NO2 across both the U.S. and Midwest study domains with both OMI NO2 (total-column CMAQ values, weighted with the averaging kernel) and ground-based observations (lowest-model-layer CMAQ values). The 2007 CMAQ simulations employ emissions from the Lake Michigan Air Directors Consortium (LADCO) and meteorology from the Weather Research and Forecasting (WRF) model. Over most of the U.S., CMAQ is too high in January relative to OMI NO2, but too low in January relative to AQS NO2. In contrast, CMAQ is too low in July relative to OMI NO2, but too high relative to AQS NO2. 
These biases are used to evaluate emission sources (and the importance of missing sources, such as lightning NOx), and to explain model performance for related secondary species, especially nitrate aerosol and ozone.

  10. The imperative of culture: a quantitative analysis of the impact of culture on workforce engagement, patient experience, physician engagement, value-based purchasing, and turnover.

    PubMed

    Owens, Katie; Eggers, Jim; Keller, Stephanie; McDonald, Audrey

    2017-01-01

    Current uncertainty for the future of the health care landscape is placing an increasing amount of pressure on leadership teams to be prepared to steer their organization forward in a number of potential directions. It is commonly recognized among health care leaders that culture will either enable or disable organizational success. However, very few studies empirically link culture to health care-specific performance outcomes. Nearly every health care organization in the US specifies its cultural aspirations through mission and vision statements and values. Ambitions of patient-centeredness, care for the community, workplace of choice, and world-class quality are frequently cited; yet, little definitive research exists to quantify the importance of building high-performing cultures. Our study examined the impact of cultural attributes defined by a culture index (Cronbach's alpha = 0.88) on corresponding performance with key health care measures. We mapped results of the culture index across data sets, compared results, and evaluated variations in performance among key indicators for leaders. Organizations that perform in the top quartile for our culture index statistically significantly outperformed those in the bottom quartile on all but one key performance indicator tested. The culture top quartile organizations outperformed every domain for employee engagement, physician engagement, patient experience, and overall value-based purchasing performance with statistical significance. Culture index top quartile performers also had a 3.4% lower turnover rate than the bottom quartile performers. Finally, culture index top quartile performers earned an additional 1% on value-based purchasing. Our findings demonstrate a meaningful connection between performance in the culture index and organizational performance. 
To best impact these key performance outcomes, health care leaders should pay attention to culture and actively steer workforce engagement in attributes that represent the culture index, such as treating patients as valued customers, having congruency between employee and organizational values, promoting employee pride, and encouraging the feeling that being a member of the organization is rewarding, in order to leverage culture as a competitive advantage.

  11. The imperative of culture: a quantitative analysis of the impact of culture on workforce engagement, patient experience, physician engagement, value-based purchasing, and turnover

    PubMed Central

    Owens, Katie; Eggers, Jim; Keller, Stephanie; McDonald, Audrey

    2017-01-01

    Current uncertainty for the future of the health care landscape is placing an increasing amount of pressure on leadership teams to be prepared to steer their organization forward in a number of potential directions. It is commonly recognized among health care leaders that culture will either enable or disable organizational success. However, very few studies empirically link culture to health care-specific performance outcomes. Nearly every health care organization in the US specifies its cultural aspirations through mission and vision statements and values. Ambitions of patient-centeredness, care for the community, workplace of choice, and world-class quality are frequently cited; yet, little definitive research exists to quantify the importance of building high-performing cultures. Our study examined the impact of cultural attributes defined by a culture index (Cronbach’s alpha = 0.88) on corresponding performance with key health care measures. We mapped results of the culture index across data sets, compared results, and evaluated variations in performance among key indicators for leaders. Organizations that perform in the top quartile for our culture index statistically significantly outperformed those in the bottom quartile on all but one key performance indicator tested. The culture top quartile organizations outperformed every domain for employee engagement, physician engagement, patient experience, and overall value-based purchasing performance with statistical significance. Culture index top quartile performers also had a 3.4% lower turnover rate than the bottom quartile performers. Finally, culture index top quartile performers earned an additional 1% on value-based purchasing. Our findings demonstrate a meaningful connection between performance in the culture index and organizational performance. 
To best impact these key performance outcomes, health care leaders should pay attention to culture and actively steer workforce engagement in attributes that represent the culture index, such as treating patients as valued customers, having congruency between employee and organizational values, promoting employee pride, and encouraging the feeling that being a member of the organization is rewarding, in order to leverage culture as a competitive advantage. PMID:29355220

  12. Guiding Principles and Checklist for Population-Based Quality Metrics

    PubMed Central

    Brunelli, Steven M.; Maddux, Franklin W.; Parker, Thomas F.; Johnson, Douglas; Nissenson, Allen R.; Collins, Allan; Lacson, Eduardo

    2014-01-01

    The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, the Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustment necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures (quality metrics) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance the quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers. 
Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and most importantly, reflect what represents the best care to improve patient outcomes. PMID:24558050

  13. Measuring Provider Performance for Physicians Participating in the Merit-Based Incentive Payment System.

    PubMed

    Squitieri, Lee; Chung, Kevin C

    2017-07-01

    In 2017, the Centers for Medicare and Medicaid Services began requiring all eligible providers to participate in the Quality Payment Program or face financial reimbursement penalty. The Quality Payment Program outlines two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. For the first performance period beginning in January of 2017, the Centers for Medicare and Medicaid Services estimates that approximately 83 to 90 percent of eligible providers will not qualify for participation in an Advanced Alternative Payment Model and therefore must participate in the Merit-Based Incentive Payment System program. The Merit-Based Incentive Payment System path replaces existing quality-reporting programs and adds several new measures to evaluate providers using four categories of data: (1) quality, (2) cost/resource use, (3) improvement activities, and (4) advancing care information. These categories will be combined to calculate a weighted composite score for each provider or provider group. Composite Merit-Based Incentive Payment System scores based on 2017 performance data will be used to adjust reimbursed payment in 2019. In this article, the authors provide relevant background for understanding value-based provider performance measurement. The authors also discuss Merit-Based Incentive Payment System reporting requirements and scoring methodology to provide plastic surgeons with the necessary information to critically evaluate their own practice capabilities in the context of current performance metrics under the Quality Payment Program.
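
The weighted-composite idea described above can be sketched in a few lines. Note the category weights below are the commonly cited 2017 transition-year values (quality 60%, cost 0%, improvement activities 15%, advancing care information 25%) and should be treated as an illustrative assumption, since CMS adjusts weights across performance periods:

```python
# Illustrative sketch of combining MIPS category scores into a weighted
# composite. The weights are the 2017 transition-year values (an assumption
# for illustration; CMS revises them in later performance periods).

CATEGORY_WEIGHTS_2017 = {
    "quality": 0.60,
    "cost": 0.00,
    "improvement_activities": 0.15,
    "advancing_care_information": 0.25,
}

def composite_score(category_scores, weights=CATEGORY_WEIGHTS_2017):
    """Weighted composite on a 0-100 scale from per-category scores (0-100)."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("category weights must sum to 1")
    return sum(weights[c] * category_scores[c] for c in weights)

if __name__ == "__main__":
    scores = {"quality": 80.0, "cost": 0.0,
              "improvement_activities": 100.0,
              "advancing_care_information": 90.0}
    # 0.6*80 + 0.15*100 + 0.25*90 = 85.5
    print(round(composite_score(scores), 1))  # 85.5
```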

  14. Photovoltaic-Model-Based Solar Irradiance Estimators: Performance Comparison and Application to Maximum Power Forecasting

    NASA Astrophysics Data System (ADS)

    Scolari, Enrica; Sossan, Fabrizio; Paolone, Mario

    2018-01-01

Due to the increasing proportion of distributed photovoltaic (PV) production in the generation mix, the knowledge of the PV generation capacity has become a key factor. In this work, we propose to compute the PV plant maximum power starting from the indirectly-estimated irradiance. Three estimators are compared in terms of i) ability to compute the PV plant maximum power, ii) bandwidth and iii) robustness against measurement noise. The approaches rely on measurements of the DC voltage, current, and cell temperature and on a model of the PV array. We show that the considered methods can accurately reconstruct the PV maximum generation even during curtailment periods, i.e. when the measured PV power is not representative of the maximum potential of the PV array. Performance evaluation is carried out by using a dedicated experimental setup on a 14.3 kWp rooftop PV installation. Results also showed that the analyzed methods can outperform pyranometer-based estimations, with a less complex sensing system. We show how the obtained PV maximum power values can be applied to train time series-based solar maximum power forecasting techniques. This is beneficial when the measured power values, commonly used as training, are not representative of the maximum PV potential.
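
As a much-simplified illustration of the model-inversion idea (not the paper's estimators, which use a full PV array model), the common single-diode approximation that photocurrent scales linearly with irradiance and drifts with cell temperature can be inverted to estimate irradiance from electrical measurements. All parameter values below are invented for illustration:

```python
# Minimal sketch (not the paper's method): invert the common approximation
#   I_ph ~= I_ph_stc * (G / G_stc) * (1 + alpha_i * (T_cell - T_stc))
# to estimate irradiance G from a measured photocurrent and cell temperature.
# All parameter values are illustrative assumptions, not module data.

G_STC = 1000.0      # W/m^2, irradiance at standard test conditions
T_STC = 25.0        # deg C, cell temperature at STC
I_PH_STC = 8.5      # A, photocurrent at STC (assumed datasheet value)
ALPHA_I = 0.0005    # 1/K, current temperature coefficient (assumed)

def estimate_irradiance(i_ph, t_cell):
    """Estimate plane-of-array irradiance (W/m^2) from photocurrent (A)."""
    return G_STC * i_ph / (I_PH_STC * (1.0 + ALPHA_I * (t_cell - T_STC)))

if __name__ == "__main__":
    # A module delivering 4.25 A of photocurrent at 25 deg C is at half STC irradiance.
    print(round(estimate_irradiance(4.25, 25.0), 1))  # 500.0
```

Using the measured DC current directly as a proxy for photocurrent is itself a simplification; the compared estimators account for the diode and resistive terms of the array model.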

  15. HbA1c values calculated from blood glucose levels using truncated Fourier series and implementation in standard SQL database language.

    PubMed

    Temsch, W; Luger, A; Riedl, M

    2008-01-01

This article presents a mathematical model to calculate HbA1c values based on self-measured blood glucose and past HbA1c levels, thereby enabling patients to monitor diabetes therapy between scheduled checkups. This method could help physicians to make treatment decisions if implemented in a system where glucose data are transferred to a remote server. The method, however, cannot replace HbA1c measurements; past HbA1c values are needed to gauge the method. The mathematical model of HbA1c formation was developed based on biochemical principles. Unlike an existing HbA1c formula, the new model respects the decreasing contribution of older glucose levels to current HbA1c values. About 12 standard SQL statements embedded in a PHP program were used to perform the Fourier transform. Regression analysis was used to gauge results with previous HbA1c values. The method can be readily implemented in any SQL database. The predicted HbA1c values thus obtained were in accordance with measured values. They also matched the results of the HbA1c formula in the elevated range. By contrast, the formula was too "optimistic" in the range of better glycemic control. Individual analysis of two subjects improved the accuracy of values and reflected the bias introduced by different glucometers and individual measurement habits.
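
The key modelling idea — older glucose readings contribute less to the current HbA1c — can be sketched without the paper's Fourier-series machinery. In this hedged illustration, the decay is modelled as a simple exponential (the 35-day half-life is an assumption loosely reflecting erythrocyte turnover), and the weighted mean glucose is converted to HbA1c with the widely used eAG regression HbA1c = (eAG + 46.7) / 28.7 in mg/dL, which is not the formula of this article:

```python
# Sketch of the decreasing-contribution idea, not the paper's method.
# Assumptions: exponential decay with a 35-day half-life, and conversion of
# weighted mean glucose (mg/dL) to HbA1c (%) via the common eAG regression.

import math

HALF_LIFE_DAYS = 35.0                      # assumed decay half-life
DECAY = math.log(2) / HALF_LIFE_DAYS

def estimate_hba1c(readings):
    """readings: list of (age_in_days, glucose_mg_dl); returns HbA1c in %."""
    weights = [math.exp(-DECAY * age) for age, _ in readings]
    weighted_mean = sum(w * g for w, (_, g) in zip(weights, readings)) / sum(weights)
    return (weighted_mean + 46.7) / 28.7   # eAG regression (assumption)

if __name__ == "__main__":
    # A constant average glucose of 154 mg/dL corresponds to roughly 7.0% HbA1c.
    readings = [(d, 154.0) for d in range(0, 90, 3)]
    print(round(estimate_hba1c(readings), 1))  # 7.0
```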

  16. Advanced Satellite-Based Frequency Transfer at the 10-16 Level.

    PubMed

    Fujieda, Miho; Yang, Sung-Hoon; Gotoh, Tadahiro; Hwang, Sang-Wook; Hachisu, Hidekazu; Kim, Huidong; Lee, Young Kyu; Tabuchi, Ryo; Ido, Tetsuya; Lee, Won-Kyu; Heo, Myoung-Sun; Park, Chang Yong; Yu, Dai-Hyuk; Petit, Gerard

    2018-06-01

Advanced satellite-based frequency transfers by two-way carrier-phase (TWCP) and integer precise point positioning have been performed between the National Institute of Information and Communications Technology and Korea Research Institute of Standards and Science. We confirm that the disagreement between them is less than at an averaging time of several days. In addition, an overseas frequency ratio measurement of Sr and Yb optical lattice clocks was directly performed by TWCP. We achieved an uncertainty at the mid-10-16 level after a total measurement time of 12 h. The frequency ratio was consistent with the recently reported values within the uncertainty.

  17. Measurement of vascular wall attenuation: comparison of CT angiography using model-based iterative reconstruction with standard filtered back-projection algorithm CT in vitro.

    PubMed

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2012-11-01

To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. After subjecting 9 vascular models (actual attenuation value of wall, 89 HU) with wall thickness of 0.5, 1.0, or 1.5 mm that we filled with contrast material of 275, 396, or 542 HU to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone, Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements by a differentiation curve. Statistical analysis used analyses of variance with repeated measures. Using the Bone kernel, standard deviation of the measurement exceeded 30 HU in most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using the Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using the Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P=0.1606) or among the 3 densities of intravascular contrast material (MBIR, P=0.8185; Detail kernel, P=0.0802). Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
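
The differentiation-curve idea can be illustrated on a one-dimensional attenuation profile: the outer and inner wall boundaries appear as peaks of the first derivative, and the wall value is read midway between them. The synthetic profile below is an invented illustration, not data from this study:

```python
# Illustrative sketch of boundary detection on a CT attenuation profile
# via a differentiation curve. The synthetic profile (background ~0 HU,
# wall 89 HU, contrast-filled lumen 400 HU) is an assumption for illustration.

def derivative(profile):
    """Central finite differences of a 1-D attenuation profile (HU)."""
    return [(profile[i + 1] - profile[i - 1]) / 2.0
            for i in range(1, len(profile) - 1)]

def wall_center_value(profile):
    d = derivative(profile)
    # Local maxima of the derivative mark the outer and inner wall boundaries
    # (derivative index i corresponds to profile index i + 1).
    peaks = [i + 1 for i in range(1, len(d) - 1)
             if d[i] > d[i - 1] and d[i] > d[i + 1]]
    outer, inner = sorted(sorted(peaks, key=lambda p: d[p - 1], reverse=True)[:2])
    return profile[(outer + inner) // 2]

if __name__ == "__main__":
    profile = [0, 0, 0, 45, 89, 89, 89, 240, 400, 400, 400]
    print(wall_center_value(profile))  # 89
```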

  18. Commentary: A call to leadership: the role of the academic medical center in driving sustainable health system improvement through performance measurement.

    PubMed

    Nedza, Susan M

    2009-12-01

As the government attempts to address the high cost of health care in the United States, the issues being confronted include variations in the quality of care administered and the inconsistent application of scientifically proven treatments. To improve quality, methods of measurement and reporting with rewards or, eventually, penalties based on performance, must be developed. To date, well-intentioned national policy initiatives, such as value-based purchasing, have focused primarily on the measurement of discrete events and on attempts to construct incentives. While important, the current approach alone cannot improve quality, ensure equitability, decrease variability, and optimize value. Additional thought leadership is required, both theoretical and applied. Academic medical centers' (AMCs') scholarly and practical participation is needed. Although quality cannot be sustainably improved without measurement, the existing measures alone do not ensure quality. There is not enough evidence to support strong measure development and, further, not enough insight regarding whether the existing measures have their intended effect of enhancing health care delivery that results in quality outcomes for patients. Perhaps the only way that the United States health care system will achieve a standard of quality care is through the strong embrace, effective engagement, intellectual insights, educational contributions, and practical applications of AMCs. Quality will never be achieved through public policies or national initiatives alone but instead through the commitment of the academic community to forward the science of performance measurement and to ensure that measurement leads to better health outcomes for our nation.

  19. Comparison of cerebral tissue oxygenation values in full term and preterm newborns by the simultaneous use of two near-infrared spectroscopy devices: an absolute and a relative trending oximeter

    NASA Astrophysics Data System (ADS)

    Szczapa, Tomasz; Karpiński, Łukasz; Moczko, Jerzy; Weindling, Michael; Kornacka, Alicja; Wróblewska, Katarzyna; Adamczak, Aleksandra; Jopek, Aleksandra; Chojnacka, Karolina; Gadzinowski, Janusz

    2013-08-01

    The aim of this study is to compare a two-wavelength light emitting diode-based tissue oximeter (INVOS), which is designed to show trends in tissue oxygenation, with a four-wavelength laser-based oximeter (FORE-SIGHT), designed to deliver absolute values of tissue oxygenation. Simultaneous values of cerebral tissue oxygenation (StO2) are measured using both devices in 15 term and 15 preterm clinically stable newborns on the first and third day of life. Values are recorded simultaneously in two periods between which oximeter sensor positions are switched to the contralateral side. Agreement between StO2 values before and after the change of sensor position is analyzed. We find that mean cerebral StO2 values are similar between devices for term and preterm babies, but INVOS shows StO2 values spread over a wider range, with wider standard deviations than shown by the FORE-SIGHT. There is relatively good agreement with a bias up to 3.5% and limits of agreement up to 11.8%. Measurements from each side of the forehead show better repeatability for the FORE-SIGHT monitor. We conclude that performance of the two devices is probably acceptable for clinical purposes. Both performed sufficiently well, but the use of FORE-SIGHT may be associated with tighter range and better repeatability of data.
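
The agreement statistics reported here (bias and limits of agreement between paired StO2 readings) follow the standard Bland-Altman construction, which can be sketched briefly. The paired readings below are invented for illustration, not study data:

```python
# Sketch of a Bland-Altman agreement analysis: bias (mean paired difference)
# and 95% limits of agreement (bias +/- 1.96 * SD of the differences).
# The example StO2 pairs are invented, not data from this study.

import statistics

def bland_altman(a, b):
    """Return (bias, lower LoA, upper LoA) for paired readings a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

if __name__ == "__main__":
    invos = [68.0, 72.0, 75.0, 61.0, 80.0]       # hypothetical StO2 (%)
    foresight = [66.0, 70.0, 74.0, 63.0, 76.0]   # hypothetical StO2 (%)
    bias, lo, hi = bland_altman(invos, foresight)
    print(round(bias, 2))  # 1.4
```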

  20. Proposal for a Conceptual Model for Evaluating Lean Product Development Performance: A Study of LPD Enablers in Manufacturing Companies

    NASA Astrophysics Data System (ADS)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    The instability in today's market and the emerging demands for mass customized products by customers, are driving companies to seek for cost effective and time efficient improvements in their production system and this have led to real pressure for the adaptation of new developmental architecture and operational parameters to remain competitive in the market. Among such developmental architecture adopted, is the integration of lean thinking in the product development process. However, due to lack of clear understanding of the lean performance and its measurements, many companies are unable to implement and fully integrate the lean principle into their product development process and without a proper performance measurement, the performance level of the organizational value stream will be unknown and the specific area of improvement as it relates to the LPD program cannot be tracked. Hence, it will result in poor decision making in the LPD implementation. This paper therefore seeks to present a conceptual model for evaluation of LPD performances by identifying and analysing the core existing LPD enabler (Chief Engineer, Cross-functional teams, Set-based engineering, Poka-yoke (mistakeproofing), Knowledge-based environment, Value-focused planning and development, Top management support, Technology, Supplier integration, Workforce commitment and Continuous improvement culture) for assessing the LPD performance.

  1. Combined Study of Snow Depth Determination and Winter Leaf Area Index Retrieval by Unmanned Aerial Vehicle Photogrammetry

    NASA Astrophysics Data System (ADS)

    Lendzioch, Theodora; Langhammer, Jakub; Jenicek, Michal

    2017-04-01

A rapid and robust approach using Unmanned Aerial Vehicle (UAV) digital photogrammetry was performed for evaluating snow accumulation over different small localities (e.g. disturbed forest and open area) and for indirect field measurements of Leaf Area Index (LAI) of coniferous forest within the Šumava National Park, Czech Republic. The approach was used to reveal impacts related to changes in forest and snowpack and to determine winter effective LAI for monitoring the impact of forest canopy metrics on snow accumulation. Due to the advancement of the technique, snow depth and volumetric changes of snow depth over these selected study areas were estimated at high spatial resolution (1 cm) by subtracting a snow-free digital elevation model (DEM) from a snow-covered DEM. Both downward-looking UAV images and upward-looking digital hemispherical photography (DHP), together with the widely used LAI-2200 plant canopy analyser, were applied to determine the winter LAI, controlling interception and transmitting radiation. For the downward-looking UAV images, the snow background was used instead of the sky fraction. The reliability of UAV-based LAI retrieval was tested by taking an independent data set during the snow cover mapping campaigns. The results showed the potential of digital photogrammetry for snow depth mapping and LAI determination by UAV techniques. The average difference obtained between ground-based and UAV-based measurements of snow depth was 7.1 cm with higher values obtained by UAV. The standard deviation (SD) of 22 cm for the open area seemed competitive with the typical precision of point measurements. In contrast, the average difference in disturbed forest area was 25 cm with lower values obtained by UAV and a SD of 36 cm, which is in agreement with other studies. The UAV-based LAI measurements revealed the lowest effective LAI values and the plant canopy analyser LAI-2200 the highest effective LAI values.
The largest bias of effective LAI was observed between LAI-2200 and UAV-based analyses. Since the LAI parameter is important for snowpack modelling, this method presents the potential of simplifying LAI retrieval and mapping of snow dynamics while reducing running costs and time.
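
The DEM-differencing step described above reduces to a per-cell subtraction followed by simple summaries. A minimal sketch (the 1 cm grid spacing matches the study; the tiny toy DEMs are invented for illustration):

```python
# Minimal sketch of snow-depth mapping by DEM differencing: subtract the
# snow-free DEM from the snow-covered DEM cell by cell, then summarize.
# The toy 2x2 elevation grids (in metres) are invented for illustration.

import numpy as np

CELL_SIZE_M = 0.01  # 1 cm grid resolution, as in the study

def snow_depth(dem_snow_on, dem_snow_off):
    """Per-cell snow depth (m); negative differences are clipped to zero."""
    return np.clip(dem_snow_on - dem_snow_off, 0.0, None)

def snow_volume(depth):
    """Total snow volume (m^3) over the grid."""
    return float(depth.sum() * CELL_SIZE_M ** 2)

if __name__ == "__main__":
    snow_off = np.array([[100.00, 100.02], [100.01, 100.03]])
    snow_on = np.array([[100.50, 100.55], [100.45, 100.60]])
    depth = snow_depth(snow_on, snow_off)
    print(round(float(depth.mean()), 3))  # 0.51 (mean depth in metres)
```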

  2. Value-based care in hepatology.

    PubMed

    Strazzabosco, Mario; Allen, John I; Teisberg, Elizabeth O

    2017-05-01

    The migration from legacy fee-for-service reimbursement to payments linked to high-value health care is accelerating in the United States because of new legislation and redesign of payments from the Centers for Medicare and Medicaid Services. Because patients with chronic diseases account for substantial use of health care resources, payers and health systems are focusing on maximizing the value of care for these patients. Because chronic liver diseases impose a major health burden worldwide affecting the health and lives of many individuals and families as well as substantial costs for individuals and payers, hepatologists must understand how they can improve their practices. Hepatologists practice a high-intensity cognitive subspecialty, using complex and costly procedures and medications. High-value patient care requires multidisciplinary coordination, labor-intensive support for critically ill patients, and effective chronic disease management. Under current fee-for-service reimbursement, patient values, medical success, and financial success can all be misaligned. Many current attempts to link health outcomes to reimbursement are based on compliance with process measures, with less emphasis on outcomes that matter most to patients, thus slowing transformation to higher-value team-based care. Outcome measures that reflect the entire cycle of care are needed to assist both clinicians and administrators in improving the quality and value of care. A comprehensive set of outcome measures for liver diseases is not currently available. Numerous researchers now are attempting to fill this gap by devising and testing outcome indicators and patient-reported outcomes for the major liver conditions. 
These indicators will provide tools to implement a value-based approach for patients with chronic liver diseases to compare results and value of care between referral centers, to perform health technology assessment, and to guide decision-making processes for health authorities. This review sets the groundwork for implementing a value-based, patient-centered approach to chronic liver diseases within a health system. (Hepatology 2017;65:1749-1755). © 2017 by the American Association for the Study of Liver Diseases.

  3. Promoting Health Equity And Eliminating Disparities Through Performance Measurement And Payment.

    PubMed

    Anderson, Andrew C; O'Rourke, Erin; Chin, Marshall H; Ponce, Ninez A; Bernheim, Susannah M; Burstin, Helen

    2018-03-01

    Current approaches to health care quality have failed to reduce health care disparities. Despite dramatic increases in the use of quality measurement and associated payment policies, there has been no notable implementation of measurement strategies to reduce health disparities. The National Quality Forum developed a road map to demonstrate how measurement and associated policies can contribute to eliminating disparities and promote health equity. Specifically, the road map presents a four-part strategy whose components are identifying and prioritizing areas to reduce health disparities, implementing evidence-based interventions to reduce disparities, investing in the development and use of health equity performance measures, and incentivizing the reduction of health disparities and achievement of health equity. To demonstrate how the road map can be applied, we present an example of how measurement and value-based payment can be used to reduce racial disparities in hypertension among African Americans.

  4. 10 CFR 1.35 - Office of Information Services.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... are managed in a manner consistent with Federal information resources management (IRM) laws and regulations; (c) Assists senior management in recognizing where information technology can add value while... information technology and information management programs based on applicable performance measures and...

  5. 10 CFR 1.35 - Office of Information Services.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... are managed in a manner consistent with Federal information resources management (IRM) laws and regulations; (c) Assists senior management in recognizing where information technology can add value while... information technology and information management programs based on applicable performance measures and...

  6. 10 CFR 1.35 - Office of Information Services.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... are managed in a manner consistent with Federal information resources management (IRM) laws and regulations; (c) Assists senior management in recognizing where information technology can add value while... information technology and information management programs based on applicable performance measures and...

  7. 10 CFR 1.35 - Office of Information Services.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... are managed in a manner consistent with Federal information resources management (IRM) laws and regulations; (c) Assists senior management in recognizing where information technology can add value while... information technology and information management programs based on applicable performance measures and...

  8. 10 CFR 1.35 - Office of Information Services.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... are managed in a manner consistent with Federal information resources management (IRM) laws and regulations; (c) Assists senior management in recognizing where information technology can add value while... information technology and information management programs based on applicable performance measures and...

  9. Ultrasound-enhanced bioscouring of greige cotton: regression analysis of process factors

    USDA-ARS?s Scientific Manuscript database

    Ultrasound-enhanced bioscouring process factors for greige cotton fabric are examined using custom experimental design utilizing statistical principles. An equation is presented which predicts bioscouring performance based upon percent reflectance values obtained from UV-Vis measurements of rutheniu...

  10. Harmonisation of serum dihydrotestosterone analysis: establishment of an external quality assurance program.

    PubMed

    Greaves, Ronda F; Jolly, Lisa; Hartmann, Michaela F; Ho, Chung Shun; Kam, Richard K T; Joseph, John; Boyder, Conchita; Wudy, Stefan A

    2017-03-01

Serum dihydrotestosterone (DHT) is an important analyte for the clinical assessment of disorders of sex development. It is also reportedly a difficult analyte to measure. Currently, there are significant gaps in the standardisation of this analyte, including no external quality assurance (EQA) program available worldwide to allow for peer review performance of DHT. We therefore proposed to establish a pilot EQA program for serum DHT. DHT was assessed in the 2015 Royal College of Pathologists of Australasia Quality Assurance Programs' Endocrine program material. The material's target (i.e. "true") values were established using a measurement procedure based on isotope dilution gas chromatography (GC) tandem mass spectrometry (MS/MS). DHT calibrator values were based on weighed values of pure DHT material (>97.5% purity) from Sigma. The allowable limits of performance (ALP) were established as ±0.1 nmol/L for target values up to 0.5 nmol/L and ±15% for targets >0.5 nmol/L. Target values for the six levels of RCPAQAP material for DHT ranged from 0.02 to 0.43 nmol/L (0.01-0.12 ng/mL). The material demonstrated linearity across the six levels. There were seven participating laboratories for this pilot study. Results of the liquid chromatography (LC) MS/MS methods were within the ALP; whereas the results from the immunoassay methods were consistently higher than the target values and outside the ALP. This report provides the first peer comparison of serum DHT measured by mass spectrometry (MS) and immunoassay laboratories. Establishment of this program provides one of the pillars to achieve method harmonisation. This supports accurate clinical decisions where DHT measurement is required.
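
The ALP rule above is a simple piecewise tolerance band: an absolute band of ±0.1 nmol/L for targets up to 0.5 nmol/L, switching to a relative ±15% band above that. A brief sketch:

```python
# Sketch of the allowable limits of performance (ALP) rule stated above:
# +/-0.1 nmol/L for targets up to 0.5 nmol/L, +/-15% for targets above.

def within_alp(reported, target):
    """True if a reported DHT result (nmol/L) falls inside the ALP band."""
    tolerance = 0.1 if target <= 0.5 else 0.15 * target
    return abs(reported - target) <= tolerance

if __name__ == "__main__":
    print(within_alp(0.35, 0.43))  # True: |0.35 - 0.43| = 0.08 <= 0.1
    print(within_alp(0.60, 0.43))  # False: off by 0.17
```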

  11. Experimental characterization of Fresnel-Köhler concentrators

    NASA Astrophysics Data System (ADS)

    Zamora, Pablo; Benítez, Pablo; Mohedano, Rubén; Cvetković, Aleksandra; Vilaplana, Juan; Li, Yang; Hernández, Maikel; Chaves, Julio; Miñano, Juan C.

    2012-01-01

    Most cost-effective concentrated photovoltaics (CPV) systems are based on an optical train comprising two stages, the first being a Fresnel lens. Among them, the Fresnel-Köhler (FK) concentrator stands out owing to both performance and practical reasons. We describe the experimental measurements procedure for FK concentrator modules. This procedure includes three main types of measurements: electrical efficiency, acceptance angle, and irradiance uniformity at the solar cell plane. We have collected here the performance features of two different FK prototypes (ranging different f-numbers, concentration ratios, and cell sizes). The electrical efficiencies measured in both prototypes are high and fit well with the models, achieving values up to 32.7% (temperature corrected, and with no antireflective coating on SOE or POE surfaces) in the best case. The measured angular transmission curves show large acceptance angles, again perfectly matching the expected values [measured concentration acceptance product (CAP) values over 0.56]. The irradiance pattern on the cell (obtained with a digital camera) shows an almost perfectly uniform distribution, as predicted by raytrace simulations. All these excellent on-sun results confirm the FK concentrator as a potentially cost-effective solution for the CPV market.

  12. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR can provide more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.
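
For readers unfamiliar with the baseline the paper improves upon, a generic historical-simulation VaR estimator (the alpha-quantile of past losses) can be sketched as follows. This is textbook material included only to fix notation; it is not the multifractal model of the paper:

```python
# Baseline historical-simulation Value at Risk: the alpha-quantile of past
# losses. Included only to illustrate what a VaR estimator computes; this
# is NOT the multifractal MFVaR model proposed in the paper.

def historical_var(returns, alpha=0.99):
    """VaR at confidence alpha: the loss exceeded in ~(1-alpha) of cases."""
    losses = sorted(-r for r in returns)  # losses, ascending
    k = min(int(alpha * len(losses)), len(losses) - 1)
    return losses[k]

if __name__ == "__main__":
    # Invented daily returns for illustration.
    returns = [0.01, -0.02, 0.005, -0.035, 0.012,
               -0.01, 0.002, -0.05, 0.02, -0.015]
    print(historical_var(returns, alpha=0.9))  # 0.05: the worst 10% tail loss
```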

  13. Measurement-based model of a wide-bore CT scanner for Monte Carlo dosimetric calculations with GMCTdospp software.

    PubMed

    Skrzyński, Witold

    2014-11-01

The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, which is an EGSnrc-based software tool dedicated for Monte Carlo calculations of dose in CT examinations. The method was based on matching spectrum and filtration to the half-value layer (HVL) and dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. Dose profile along the fan-beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from the doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root-mean-square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM78 spectra performed equally well. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
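
The HVL-matching idea can be illustrated numerically: for a polyenergetic beam, the HVL is the absorber thickness that halves the transmitted intensity, found here by bisection on the summed Beer-Lambert transmission of each spectral component. The two-bin spectrum and attenuation coefficients below are invented round numbers, not measured scanner data:

```python
# Illustrative sketch of computing the half-value layer (HVL) of a
# polyenergetic beam by bisection on Beer-Lambert transmission.
# The two-bin spectrum and Al attenuation coefficients are invented
# round numbers for illustration, not CT scanner data.

import math

# (relative fluence, linear attenuation coefficient of Al in 1/mm) per bin
SPECTRUM = [(0.6, 0.10), (0.4, 0.04)]

def transmission(t_mm):
    """Fraction of intensity transmitted through t_mm of aluminum."""
    return sum(w * math.exp(-mu * t_mm) for w, mu in SPECTRUM)

def half_value_layer(lo=0.0, hi=100.0, tol=1e-9):
    """Bisect for the thickness t with transmission(t) = half the open-beam value."""
    target = 0.5 * transmission(0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmission(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    t = half_value_layer()
    print(round(transmission(t) / transmission(0.0), 6))  # 0.5
```

In the modelling workflow described above, the free parameters (spectrum choice and flat-filter thickness) are adjusted until the computed HVL matches the measured one.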

  14. N2/O2/H2 Dual-Pump Cars: Validation Experiments

    NASA Technical Reports Server (NTRS)

    O'Byrne, S.; Danehy, P. M.; Cutler, A. D.

    2003-01-01

The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.

  15. International comparison of experience-based health state values at the population level.

    PubMed

    Heijink, Richard; Reitmeir, Peter; Leidl, Reiner

    2017-07-07

Decision makers need to know whether health state values, an important component of summary measures of health, are valid for their target population. A key outcome is the individuals' valuation of their current health. This experience-based perspective is increasingly used to derive health state values. This study is the first to compare such experience-based valuations at the population level across countries. We examined the relationship between respondents' self-rated health as measured by the EQ-VAS, and the different dimensions and levels of the EQ-5D-3L. The dataset included almost 32,000 survey respondents from 15 countries. We estimated generalized linear models with logit link function, including country-specific models and pooled-data models with country effects. The results showed significant and meaningful differences in the valuation of health states and individual health dimensions between countries, even though similarities were present too. Between countries, coefficients correlated positively for the values of mobility, self-care and usual activities, but not for the values of pain and anxiety, thus underlining structural differences. The findings indicate that, ideally, population-specific experience-based value sets are developed and used for the calculation of health outcomes. Otherwise, sensitivity analyses are needed. Furthermore, transferring the results of foreign studies into the national context should be performed with caution. We recommend that future studies investigate the causes of differences in experience-based health state values through a single international study, possibly complemented with qualitative research on the determinants of valuation.

  16. AUI&GIV: Recommendation with Asymmetric User Influence and Global Importance Value.

    PubMed

    Zhao, Zhi-Lin; Wang, Chang-Dong; Lai, Jian-Huang

    2016-01-01

The user-based collaborative filtering (CF) algorithm is one of the most popular approaches to making recommendations. Despite its success, the traditional user-based CF algorithm suffers from a serious limitation: it measures the influence between two users only through their symmetric similarities, calculated from their consumption histories. This means that, for any pair of users, the influences on each other are assumed to be the same, which may not be true. Intuitively, an expert may have an impact on a novice user, but a novice user may not affect an expert at all. Moreover, each user may possess a global importance factor that affects his/her influence on the remaining users. To this end, in this paper we propose an asymmetric user influence model to measure the directed influence between two users and adopt the PageRank algorithm to calculate the global importance value of each user. The directed influence values and the global importance values are then integrated to deduce the final influence values between two users. Finally, we use the final influence values to improve the performance of the traditional user-based CF algorithm. Extensive experiments have been conducted, and the results confirm that both the asymmetric user influence model and the global importance value play key roles in improving recommendation accuracy; hence, the proposed method significantly outperforms existing recommendation algorithms, in particular the user-based CF algorithm, on datasets of high rating density.
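
The two ingredients described above can be sketched under simplifying assumptions: (i) a directed influence of u on v defined here as the fraction of v's consumed items that u also consumed (so an expert covering a novice's history influences the novice, but not vice versa), and (ii) a per-user global importance value from PageRank by power iteration on the influence graph. This illustrates the idea, not the paper's exact formulation:

```python
# Sketch of asymmetric user influence plus PageRank global importance.
# The influence definition (overlap fraction of the target's history) and
# all toy data are illustrative assumptions, not the paper's formulation.

def directed_influence(histories):
    """histories: {user: set(items)}; returns {(u, v): influence of u on v}."""
    return {
        (u, v): len(items_u & items_v) / len(items_v)
        for u, items_u in histories.items()
        for v, items_v in histories.items()
        if u != v and items_v
    }

def pagerank(influence, users, damping=0.85, iters=100):
    """Global importance values via power iteration on the influence graph."""
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iters):
        out = {u: sum(w for (s, _), w in influence.items() if s == u)
               for u in users}
        new = {}
        for v in users:
            inflow = sum(rank[s] * w / out[s]
                         for (s, t), w in influence.items()
                         if t == v and out[s] > 0)
            new[v] = (1 - damping) / n + damping * inflow
        rank = new
    return rank

if __name__ == "__main__":
    histories = {"expert": {1, 2, 3, 4, 5, 6}, "novice": {1, 2}, "casual": {2, 3}}
    infl = directed_influence(histories)
    # The expert fully covers the novice's history; the reverse is weaker.
    print(infl[("expert", "novice")] > infl[("novice", "expert")])  # True
```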

  18. Determination of uncertainties associated to the in vivo measurement of iodine-131 in the thyroid.

    PubMed

    Dantas, B M; Lima, F F; Dantas, A L; Lucena, E A; Gontijo, R M G; Carvalho, C B; Hazin, C

    2016-07-01

    Intakes of radionuclides can be estimated through in vivo measurements, and the uncertainties associated with the measured activities should be clearly stated in monitoring program reports. This study aims to evaluate the uncertainties of in vivo monitoring of iodine-131 in the thyroid. The reference values for high-energy photons are based on the IDEAS Guide. Measurements were performed at the In Vivo Monitoring Laboratory of the Institute of Radiation Protection and Dosimetry (IRD) and at the Internal Dosimetry Laboratory of the Regional Center of Nuclear Sciences (CRCN-NE). In both institutions, the experiment was performed using a 3″ × 3″ NaI(Tl) scintillation detector and a neck-thyroid phantom. Scattering factors were calculated and compared in different counting geometries. The results show that the technique yields reproducibility equivalent to the values suggested in the IDEAS Guide and that the measurement uncertainties are comparable to international quality standards for this type of in vivo monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Relative value unit-based compensation incentivization in an academic vascular practice improves productivity with no early adverse impact on quality.

    PubMed

    Awad, Nadia; Caputo, Francis J; Carpenter, Jeffrey P; Alexander, James B; Trani, José L; Lombardi, Joseph V

    2017-02-01

    Given the increased pressure from governmental programs to restructure reimbursements to reflect quality metrics achieved by physicians, review of current reimbursement schemes is necessary to ensure sustainability of the physician's performance while maintaining and ultimately improving patient outcomes. This study reviewed the impact of reimbursement incentives on evidence-based care outcomes within a vascular surgical program at an academic tertiary care center. Data for patients with a confirmed 30-day follow-up for the vascular surgery subset of our institution's National Surgical Quality Improvement Program submission for the years 2013 and 2014 were reviewed. The outcomes reviewed included 30-day mortality, readmission, unplanned returns to the operating room, and all major morbidities. A comparison of both total charges and work relative value units (RVUs) generated was performed before and after changes were made from a salary-based to a productivity-based compensation model. P value analysis was used to determine if there were any statistically significant differences in patient outcomes between the two study years. No statistically significant difference in outcomes of the core measures studied was identified between the two periods. There was a trend toward a lower incidence of respiratory complications, largely driven by a lower incidence in pneumonia between 2013 and 2014. The vascular division had a net increase of 8.2% in total charges and 5.7% in work RVUs after the RVU-based incentivization program was instituted. Revenue-improving measures can improve sustainability of a vascular program without negatively affecting patient care as evidenced by the lack of difference in evidence-based core outcome measures in our study period. Further studies are needed to elucidate the long-term effects of incentivization programs on both patient care and program viability. Copyright © 2016. Published by Elsevier Inc.

  20. 76 FR 39457 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ...% of the investable U.S. equity market. \\10\\ The Russell 3000 Value Index measures the performance of... largest securities based on a combination of their market cap and current index membership. The Russell... approximately 200 of the largest securities based on a combination of their market cap and current index...

  1. The Future Value of Serious Games for Assessment: Where Do We Go Now?

    ERIC Educational Resources Information Center

    de Klerk, Sebastiaan; Kato, Pamela M.

    2017-01-01

    Game-based assessments will most likely be an increasing part of testing programs in future generations because they provide promising possibilities for more valid and reliable measurement of students' skills as compared to the traditional methods of assessment like paper-and-pencil tests or performance-based assessments. The current status of…

  2. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
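
The optimized-against-a-sample idea above can be illustrated with a toy deterministic threshold-spread model: seeds are chosen on a noisy observation of the graph and then evaluated on the true graph. The spread rule, the tiny graphs, and the single-seed "optimization" below are all simplified stand-ins for the paper's setup, not its actual models.

```python
def threshold_cascade(adj, seeds, theta=0.3):
    """Deterministic fractional-threshold spread: a node activates once
    at least a theta fraction of its neighbors are active."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v not in active and nbrs:
                if sum(n in active for n in nbrs) / len(nbrs) >= theta:
                    active.add(v)
                    changed = True
    return active

# toy "true" network and a noisy observation with mis-reported edges
true_adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3, 5], 5: [4]}
noisy_adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

def best_single_seed(adj):
    # stand-in for the optimization step: pick the seed with the largest cascade
    return max(adj, key=lambda s: len(threshold_cascade(adj, {s})))

# optimized-against-a-sample: seed chosen on the noisy graph, evaluated on the true one
seed_noisy = best_single_seed(noisy_adj)
oas_size = len(threshold_cascade(true_adj, {seed_noisy}))
opt_size = len(threshold_cascade(true_adj, {best_single_seed(true_adj)}))
print(seed_noisy, oas_size, opt_size)
```

Comparing oas_size against opt_size over many noisy draws is the essence of the paper's measure: when the two stay close, investing in better link prediction buys little; when they diverge, it pays.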

  3. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    NASA Astrophysics Data System (ADS)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, few studies have correlated Shore hardness with rock cutting performance, and relatively little research has addressed predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and the deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, with cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and with the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for predicting the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for cutting the stone alone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, normal force shows only a weak to moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted from Shore hardness values.

  4. Do body mass index and fat volume influence vocal quality, phonatory range, and aerodynamics in females?

    PubMed

    Barsties, Ben; Verfaillie, Rudi; Roy, Nelson; Maryn, Youri

    2013-01-01

    To analyze the impact of body weight and body fat volume on selected parameters of vocal quality, phonatory range, and aerodynamics in females. Based on measurements of body mass index in combination with body fat volume, 29 normophonic female subjects were classified as normal weight, underweight, or obese. Voice quality was investigated via auditory-perceptual ratings of breathiness, roughness, and overall dysphonia severity, via various acoustic measures, and via a multiparametric index. Phonatory range performance was examined using selected measures of the voice range profile and speech range profile. Measures of vocally relevant aerodynamics included vital capacity (VC), expected VC, phonation quotient, and maximum phonation time (MPT). Significant differences between the three weight groups were found across several measures of intensity, VC, MPT, and shimmer. Compared with the other groups, significantly higher values of maximum and minimum intensity levels, as well as sound pressure level during habitual running speech, were observed for the obese group (all p-values < 0.05), whereas the underweight group had significantly lower values for VC and for the ratio of expected to measured VC (p-values < 0.01). Furthermore, underweight subjects differed significantly from normal weight subjects, with lower MPT (p = 0.025) and higher lowest-F0 (p = 0.035). Finally, the obese group showed significantly lower shimmer values than the normal weight subjects (p < 0.05). Body weight and body fat volume appear to influence select objective measures of voice quality, vocal aerodynamics, and phonatory range performance.

  5. Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA. CEDR Working Paper. WP #2016-2.2

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Cowan, James; Theobald, Roddy

    2016-01-01

    We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…

  6. Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA. CEDR Working Paper. WP #2016-7

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Cowan, James; Theobald, Roddy

    2016-01-01

    We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…

  7. Application of Athletic Movement Tests that Predict Injury Risk in a Military Population: Development of Normative Data.

    PubMed

    Teyhen, Deydre S; Shaffer, Scott W; Butler, Robert J; Goffar, Stephen L; Kiesel, Kyle B; Rhon, Daniel I; Boyles, Robert E; McMillian, Daniel J; Williamson, Jared N; Plisky, Phillip J

    2016-10-01

    Performance on movement tests helps to predict injury risk in a variety of physically active populations, and understanding baseline measures for normal performance is an important first step. The aims were to determine differences in physical performance assessments and to describe normative values for these tests based on military unit type. Power, balance, mobility, motor control, and performance on the Army Physical Fitness Test were assessed in a cohort of 1,466 soldiers. Analysis of variance was performed to compare the results based on military unit type (Rangers, Combat, Combat Service, and Combat Service Support), and analysis of covariance was performed to determine the influence of age and gender. Rangers performed the best on all performance and fitness measures (p < 0.05). Combat soldiers performed better than Combat Service and Service Support soldiers on several physical performance tests and the Army Physical Fitness Test (p < 0.05). Performance in Combat Service and Service Support soldiers was equivalent on most measures (p < 0.05). Functional performance and level of fitness varied significantly by military unit type. Understanding these differences will provide a foundation for future injury prediction and prevention strategies. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  8. Noninvasive in vivo glucose sensing using an iris based technique

    NASA Astrophysics Data System (ADS)

    Webb, Anthony J.; Cameron, Brent D.

    2011-03-01

    Physiological glucose monitoring is an important aspect of the treatment of individuals afflicted with diabetes mellitus. Although invasive techniques for glucose monitoring are widely available, it would be very beneficial to make such measurements noninvasively. In this study, a New Zealand White (NZW) rabbit animal model was utilized to evaluate an iris-based imaging technique for the in vivo measurement of physiological glucose concentration. The animals were anesthetized with isoflurane, and an insulin/dextrose protocol was used to control blood glucose concentration. To further restrict eye movement, an ocular fixation device was used. During the experimental time frame, near-infrared illuminated iris images were acquired along with corresponding discrete blood glucose measurements taken with a handheld glucometer. Calibration was performed using an image-based Partial Least Squares (PLS) technique. Independent validation was also performed to assess model performance, along with Clarke Error Grid Analysis (CEGA). Initial validation results were promising and show that a high percentage of the predicted glucose concentrations are within 20% of the reference values.
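
The calibration step described above uses Partial Least Squares. The numpy-only sketch below implements a minimal PLS1 (NIPALS) regression on synthetic data standing in for iris image features and reference glucose values; the feature construction and all numbers are hypothetical, not the authors' pipeline.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) regression sketch; returns a mean-centered model."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                      # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                         # score vector
        tt = t @ t
        p = Xc.T @ t / tt                  # X loading
        qk = yc @ t / tt                   # y loading
        Xc = Xc - np.outer(t, p)           # deflate X
        yc = yc - t * qk                   # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)    # regression coefficients
    return x_mean, y_mean, B

rng = np.random.default_rng(3)
# hypothetical "iris image features" (e.g. averaged pixel intensities per region)
X = rng.normal(size=(40, 5))
b_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y = X @ b_true + 90.0                      # synthetic glucose values, mg/dL scale
x_mean, y_mean, B = pls1_fit(X, y, n_components=5)
pred = (X - x_mean) @ B + y_mean
print(np.round(np.abs(pred - y).max(), 6))
```

With as many components as features and noiseless synthetic data, the PLS fit coincides with ordinary least squares, which makes the sketch easy to check; in practice far fewer components than image features would be retained.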

  9. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

    One of the key goals in the SEMI industry is to improve equipment throughput and to maximize equipment production efficiency. Based on SEMI standards for semiconductor equipment control, this paper defines the transition rules between different tool states and presents a TEA system model that automatically analyzes tool performance based on a finite state machine. The system was applied to fab tools and its effectiveness was verified; it yields the parameter values used to measure equipment performance, along with recommendations for improvement.

  10. Effectiveness of Spectral Similarity Measures to Develop Precise Crop Spectra for Hyperspectral Data Analysis

    NASA Astrophysics Data System (ADS)

    Chauhan, H.; Krishna Mohan, B.

    2014-11-01

    The present study was undertaken to assess the effectiveness of spectral similarity measures in developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, pixels are classified by statistical comparison (by means of spectral similarity) of known field or library spectra to unknown image spectra. Although these algorithms are widely used, little emphasis has been placed on using spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here, precise crop spectra are instead developed based on spectral similarity. Because the use of unevaluated data leads to uncertainty in image classification, evaluating the data is crucial. The effectiveness of the precise field spectra was evaluated with spectral discrimination measures, which yielded higher discrimination values than spectra developed conventionally. Overall classification accuracy is 51.89% for the image classified with conventionally selected field spectra and 75.47% for the image classified with field spectra selected precisely on the basis of spectral similarity. KHAT values are 0.37 and 0.62, and Z values are 2.77 and 9.59, for the conventional and precise field spectra, respectively. The higher classification accuracy, KHAT, and Z values demonstrate the potential of a field spectra selection approach based on spectral similarity measures.
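
As a minimal illustration of selecting precise field spectra by spectral similarity, the sketch below uses the spectral angle between each spectrum and the ensemble mean to reject outliers before averaging. The angle threshold and the toy spectra are assumptions for illustration; the study does not specify this particular similarity measure or cutoff.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two reflectance spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# toy field spectra for one crop; spectrum s2 is an outlier
s0 = np.array([0.10, 0.12, 0.35, 0.40])
s1 = np.array([0.11, 0.13, 0.33, 0.41])
s2 = np.array([0.30, 0.28, 0.12, 0.10])
spectra = [s0, s1, s2]

# keep only spectra within an angle threshold of the ensemble mean,
# then average the survivors into one precise reference spectrum
mean_spec = np.mean(spectra, axis=0)
kept = [s for s in spectra if spectral_angle(s, mean_spec) < 0.4]
precise = np.mean(kept, axis=0)
print(len(kept))
```

Any of the standard similarity measures (spectral angle, spectral correlation, spectral information divergence) could be substituted into the filter in the same way.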

  11. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
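
The core idea above, convolving an ideal nodule object with the system PSF and then measuring density in an ROI, can be sketched in one dimension with a Gaussian PSF standing in for the measured CT system PSF (the actual study uses the measured PSF and full HRCT geometry; all numbers below are illustrative):

```python
import numpy as np

def simulate_profile(diameter_mm, true_hu, psf_sigma_mm, dx=0.05):
    """1-D sketch of PSF-based image simulation: convolve an ideal nodule
    profile with a Gaussian PSF, then measure density in a central ROI."""
    x = np.arange(-20.0, 20.0, dx)
    obj = np.where(np.abs(x) <= diameter_mm / 2, true_hu, 0.0)   # ideal object
    k = np.exp(-0.5 * (np.arange(-5.0, 5.0 + dx, dx) / psf_sigma_mm) ** 2)
    k /= k.sum()                                    # normalized Gaussian PSF
    img = np.convolve(obj, k, mode="same")          # blurred "reconstructed" image
    roi = np.abs(x) <= diameter_mm / 4              # small central ROI
    return img[roi].mean()

small = simulate_profile(diameter_mm=2.0, true_hu=100.0, psf_sigma_mm=1.0)
large = simulate_profile(diameter_mm=10.0, true_hu=100.0, psf_sigma_mm=1.0)
print(round(small, 1), round(large, 1))
```

The sketch reproduces the qualitative finding: the measured ROI density of a small nodule falls well below the true value because of PSF blurring, while a large nodule is barely affected.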

  12. Comparison of Self-Report Versus Sensor-Based Methods for Measuring the Amount of Upper Limb Activity Outside the Clinic.

    PubMed

    Waddell, Kimberly J; Lang, Catherine E

    2018-03-10

    To compare self-reported with sensor-measured upper limb (UL) performance in daily life for individuals with chronic (≥6mo) UL paresis poststroke. Secondary analysis of participants enrolled in a phase II randomized, parallel, dose-response UL movement trial. This analysis compared the accuracy and consistency between self-reported UL performance and sensor-measured UL performance at baseline and immediately post an 8-week intensive UL task-specific intervention. Outpatient rehabilitation. Community-dwelling individuals with chronic (≥6mo) UL paresis poststroke (N=64). Not applicable. Motor Activity Log amount of use scale and the sensor-derived use ratio from wrist-worn accelerometers. There was a high degree of variability between self-reported UL performance and the sensor-derived use ratio. Using sensor-based values as a reference, 3 distinct categories were identified: accurate reporters (reporting difference ±0.1), overreporters (difference >0.1), and underreporters (difference <-0.1). Five of 64 participants accurately self-reported UL performance at baseline and postintervention. Over half of participants (52%) switched categories from pre-to postintervention (eg, moved from underreporting preintervention to overreporting postintervention). For the consistent reporters, no participant characteristics were found to influence whether someone over- or underreported performance compared with sensor-based assessment. Participants did not consistently or accurately self-report UL performance when compared with the sensor-derived use ratio. Although self-report and sensor-based assessments are moderately associated and appear similar conceptually, these results suggest self-reported UL performance is often not consistent with sensor-measured performance and the measures cannot be used interchangeably. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
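
A simplified sketch of the comparison above: compute a sensor-derived use ratio from accelerometer activity counts and classify the reporting difference into the study's three categories. The count-based ratio and the rescaling of the self-report score onto the same scale are assumptions made for illustration, not the study's exact processing.

```python
# hypothetical minute-level activity counts from the wrist-worn accelerometers
paretic = [0, 12, 30, 0, 8, 25]        # affected (paretic) limb
nonparetic = [10, 40, 35, 5, 20, 30]   # unaffected limb

use_ratio = sum(paretic) / sum(nonparetic)   # sensor-derived use ratio

# hypothetical self-report, already rescaled onto the use-ratio scale
self_report = 0.8
diff = self_report - use_ratio

# the study's reporter categories, based on the reporting difference
if -0.1 <= diff <= 0.1:
    category = "accurate"
elif diff > 0.1:
    category = "overreporter"
else:
    category = "underreporter"
print(round(use_ratio, 3), category)
```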

  13. The incremental value of self-reported mental health measures in predicting functional outcomes of veterans.

    PubMed

    Eisen, Susan V; Bottonari, Kathryn A; Glickman, Mark E; Spiro, Avron; Schultz, Mark R; Herz, Lawrence; Rosenheck, Robert; Rofman, Ethan S

    2011-04-01

    Research on patient-centered care supports use of patient/consumer self-report measures in monitoring health outcomes. This study examined the incremental value of self-report mental health measures relative to a clinician-rated measure in predicting functional outcomes among mental health service recipients. Participants (n = 446) completed the Behavior and Symptom Identification Scale, the Brief Symptom Inventory, and the Veterans/Rand Short Form-36 at enrollment in the study (T1) and 3 months later (T2). Global Assessment of Functioning (GAF) ratings, mental health service utilization, and psychiatric diagnoses were obtained from administrative data files. Controlling for demographic and clinical variables, results indicated that improvement based on the self-report measures significantly predicted one or more functional outcomes (i.e., decreased likelihood of post-enrollment psychiatric hospitalization and increased likelihood of paid employment), above and beyond the predictive value of the GAF. Inclusion of self-report measures may be a useful addition to performance measurement efforts.

  14. Photovoltaic system derived data for determining the solar resource and for modeling the performance of other photovoltaic systems

    DOE PAGES

    Marion, Bill; Smith, Benjamin

    2017-03-27

    Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. Here, a method was developed to back-solve for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the measured ac power of south-facing PV module/micro-inverter systems. The method was validated using one year of irradiance and PV performance measurements for five PV systems, each with a different tilt/azimuth orientation, located in Golden, Colorado. Compared to using a measured global horizontal irradiance for PV performance model input, using the back-solved values of DNI and DHI increased the range of mean bias deviations from measured values by only 0.6% for the modeled annual averages of the global tilt irradiance and ac power for the five PV systems. Correcting for angle-of-incidence effects is an important feature of the method, both to prevent underestimating the solar resource and for modeling the performance of PV systems with more dissimilar PV module orientations. The method also produced more favorable results than an existing power projection method for estimating the ac power.
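
The back-solving step above can be sketched as a small least-squares problem: with an isotropic-sky transposition model (an assumption; the paper's transposition and angle-of-incidence handling are more detailed), each oriented system contributes one linear equation in the two unknowns DNI and DHI. Ground-reflected irradiance and the ac-power-to-irradiance conversion are omitted for brevity, and the solar geometry values are invented.

```python
import numpy as np

def poa_coeffs(tilt_deg, aoi_deg):
    """Isotropic-sky coefficients mapping (DNI, DHI) to plane-of-array
    irradiance for one orientation (ground-reflected light omitted)."""
    beam = max(np.cos(np.radians(aoi_deg)), 0.0)        # direct component on the plane
    sky = (1.0 + np.cos(np.radians(tilt_deg))) / 2.0    # isotropic diffuse view factor
    return beam, sky

# five hypothetical systems with different tilts / angles of incidence
orientations = [(10, 35), (20, 25), (30, 15), (40, 30), (25, 45)]
A = np.array([poa_coeffs(t, a) for t, a in orientations])

# synthetic "measured" plane-of-array irradiances (in practice these would
# be inferred from each system's measured ac power)
dni_true, dhi_true = 700.0, 120.0
poa = A @ np.array([dni_true, dhi_true])

# back-solve DNI and DHI by least squares across the systems
(dni, dhi), *_ = np.linalg.lstsq(A, poa, rcond=None)
print(round(dni, 1), round(dhi, 1))
```

Because the orientations differ, the coefficient matrix has full column rank and the two irradiance components are uniquely recoverable; with real, noisy power data the least-squares residual absorbs measurement and model error.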

  16. Seeking excellence: An evaluation of 235 international laboratories conducting water isotope analyses by isotope-ratio and laser-absorption spectrometry.

    PubMed

    Wassenaar, L I; Terzer-Wassmuth, S; Douence, C; Araguas-Araguas, L; Aggarwal, P K; Coplen, T B

    2018-03-15

    Water stable isotope ratios (δ2H and δ18O values) are widely used tracers in environmental studies; hence, accurate and precise assays are required for providing sound scientific information. We tested the analytical performance of 235 international laboratories conducting water isotope analyses using dual-inlet and continuous-flow isotope ratio mass spectrometers and laser spectrometers through a water isotope inter-comparison test. Eight test water samples were distributed by the IAEA to international stable isotope laboratories. These consisted of a core set of five samples spanning the common δ-range of natural waters, and three optional samples (highly depleted, enriched, and saline). The fifth core sample contained unrevealed trace methanol to assess analyst vigilance to the impact of organic contamination on water isotopic measurements made by all instrument technologies. For the core and optional samples ~73% of laboratories gave acceptable results within 0.2 ‰ and 1.5 ‰ of the reference values for δ18O and δ2H, respectively; ~27% produced unacceptable results. Top performance for δ18O values was dominated by dual-inlet IRMS laboratories; top performance for δ2H values was led by laser spectrometer laboratories. Continuous-flow instruments yielded comparatively intermediate results. Trace methanol contamination of water resulted in extreme outlier δ-values for laser instruments, but also affected reactor-based continuous-flow IRMS systems; however, dual-inlet IRMS δ-values were unaffected. Analysis of the laboratory results and their metadata suggested that inaccurate or imprecise performance stemmed mainly from skill- and knowledge-based errors, including calculation mistakes, inappropriate or compromised laboratory calibration standards, poorly performing instrumentation, lack of vigilance to contamination, or inattention to unreasonable isotopic outcomes. To counteract common errors, we recommend that laboratories include 1-2 'known' control standards in all autoruns; laser laboratories should screen each autorun for spectral contamination; and all laboratories should evaluate whether derived d-excess values are realistic when both isotope ratios are measured. Combined, these data evaluation strategies should immediately inform the laboratory about fundamental mistakes or compromised samples. Copyright © 2018 John Wiley & Sons, Ltd.
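
The d-excess sanity check recommended above follows from the standard definition d-excess = δ2H − 8·δ18O (relative to the Global Meteoric Water Line). A minimal sketch, with plausibility bounds that are assumptions for illustration rather than values from the paper:

```python
def d_excess(delta2h, delta18o):
    """Deuterium excess relative to the Global Meteoric Water Line (per mil)."""
    return delta2h - 8.0 * delta18o

def plausible(delta2h, delta18o, lo=-10.0, hi=40.0):
    """Flag samples whose d-excess falls far outside the range typical of
    natural meteoric waters (bounds here are illustrative assumptions)."""
    return lo <= d_excess(delta2h, delta18o) <= hi

assert plausible(-60.0, -8.5)          # d-excess = 8.0, a typical value
assert not plausible(-60.0, -2.0)      # d-excess = -44, likely a bad assay
```

Running such a check on every autorun catches gross calculation mistakes and contaminated samples because errors in either isotope ratio rarely cancel in the d-excess.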

  18. A comparison of ground-based hydroxyl airglow temperatures with SABER/TIMED measurements over 23° N, India

    NASA Astrophysics Data System (ADS)

    Parihar, Navin; Singh, Dupinder; Gurubaran, Subramanian

    2017-03-01

    Ground-based observations of OH (6, 2) Meinel band nightglow were carried out at Ranchi (23.3° N, 85.3° E), India, during January-March 2011, December 2011-May 2012 and December 2012-March 2013 using an all-sky imaging system. OH temperatures near the mesopause were derived from the OH (6, 2) Meinel band intensity information. A limited comparison of these OH temperatures (TOH) with SABER/TIMED measurements was performed for 30 cases, using a near-coincidence criterion of ±1.5° in latitude and longitude and ±3 min relative to the ground-based observations. Using the SABER OH 1.6 and 2.0 µm volume emission rate profiles as the weighting functions, two sets of OH-equivalent temperatures (T1.6 and T2.0, respectively) were estimated from the SABER kinetic temperature profile for comparison with the OH nightglow measurements. Fair agreement existed between ground-based and SABER measurements in the majority of events within the limits of experimental error. The mean values of TOH, T1.6 and T2.0 were 197.3 ± 4.6, 192.0 ± 10.8 and 192.7 ± 10.3 K, respectively; that is, the ground-based temperatures were 4-5 K warmer than the SABER values. A difference of 8 K or more was noted between the two measurements when the peak of the OH emission layer lay in the vicinity of large temperature inversions. A comparison of OH temperatures derived using different sets of Einstein transition probabilities against the SABER measurements was also performed; OH temperatures derived using the Langhoff et al. (1986) transition probabilities were found to compare well.
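The OH-equivalent temperatures described above are weighted averages of the kinetic temperature profile, with the volume emission rate (VER) profile as the weighting function. A minimal sketch with synthetic profile values (the numbers are illustrative, not SABER data):

```python
def oh_equivalent_temperature(T_kinetic, ver):
    """VER-weighted mean of a kinetic temperature profile: collapses a
    SABER-style profile to one OH-equivalent temperature using the volume
    emission rate (VER) profile as the weighting function."""
    weighted = sum(w * t for w, t in zip(ver, T_kinetic))
    return weighted / sum(ver)

# Synthetic profile with the emission layer peaked mid-profile
T_profile = [180.0, 190.0, 200.0, 195.0, 185.0]   # K
ver_profile = [0.1, 0.5, 1.0, 0.5, 0.1]           # arbitrary VER units
print(round(oh_equivalent_temperature(T_profile, ver_profile), 1))  # 195.0
```

The weighting makes the result sensitive to where the emission layer sits relative to temperature structure, which is why large inversions near the layer peak produce the 8 K discrepancies noted above.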

  19. Content dependent selection of image enhancement parameters for mobile displays

    NASA Astrophysics Data System (ADS)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method covering sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers' preferences. Based on the results of these experiments, the relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables. Content-dependent values of the image control parameters are then determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
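The lookup-table step can be sketched as follows; the table entries, the measure scale, and the `select_gain` name are illustrative assumptions, not the paper's actual values:

```python
import bisect

# Hypothetical lookup table: objective sharpness measure (0..1) -> preferred
# sharpening gain, as might be derived from visual-preference experiments
SHARPNESS_LUT = [(0.0, 1.8), (0.25, 1.4), (0.5, 1.0), (0.75, 0.6), (1.0, 0.3)]

def select_gain(measure, lut=SHARPNESS_LUT):
    """Linearly interpolate the control parameter for a measured content value."""
    xs = [x for x, _ in lut]
    i = bisect.bisect_right(xs, measure)
    if i == 0:
        return lut[0][1]          # below the table: clamp to first entry
    if i == len(lut):
        return lut[-1][1]         # above the table: clamp to last entry
    (x0, y0), (x1, y1) = lut[i - 1], lut[i]
    t = (measure - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

print(select_gain(0.5))              # 1.0 (exact table entry)
print(round(select_gain(0.625), 3))  # 0.8 (midway between table entries)
```

A per-frame objective measure indexes the table, so already-sharp content receives a smaller gain than soft content, which is the "content dependent" behavior the abstract describes.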

  20. Recommendations for newborn screening for galactokinase deficiency: A systematic review and evaluation of Dutch newborn screening data.

    PubMed

    Stroek, Kevin; Bouva, Marelle J; Schielen, Peter C J I; Vaz, Frédéric M; Heijboer, Annemieke C; de Jonge, Robert; Boelen, Anita; Bosch, Annet M

    2018-03-21

    Galactokinase (GALK) deficiency causes cataract leading to severe developmental consequences unless treated early. Because of the easy prevention and rapid reversibility of cataract with treatment, the Dutch Health Council advised to include GALK deficiency in the Dutch newborn screening program. The aim of this study is to establish the optimal screening method and cut-off value (COV) for GALK deficiency screening by performing a systematic review of the literature of screening strategies and total galactose (TGAL) values and by evaluating TGAL values in the first week of life in a cohort of screened newborns in the Netherlands. Systematic literature search strategies in OVID MEDLINE and OVID EMBASE were developed and study selection, data collection and analyses were performed by two independent investigators. A range of TGAL values measured by the Quantase Neonatal Total Galactose screening assay in a cohort of Dutch newborns in 2007 was evaluated. Eight publications were included in the systematic review. All four studies describing screening strategies used TGAL as the primary screening marker combined with galactose-1-phosphate uridyltransferase (GALT) measurement that is used for classical galactosemia screening. TGAL COVs of 2200 μmol/L, 1665 μmol/L and 1110 μmol/L blood resulted in positive predictive values (PPV) of 100%, 82% and 10% respectively. TGAL values measured in the newborn period were reported for 39 GALK deficiency patients with individual values ranging from 3963 to 8159 μmol/L blood and 2 group values with mean 8892 μmol/L blood (SD ± 5243) and 4856 μmol/L blood (SD ± 461). Dutch newborn screening data of 72,786 newborns from 2007 provided a median TGAL value of 110 μmol/L blood with a range of 30-2431 μmol/L blood. 
    Based on the TGAL values measured in GALK deficiency patients reported in the literature and the TGAL measurements in the Dutch newborn screening cohort, we suggest performing GALK screening with TGAL as the primary marker at a COV of 2500 μmol/L blood, combined with the GALT enzyme activity measurement used in classical galactosemia screening, to ensure detection of GALK deficiency patients while minimizing false-positive referrals. Copyright © 2018. Published by Elsevier Inc.
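The proposed two-marker decision reduces to a simple predicate; a sketch using the COV and the TGAL ranges quoted above (`refer_for_galk` is a hypothetical helper name, and the rule is a simplification of real screening logic):

```python
def refer_for_galk(tgal, galt_normal, cov=2500.0):
    """Sketch of the proposed decision rule: refer when total galactose
    (TGAL, umol/L blood) exceeds the cut-off value while GALT activity is
    normal, the pattern expected in GALK deficiency (COV of 2500 umol/L
    blood from the text; elevated TGAL with deficient GALT instead points
    to classical galactosemia)."""
    return tgal > cov and galt_normal

# GALK patients in the literature had TGAL of roughly 3963-8159 umol/L blood;
# the 2007 Dutch cohort maximum was 2431 umol/L blood
print(refer_for_galk(3963.0, galt_normal=True))   # True  -> referred
print(refer_for_galk(2431.0, galt_normal=True))   # False -> not referred
```

With the cohort maximum below the COV and all reported patients above it, this cut-off separates the two groups cleanly, which is the basis of the recommendation.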

  1. Experimental radiative lifetimes, branching fractions, and oscillator strengths of some levels in Tm III

    NASA Astrophysics Data System (ADS)

    Yu, Qi; Wang, Xinghao; Li, Qiu; Gong, Yimin; Dai, Zhenwen

    2018-06-01

    Natural radiative lifetimes of five even-parity levels of Tm III were measured by the time-resolved laser-induced fluorescence method. Branching fraction measurements were performed based on the emission spectra of a hollow cathode lamp. By combining the measured branching fractions with the lifetime values reported in this work and in the literature, experimental transition probabilities and oscillator strengths for 11 transitions were derived for the first time.

  2. Measurement-to-measurement blood pressure variability is related to cognitive performance: the Maine Syracuse study.

    PubMed

    Crichton, Georgina E; Elias, Merrill F; Dore, Gregory A; Torres, Rachael V; Robbins, Michael A

    2014-11-01

    The objective was to investigate the association between variability in blood pressure (BP) and cognitive function, using sitting, standing, and reclining BP values as well as variability derived from all 15 measures. Previous studies have examined only sitting BP values and only a few cognitive measures. A secondary objective was to examine associations between BP variability and cognitive performance in hypertensive individuals stratified by treatment success. Cross-sectional analyses were performed on 972 participants of the Maine Syracuse Study for whom 15 serial BP clinic measures (5 sitting, 5 recumbent, and 5 standing) were obtained before testing of cognitive performance. Using all 15 measures, higher variability in systolic and diastolic BP was associated with poorer performance on multiple measures of cognitive performance, independent of demographic factors, cardiovascular risk factors, and pulse pressure. When sitting, reclining, and standing systolic BP values were compared, only variability in standing BP was related to measures of cognitive performance; for diastolic BP, however, variability in all 3 positions was related to cognitive performance. Mean BP values were weaker predictors of cognition. Furthermore, higher overall variability in both systolic and diastolic BP was associated with poorer cognitive performance in unsuccessfully treated hypertensive individuals (with BP ≥140/90 mm Hg), but these associations were not evident in those with controlled hypertension. © 2014 American Heart Association, Inc.
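The abstract does not state which variability index was used; for serial readings, common choices are the standard deviation or the coefficient of variation. A sketch on synthetic data (all readings below are invented for illustration):

```python
import statistics

def bp_variability(readings):
    """Within-person variability of serial BP readings, expressed as the
    sample standard deviation and the coefficient of variation in percent.
    (The abstract does not name the exact index; SD and CV are common.)"""
    sd = statistics.stdev(readings)
    cv = 100.0 * sd / statistics.mean(readings)
    return sd, cv

# 15 synthetic systolic readings: 5 sitting, 5 recumbent, 5 standing
systolic = [138, 142, 140, 139, 141, 135, 137, 136, 138, 134, 144, 148, 146, 145, 147]
sd, cv = bp_variability(systolic)
print(round(sd, 2), round(cv, 2))  # roughly 4.5 and 3.2
```

Computing the index over all 15 measures, or separately per posture, mirrors the study's comparison of overall versus position-specific variability.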

  3. Comparison of self-report-based and physical performance-based frailty definitions among patients receiving maintenance hemodialysis.

    PubMed

    Johansen, Kirsten L; Dalrymple, Lorien S; Delgado, Cynthia; Kaysen, George A; Kornak, John; Grimes, Barbara; Chertow, Glenn M

    2014-10-01

    A well-accepted definition of frailty includes measurements of physical performance, which may limit its clinical utility. In a cross-sectional study, we compared prevalence and patient characteristics under a frailty definition that uses self-reported function against the classic performance-based definition, and developed a modified self-report-based definition. Participants were prevalent adult patients receiving hemodialysis in 14 centers around San Francisco and Atlanta in 2009-2011. In the self-report-based frailty definition, a score lower than 75 on the Physical Function scale of the 36-Item Short Form Health Survey (SF-36) was substituted for gait speed and grip strength in the classic definition; a modified self-report definition with optimized Physical Function score cutoff points was derived in a development cohort (one-half of the sample) and validated in the other half. Performance-based frailty was defined as 3 or more of the following: weight loss, weakness, exhaustion, low physical activity, and slow gait speed. 387 patients (53%) were frail based on self-reported function, of whom 209 (29% of the cohort) also met the performance-based definition; only 23 (3%) met the performance-based definition alone. The self-report definition had 90% sensitivity, 64% specificity, 54% positive predictive value, 93% negative predictive value, and 72.5% overall accuracy. Intracellular water per kilogram of body weight and serum albumin, prealbumin, and creatinine levels were highest among nonfrail individuals, intermediate among those who were frail by self-report, and lowest among those who were also frail by performance. Age, percentage of body fat, and C-reactive protein level followed the opposite pattern. The modified self-report definition had better accuracy (84%; 95% CI, 79%-89%) and superior specificity (88%) and positive predictive value (67%). Our study did not address prediction of outcomes.
    Patients who meet the self-report-based but not the performance-based definition of frailty may represent an intermediate phenotype. A modified self-report definition can improve the accuracy of a questionnaire-based method of defining frailty. Published by Elsevier Inc.
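The reported sensitivity, specificity, and predictive values follow from a standard 2×2 table. The sketch below reconstructs the counts from the abstract, with a cohort size of about 730 inferred from 387 patients being 53% of the sample:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics for an index definition judged
    against a reference definition (here: self-report frailty vs the
    performance-based definition)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts reconstructed from the abstract: 209 frail by both definitions,
# 23 frail by performance only, 387 frail by self-report, cohort of ~730
tp, fn = 209, 23
fp = 387 - 209            # self-report frail, not performance frail
tn = 730 - 387 - 23       # frail by neither definition
m = diagnostic_metrics(tp, fp, fn, tn)
print({k: round(v, 3) for k, v in m.items()})
# Reproduces the reported 90% sensitivity, 64% specificity, 54% PPV,
# 93% NPV and 72.5% accuracy
```

That the abstract's five summary statistics all fall out of these four counts is a useful consistency check on the reconstruction.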

  4. VCSEL-based fiber optic link for avionics: implementation and performance analyses

    NASA Astrophysics Data System (ADS)

    Shi, Jieqin; Zhang, Chunxi; Duan, Jingyuan; Wen, Huaitao

    2006-11-01

    This paper presents a Gb/s fiber optic link with built-in test (BIT) capability, based on vertical-cavity surface-emitting laser (VCSEL) sources, for a next-generation military avionics bus. To accurately predict link performance, statistical methods and bit error rate (BER) measurements were examined. The results show that the 1 Gb/s fiber optic link meets the BER requirement and that the link margin can reach up to 13 dB. The analysis shows that the suggested photonic network may provide a high-performance, low-cost interconnection alternative for future military avionics.

  5. Diagnosis of helicopter gearboxes using structure-based networks

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Danai, Kourosh; Lewicki, David G.

    1995-01-01

    A connectionist network is introduced for fault diagnosis of helicopter gearboxes that incorporates knowledge of the gearbox structure and characteristics of the vibration features as its fuzzy weights. Diagnosis is performed by propagating the abnormal features of vibration measurements through this Structure-Based Connectionist Network (SBCN), the outputs of which represent the fault possibility values for individual components of the gearbox. The performance of this network is evaluated by applying it to experimental vibration data from an OH-58A helicopter gearbox. The diagnostic results indicate that the network's performance is comparable to that obtained from supervised pattern classification.

  6. Sex estimation based on tooth measurements using panoramic radiographs.

    PubMed

    Capitaneanu, Cezar; Willems, Guy; Jacobs, Reinhilde; Fieuws, Steffen; Thevissen, Patrick

    2017-05-01

    Sex determination is an important step in establishing the biological profile of unidentified human remains. The aims of the study were, first, to assess the degree of sexual dimorphism in permanent teeth based on digital tooth measurements performed on panoramic radiographs, and, second, to identify sex-related tooth position-specific measurements or combinations of such measurements and to assess their applicability for potential sex determination. Two hundred digital panoramic radiographs (100 males, 100 females; age range 22-34 years) were retrospectively collected from the dental clinic files of the Dentomaxillofacial Radiology Center of the University Hospitals Leuven, Belgium, and imported into image enhancement software. Tooth length- and width-related variables were measured on all teeth in the upper and lower left quadrants, and ratios of variables were calculated. Univariate and multivariate analyses were performed to quantify the sex-discriminative value of the tooth position-specific variables and their combinations. The mandibular and maxillary canines showed the greatest sexual dimorphism, and tooth length variables had the highest discriminative potential. Compared to single variables, combining variables or using ratios of variables did not substantially improve the discrimination between males and females. Considering that the discriminative ability values (area under the curve (AUC)) were not higher than 0.80, use of the currently studied dental variables for accurate sex estimation in forensic practice is not advocated.

  7. Land surface temperature measurements from EOS MODIS data

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming

    1995-01-01

    Significant progress has been made in the TIR instrumentation required to establish the spectral BRDF/emissivity knowledge base of land-surface materials and to validate land-surface temperature (LST) algorithms. The SIBRE (Spectral Infrared Bidirectional Reflectance and Emissivity) system and a TIR system for measuring spectral directional-hemispherical emissivity have been completed and tested successfully. The optical properties and performance features of the key components of these systems (including the spectrometer and the TIR source) have been characterized by integrated use of local standards (blackbody and reference plates). The stability of the spectrometer's performance was improved by a custom-designed and -built liquid cooling system. Methods and procedures for measuring spectral TIR BRDF and directional-hemispherical emissivity with these two systems have been verified in sample measurements. These TIR instruments have been used in the laboratory and the field, giving very promising results. The measured spectral emissivities of a water surface are very close to values calculated from well-established water refractive indices in the published literature. Preliminary results show that the TIR instruments can be used for validation of the MODIS LST algorithm in homogeneous test sites. The beta-3 version of the MODIS LST software is being prepared for delivery, scheduled for the early second half of this year.

  8. Measurement and research on the appearance of tongue board based on modification to discuss centrifugal fan air performance

    NASA Astrophysics Data System (ADS)

    Jwo, Ching-Song; Cheng, Tseng-Tang; Cho, Hung-Pin; Chiang, Wei-Tang; Chen, Sih-Li; Chen, Chien-Wei; Jian, Ling-You

    2011-12-01

    This paper presents a fan noise reduction method together with an analysis of its effect on several fan performance figures. The experimental approach compares two outlet tongue-plate forms (a flat tongue plate and a V-type tongue plate), measuring for each the noise level, supply air flow rate, shaft power consumption, operating current, and static pressure. Noise testing showed that at measurement locations 6 and 7 the flat tongue plate was about 1-1.5 dB(A) quieter than the V-type tongue plate. Air flow rate testing showed that at measurement locations 3 and 4 the flat tongue plate delivered 5-5.5% more flow than the V-type tongue plate. Shaft power testing at measurement models 3 and 4 showed the flat tongue plate exceeding the V-type by 8% and 5%; at measurement models 3, 4 and 5 it exceeded the V-type by 1%, 2.7% and 2.6%, while at measurement models 6 and 8 it was 2.9% and 2.3% below the V-type. Static pressure testing showed that at particular measurement models (3, 4, 8 and 9) the flat tongue plate exceeded the V-type value by 11.1%, 9%, 4.3% and 3.7%, respectively. These results suggest that, at the specific measurement points, the flat tongue plate performs better than the V-type tongue board.

  9. Arts Teacher Evaluation: How Did We Get Here?

    ERIC Educational Resources Information Center

    Shaw, Ryan D.

    2016-01-01

    Arts educators recently have found themselves in unfamiliar and sometimes-frustrating situations regarding their performance evaluations. The push for value-added models, effectiveness ratings based on schoolwide test score averages, portfolios, student learning objectives, and other measures of teacher effectiveness has dramatically changed arts…

  10. Measurement and comparison of the optical performance of an ophthalmic lens based on a Hartmann-Shack wavefront sensor in real viewing conditions.

    PubMed

    Zhou, Chuanqing; Wang, Weichao; Yang, Kun; Chai, Xinyu; Ren, Qiushi

    2008-12-01

    The spatially resolved wavefront aberrations of four types of ophthalmic lens were measured with a custom-built apparatus based on a Hartmann-Shack wavefront sensor and a specially designed positioning stage, and the wavefront aberrations of the progressive addition lenses (PALs) were compared. The results show that the aberration distribution depends strongly on the design philosophy, although the average root-mean-square values over the entire measurement areas show no significant difference. It is feasible to evaluate the optical performance of PALs through wavefront analysis, but how to meet the customized visual needs of patients and how to minimize the unwanted aberrations in certain special zones are important points that should be taken into account.

  11. Link-Based Similarity Measures Using Reachability Vectors

    PubMed Central

    Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin

    2014-01-01

    We present a novel approach for accurately computing link-based similarities among objects by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach that does not suffer from these problems. In the proposed approach, each target object is represented by a vector whose elements correspond to all the objects in the given data; the value of each element denotes the weight for the corresponding object. As this weight value, we propose to use the probability of reaching the specific object from the target object, computed using the "Random Walk with Restart" strategy. The similarity between two objects is then defined as the cosine similarity of their two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods, qualitatively and quantitatively, against existing link-based measures on two kinds of data sets: scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
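A toy version of the reachability-vector construction described above; the restart probability of 0.15, the iteration count, and the 4-node link graph are illustrative choices, not the paper's settings:

```python
import numpy as np

def rwr_vector(P, seed, restart=0.15, iters=200):
    """Reachability vector via Random Walk with Restart: iterate
    r <- (1 - c) * P @ r + c * e until effectively converged, where P is
    the column-stochastic transition matrix and e the seed indicator."""
    n = P.shape[0]
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1.0 - restart) * (P @ r) + restart * e
    return r

def cosine(u, v):
    """Cosine similarity of two reachability vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny undirected 4-node link graph; columns normalized to a transition matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=0)
sim = cosine(rwr_vector(P, 0), rwr_vector(P, 1))
print(round(sim, 2))  # nodes 0 and 1 link identically, so similarity is near 1
```

Because nodes 0 and 1 have the same neighbors, their reachability vectors differ only in the self-restart mass, and the cosine similarity comes out high, which is the behavior the measure is designed to capture.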

  12. Analysis of historical delta values for IAEA/LANL NDA training courses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William; Santi, Peter; Swinhoe, Martyn

    2009-01-01

    The Los Alamos National Laboratory (LANL) supports the International Atomic Energy Agency (IAEA) by providing training for IAEA inspectors in neutron and gamma-ray nondestructive assay (NDA) of nuclear material. Since 1980, all new IAEA inspectors have attended this two-week course at LANL, gaining hands-on experience in applying NDA techniques, procedures and analysis to measure plutonium and uranium nuclear material standards with well-known pedigrees. As part of the course the inspectors conduct an inventory verification exercise, which gives them the opportunity to test their abilities in performing verification measurements using the various NDA techniques. For an inspector, the verification of an item is nominally based on whether the measured assay value agrees with the declared value to within three times the historical delta value. The historical delta value represents the average difference between measured and declared values from previous measurements taken on similar material with the same measurement technology. If the measurement falls outside a limit of three times the historical delta value, the declaration is not verified. This paper uses measurement data from five years of IAEA courses to calculate a historical delta for five nondestructive assay methods: Gamma-ray Enrichment, Gamma-ray Plutonium Isotopics, Passive Neutron Coincidence Counting, Active Neutron Coincidence Counting and the Neutron Coincidence Collar. These historical deltas provide information on the precision and accuracy of these measurement techniques under realistic conditions.
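The three-delta acceptance rule described above is straightforward to express; the assay values in the example are hypothetical:

```python
def verify_declaration(measured, declared, historical_delta):
    """The acceptance rule described in the abstract: an item is verified
    when the measured assay value agrees with the declared value to within
    three times the historical delta value."""
    return abs(measured - declared) <= 3.0 * historical_delta

# Hypothetical enrichment assay: declared 4.40 wt% U-235, historical delta 0.05
print(verify_declaration(4.52, 4.40, 0.05))  # True: within the 0.15 limit
print(verify_declaration(4.60, 4.40, 0.05))  # False: declaration not verified
```

Tightening or loosening the historical delta directly trades false alarms against missed discrepancies, which is why estimating it from several years of course data is valuable.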

  13. Improved electronic measurement of the Boltzmann constant by Johnson noise thermometry

    NASA Astrophysics Data System (ADS)

    Qu, Jifeng; Benz, Samuel P.; Pollarolo, Alessio; Rogalla, Horst; Tew, Weston L.; White, Rod; Zhou, Kunli

    2015-10-01

    The unit of thermodynamic temperature, the kelvin, will be redefined in 2018 by fixing the value of the Boltzmann constant, k. The present CODATA recommended value of k is determined predominantly by acoustic gas-thermometry results. To provide a value of k based on different physical principles, purely electronic measurements of k were performed by using a Johnson noise thermometer to compare the thermal noise power of a 200 Ω sensing resistor immersed in a triple-point-of-water cell to the noise power of a quantum-accurate pseudo-random noise waveform of nominally equal noise power. Measurements integrated over a bandwidth of 575 kHz and a total integration time of about 33 d gave a measured value of k = 1.3806513(53) × 10⁻²³ J K⁻¹, for which the relative standard uncertainty is 3.9 × 10⁻⁶ and the relative offset from the CODATA 2010 value is +1.8 × 10⁻⁶.
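The physics underlying this measurement is the Nyquist relation, ⟨V²⟩ = 4kTRΔf. An idealized round-trip sketch (the real experiment avoids trusting amplifier gain and bandwidth by comparing against the quantum-accurate reference waveform):

```python
def boltzmann_from_noise(v2_mean, T, R, bandwidth):
    """Invert the Nyquist relation <V^2> = 4*k*T*R*df to recover the
    Boltzmann constant from a measured mean-square thermal-noise voltage.
    Idealized sketch: no amplifier or bandwidth corrections."""
    return v2_mean / (4.0 * T * R * bandwidth)

# Forward/backward consistency check with the experiment's parameters:
# 200-ohm resistor at the triple point of water (273.16 K), 575 kHz bandwidth
k_ref = 1.380649e-23                       # J/K (the 2018 fixed value)
v2 = 4.0 * k_ref * 273.16 * 200.0 * 575e3  # ~1.7e-12 V^2
print(boltzmann_from_noise(v2, 273.16, 200.0, 575e3))
```

The tiny mean-square voltage (~10⁻¹² V²) illustrates why a 33-day integration and a quantum-accurate reference are needed to reach parts-per-million uncertainty.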

  14. Web-based interactive data processing: application to stable isotope metrology.

    PubMed

    Verkouteren, R M; Lee, J N

    2001-08-01

    To address a fundamental need in stable isotope metrology, the National Institute of Standards and Technology (NIST) has established a web-based interactive data-processing system accessible through a common gateway interface (CGI) program on the internet site http://www.nist.gov/widps-co2. This is the first application of a web-based tool that improves the measurement traceability afforded by a series of NIST standard materials. Specifically, this tool promotes the proper usage of isotope reference materials (RMs) and improves the quality of reported data from extensive measurement networks. Through the International Atomic Energy Agency (IAEA), we have defined standard procedures for stable isotope measurement and data-processing, and have determined and applied consistent reference values for selected NIST and IAEA isotope RMs. Measurement data of samples and RMs are entered into specified fields on the web-based form. These data are submitted through the CGI program on a NIST Web server, where appropriate calculations are performed and results returned to the client. Several international laboratories have independently verified the accuracy of the procedures and algorithm for measurements of naturally occurring carbon-13 and oxygen-18 abundances and slightly enriched compositions up to approximately 150% relative to natural abundances. To conserve the use of the NIST RMs, users may determine value assignments for a secondary standard to be used in routine analysis. Users may also wish to validate proprietary algorithms embedded in their laboratory instrumentation, or specify the values of fundamental variables that are usually fixed in reduction algorithms to see the effect on the calculations. The results returned from the web-based tool are limited in quality only by the measurements themselves, and further value may be realized through the normalization function. 
When combined with stringent measurement protocols, two- to threefold improvements have been realized in the reproducibility of carbon-13 and oxygen-18 determinations across laboratories.
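The delta definition and a two-point normalization of the kind such a tool performs can be sketched as follows; the raw and reference values below are illustrative, not NIST assignments:

```python
def delta_per_mil(R_sample, R_standard):
    """Isotope delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (R_sample / R_standard - 1.0) * 1000.0

def normalize_two_point(delta_raw, raw_lo, raw_hi, true_lo, true_hi):
    """Two-point stretch-and-shift normalization against a pair of reference
    materials with accepted delta values, the kind of scale anchoring a
    data-processing tool performs (all values here are illustrative)."""
    scale = (true_hi - true_lo) / (raw_hi - raw_lo)
    return true_lo + (delta_raw - raw_lo) * scale

# A raw instrument delta of -10.0 per mil, normalized on a scale where two
# references read (-46.2, 0.3) raw against accepted values (-46.6, 0.0)
print(round(normalize_two_point(-10.0, -46.2, 0.3, -46.6, 0.0), 3))
```

The two reference materials pin both ends of the scale, so instrument offset and stretch cancel; this is the "normalization function" credited above with the two- to threefold improvement in cross-laboratory reproducibility.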

  15. Development of performance measures based on visibility for effective placement of aids to navigation

    NASA Astrophysics Data System (ADS)

    Fang, Tae Hyun; Kim, Yeon-Gyu; Gong, In-Young; Park, Sekil; Kim, Ah-Young

    2015-09-01

    To improve the challenging process of placing Aids to Navigation (AtoN), we propose performance measures that quantify the effect of such placement. The best placement of AtoNs is that from which the navigator can best recognize the information an AtoN provides. The visibility of AtoNs depends mostly on the light sources, the weather conditions and the position of the navigator. Visual recognition requires adequate contrast between the AtoN light source and the background light, so the performance measures can be formulated from the difference between these two lights. For simplification, this approach is based on the human-factor values suggested by the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA). The performance measures for AtoN placement can be evaluated with the AtoN Simulator, which is being developed by KIOST/KRISO under a Korean national research program. Simulations for evaluation were carried out at a waterway in Busan port, Korea.

  16. Quality of service routing in the differentiated services framework

    NASA Astrophysics Data System (ADS)

    Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo

    2001-02-01

    In this paper we present a quality-of-service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. The routing strategy is based on a quality-of-service metric that represents the impact that the delay and losses observed at each router have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. The distribution of the metric is triggered by a relative criterion with two thresholds, and the values advertised are the moving average of the last values measured.
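The advertisement trigger described above can be sketched as a moving average guarded by two relative thresholds; the threshold values, window size, and class name are assumptions for illustration:

```python
from collections import deque

class MetricAdvertiser:
    """Sketch of the advertisement trigger: keep a moving average of the
    last measurements and re-advertise only when that average deviates from
    the last advertised value by more than a relative threshold (one
    threshold for increases, one for decreases; values are illustrative)."""

    def __init__(self, up=0.20, down=0.10, window=5):
        self.up, self.down = up, down
        self.samples = deque(maxlen=window)
        self.advertised = None

    def measure(self, value):
        """Record a measurement; return True when a new value is advertised."""
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        if self.advertised is None:          # first sample: always advertise
            self.advertised = avg
            return True
        rel = (avg - self.advertised) / self.advertised
        if rel > self.up or rel < -self.down:
            self.advertised = avg
            return True
        return False

adv = MetricAdvertiser()
print([adv.measure(v) for v in [10, 10.5, 11, 14, 20]])
# [True, False, False, False, True]: only the first sample and the final
# jump past the relative threshold trigger an advertisement
```

Damping advertisements this way limits routing-update churn while still reacting to genuine shifts in delay or loss, which is the point of the relative two-threshold criterion.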

  17. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
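A control-flow sketch of the procedure above; the abstract does not give the formula that combines the measured performances with the predetermined I/O figure, so a simple product of ratios is assumed here, and all names and numbers are illustrative:

```python
def identify_failures(io_perf, node_perfs, expected_io, threshold):
    """Control-flow sketch of the patented procedure: derive a test value
    from the measured I/O-node performance, the measured test-node
    performance, and a predetermined I/O figure, then compare it with a
    tree-performance threshold. The combining formula is assumed."""
    mean_node = sum(node_perfs) / len(node_perfs)
    test_value = (io_perf / expected_io) * mean_node
    if test_value >= threshold:
        return []   # this node set looks healthy; select another set to test
    # Below threshold: single out under-performing nodes as potential problems
    return [i for i, p in enumerate(node_perfs) if p < 0.8 * mean_node]

# Healthy node set vs. a set containing one slow compute node
print(identify_failures(95.0, [1.0, 1.0, 1.0, 1.0], expected_io=100.0, threshold=0.9))  # []
print(identify_failures(95.0, [1.0, 0.4, 1.0, 1.0], expected_io=100.0, threshold=0.9))  # [1]
```

Testing subsets first and drilling into individual nodes only when a subset fails keeps the number of per-node measurements low on large trees, which mirrors the two-stage structure the claims describe.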

  18. Accuracy of the raw-data-based effective atomic numbers and monochromatic CT numbers for contrast medium with a dual-energy CT technique.

    PubMed

    Kawahara, Daisuke; Ozawa, Shuichi; Yokomachi, Kazushi; Tanaka, Sodai; Higaki, Toru; Fujioka, Chikako; Suzuki, Tatsuhiko; Tsuneda, Masato; Nakashima, Takeo; Ohno, Yoshimi; Nagata, Yasushi

    2018-02-01

    We evaluated the accuracy of raw-data-based effective atomic number (Zeff) values and monochromatic CT numbers for contrast material of varying iodine concentrations obtained using dual-energy CT. We used a tissue characterization phantom and varying concentrations of iodinated contrast medium, and compared the theoretical Zeff values with those provided by the manufacturer. The measured and theoretical monochromatic CT numbers at 40-130 keV were also compared. The average difference between the Zeff values of the lung (inhale) inserts in the tissue characterization phantom was 81.3%, while the average Zeff difference was within 8.4%. The average difference between the Zeff values for the varying concentrations of iodinated contrast medium was within 11.2%. For the varying concentrations of iodinated contrast medium, the differences between the measured and theoretical monochromatic CT values increased with decreasing monochromatic energy. The Zeff and monochromatic CT numbers in the tissue characterization phantom were reasonably accurate. The accuracy of the raw-data-based Zeff values was higher than that of image-based Zeff values in the tissue-equivalent phantom. The accuracy of the Zeff values in the contrast medium was in good agreement within the maximum SD found in the iodine concentration range of clinical dynamic CT imaging. Moreover, the optimum monochromatic energy for human tissue and iodinated contrast medium was found to be 70 keV. Advances in knowledge: The accuracy of the Zeff values and monochromatic CT numbers of the contrast medium derived from raw-data-based dual-energy CT can be sufficient under clinical conditions.
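Effective atomic number estimates are commonly built on a power-law mixture rule; a sketch of a Mayneord-type formula (the vendor's raw-data-based algorithm is not reproduced here, and the exponent 2.94 is the conventional choice, not necessarily the one used in the study):

```python
def effective_atomic_number(electron_fractions, m=2.94):
    """Power-law (Mayneord-type) effective atomic number:
    Zeff = (sum_i a_i * Z_i**m)**(1/m), with a_i the electron-fraction
    contribution of element i."""
    return sum(a * Z**m for Z, a in electron_fractions) ** (1.0 / m)

# Water: electron fractions ~0.2 for H (Z=1) and ~0.8 for O (Z=8)
print(round(effective_atomic_number([(1, 0.2), (8, 0.8)]), 2))  # ~7.42
```

The familiar Zeff ≈ 7.4 for water drops out of the mixture rule, and the strong Z-dependence of the formula is what makes Zeff estimates for high-Z iodinated contrast sensitive to the underlying dual-energy decomposition.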

  19. Diagnostic features of quantitative comb-push shear elastography for breast lesion differentiation

    PubMed Central

    Denis, Max; Gregory, Adriana; Mehrmohammadi, Mohammad; Kumar, Viksit; Meixner, Duane; Fazzio, Robert T.; Fatemi, Mostafa

    2017-01-01

    Background: Lesion stiffness measured by shear wave elastography has been shown to effectively separate benign from malignant breast masses. The aim of this study was to evaluate different aspects of the performance of Comb-push Ultrasound Shear Elastography (CUSE) in differentiating breast masses.
    Methods: With written signed informed consent, this HIPAA-compliant, IRB-approved prospective study included patients from April 2014 through August 2016 with breast masses identified on conventional imaging. Data from 223 patients (19-85 years, mean 59.93±14.96 years) with 227 suspicious breast masses identifiable by ultrasound (mean size 1.83±2.45 cm) were analyzed. CUSE was performed on all patients. Three regions of interest (ROIs), 3 mm in diameter each, were selected inside the lesion on the B-mode ultrasound in regions that also appeared in the corresponding shear wave map. Lesion elasticity values were measured in terms of the Young's modulus, and statistical analyses were performed in correlation with the pathology results.
    Results: Pathology revealed 108 lesions as malignant and 115 lesions as benign. Additionally, 4 lesions (BI-RADS 2 and 3) were considered benign and were not biopsied. Average lesion stiffness measured by CUSE yielded 84.26% sensitivity (91 of 108), 89.92% specificity (107 of 119), 85.6% positive predictive value, 89% negative predictive value and 0.91 area under the curve (P<0.0001). Stiffness maps showed spatial continuity, such that maximum and average elasticity did not give significantly different results (P > 0.21).
    Conclusion: CUSE was able to distinguish between benign and malignant breast masses with high sensitivity and specificity. The continuity of the stiffness maps allowed multiple quantification ROIs covering large areas of lesions to be chosen, with similar diagnostic performance based on average and maximum elasticity.
    The overall results of this study highlight the clinical value of CUSE in differentiating breast masses based on their stiffness. PMID:28257467

  20. Diagnostic features of quantitative comb-push shear elastography for breast lesion differentiation.

    PubMed

    Bayat, Mahdi; Denis, Max; Gregory, Adriana; Mehrmohammadi, Mohammad; Kumar, Viksit; Meixner, Duane; Fazzio, Robert T; Fatemi, Mostafa; Alizad, Azra

    2017-01-01

    Lesion stiffness measured by shear wave elastography has been shown to effectively separate benign from malignant breast masses. The aim of this study was to evaluate different aspects of Comb-push Ultrasound Shear Elastography (CUSE) performance in differentiating breast masses. With written signed informed consent, this HIPAA-compliant, IRB-approved prospective study included patients from April 2014 through August 2016 with breast masses identified on conventional imaging. Data from 223 patients (19-85 years, mean 59.93±14.96 years) with 227 suspicious breast masses identifiable by ultrasound (mean size 1.83±2.45 cm) were analyzed. CUSE was performed on all patients. Three regions of interest (ROI), 3 mm in diameter each, were selected inside the lesion on the B-mode ultrasound image that also appeared in the corresponding shear wave map. Lesion elasticity values were measured in terms of the Young's modulus. Statistical analyses were performed against the pathology results. Pathology revealed 108 lesions as malignant and 115 lesions as benign. Additionally, 4 lesions (BI-RADS 2 and 3) were considered benign and were not biopsied. Average lesion stiffness measured by CUSE resulted in 84.26% sensitivity (91 of 108), 89.92% specificity (107 of 119), 85.6% positive predictive value, 89% negative predictive value, and 0.91 area under the curve (P<0.0001). Stiffness maps showed spatial continuity such that maximum and average elasticity did not have significantly different results (P > 0.21). CUSE was able to distinguish between benign and malignant breast masses with high sensitivity and specificity. Continuity of the stiffness maps allowed multiple quantification ROIs to be chosen covering large areas of the lesions, with similar diagnostic performance based on average and maximum elasticity. The overall results of this study highlight the clinical value of CUSE in differentiating breast masses based on their stiffness.
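    The sensitivity, specificity, and predictive values reported above follow the standard confusion-matrix definitions. A minimal sketch with generic counts (not an attempt to reproduce the paper's exact tallies):

    ```python
    def diagnostic_metrics(tp, fn, tn, fp):
        """Standard diagnostic-accuracy measures from a 2x2 confusion matrix:
        tp/fn are counts from the diseased group, tn/fp from the healthy group."""
        return {
            "sensitivity": tp / (tp + fn),  # true-positive rate
            "specificity": tn / (tn + fp),  # true-negative rate
            "ppv": tp / (tp + fp),          # positive predictive value
            "npv": tn / (tn + fn),          # negative predictive value
        }
    ```

    For example, with 91 of 108 malignant masses correctly flagged, sensitivity is 91/108 ≈ 84.3%, matching the figure quoted in the abstract.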

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batcheller, Thomas Aquinas; Taylor, Dean Dalton

    Idaho Nuclear Technology and Engineering Center 300,000-gallon vessel WM-189 was filled in late 2001 with concentrated sodium-bearing waste (SBW). Three airlifted liquid samples and a steam-jetted slurry sample were obtained for quantitative analysis and characterization of WM-189 liquid-phase SBW and tank heel sludge. Estimates were provided for most of the reported data values, based on the greater of (a) the analytical uncertainty and (b) the variation of analytical results between nominally similar samples. A consistency check on the data was performed by comparing the total mass of dissolved solids in the liquid, as measured gravimetrically from a dried sample, with the corresponding value obtained by summing the masses of cations and anions in the liquid, based on the reported analytical data. After reasonable adjustments to the nitrate and oxygen concentrations, satisfactory consistency between the two results was obtained. A similar consistency check was performed on the reported compositional data for sludge solids from the steam-jetted sample. In addition to the compositional data, various other analyses were performed: the particle size distribution was measured for the sludge solids, sludge settling tests were performed, and viscosity measurements were made. The WM-189 characterization results were compared with those for WM-180 and with other Tank Farm Facility tank characterization data. A 2-liter batch of WM-189 simulant was prepared and a clear, stable solution was obtained, based on a general procedure for mixing SBW simulant developed by Dr. Jerry Christian. This WM-189 SBW simulant is considered suitable for laboratory testing for process development.

  2. Performance analysis of Supply Chain Management with the Supply Chain Operation Reference model

    NASA Astrophysics Data System (ADS)

    Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; purwadi, Adi

    2018-04-01

    This research was conducted at PT. Shamrock Manufacturing Corpora, a company that must think creatively to implement its competitive strategy by producing higher-quality goods/services at lower cost. It is therefore necessary to measure Supply Chain Management performance in order to improve competitiveness, and the company is required to optimize its production output to meet export quality standards. This research begins with the creation of initial dimensions based on the Supply Chain Management process, i.e., Plan, Source, Make, Delivery, and Return, with a hierarchy based on the Supply Chain Operation Reference (SCOR) attributes Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification becomes the benchmark in performance measurement, while Snorm De Boer normalization serves to put Key Performance Indicator values on a common scale. The Analytical Hierarchy Process is used to help determine priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora shows that Responsiveness (0.649) carries a higher weight (priority) than the other attributes. The performance analysis using the SCOR model indicates that Supply Chain Management performance at PT. Shamrock Manufacturing Corpora is good, since its monitoring score falls between 50 and 100.
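    The normalization step mentioned above can be sketched briefly; the Snorm De Boer formula rescales each KPI onto a common 0-100 range so dissimilar indicators can be weighted and aggregated (the 50-100 = "good" reading follows the scheme the paper applies):

    ```python
    def snorm_de_boer(score, s_min, s_max, larger_is_better=True):
        """Snorm De Boer normalization: map a KPI score onto 0-100 given its
        worst (s_min) and best (s_max) observed values. For cost-like KPIs,
        where lower is better, the scale is inverted."""
        if larger_is_better:
            return 100.0 * (score - s_min) / (s_max - s_min)
        return 100.0 * (s_max - score) / (s_max - s_min)
    ```

    For instance, a benefit-type KPI of 75 on a 50-100 range normalizes to 50, i.e., the midpoint of the monitoring scale.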

  3. Using Publicly Available Data to Construct a Transparent Measure of Health Care Value: A Method and Initial Results.

    PubMed

    Weeks, William B; Kotzbauer, Gregory R; Weinstein, James N

    2016-06-01

    Using publicly available Hospital Compare and Medicare data, we found a substantial range of hospital-level performance on quality, expenditure, and value measures for 4 common reasons for admission. Hospitals' ability to consistently deliver high-quality, low-cost care varied across the different reasons for admission. With the exception of coronary artery bypass grafting, hospitals that provided the highest-value care had more beds and a larger average daily census than those providing the lowest-value care. Transparent data like those we present can empower patients to compare hospital performance, make better-informed treatment decisions, and decide where to obtain care for particular health care problems. In the United States, the transition from volume to value dominates discussions of health care reform. While shared decision making might help patients determine whether to get care, transparency in procedure- and hospital-specific value measures would help them determine where to get care. Using Hospital Compare and Medicare expenditure data, we constructed a hospital-level measure of value from a numerator composed of quality-of-care measures (satisfaction, use of timely and effective care, and avoidance of harms) and a denominator composed of risk-adjusted 30-day episode-of-care expenditures for acute myocardial infarction (1,900 hospitals), coronary artery bypass grafting (884 hospitals), colectomy (1,252 hospitals), and hip replacement surgery (1,243 hospitals). We found substantial variation in aggregate measures of quality, cost, and value at the hospital level. Value calculation provided additional richness when compared to assessment based on quality or cost alone: about 50% of hospitals in an extreme quality quintile (and about 65% in an extreme cost quintile) were in the same extreme value quintile.
    With the exception of coronary artery bypass grafting, higher-value hospitals were larger and had a higher average daily census than lower-value hospitals, but were no more likely to be accredited by the Joint Commission or to have a residency program accredited by the Accreditation Council for Graduate Medical Education. While future efforts to compose value measures will certainly be modified and expanded to examine other reasons for admission, the construct that we present could allow patients to transparently compare procedure- and hospital-specific quality, spending, and value and empower them to decide where to obtain care. © 2016 Milbank Memorial Fund.
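    The value construct described above divides a composite quality numerator by a risk-adjusted expenditure denominator. A minimal sketch, assuming equal weighting of the quality components (the study's exact weighting and risk adjustment are not reproduced here):

    ```python
    def hospital_value(quality_scores, episode_expenditure):
        """Value = composite quality / risk-adjusted 30-day episode-of-care
        expenditure. quality_scores holds component scores (e.g., satisfaction,
        timely and effective care, avoidance of harms) each scaled 0-1; equal
        weighting is an illustrative assumption."""
        composite_quality = sum(quality_scores) / len(quality_scores)
        return composite_quality / episode_expenditure
    ```

    Ranking hospitals by this ratio and cutting the distribution into quintiles yields the kind of quality-vs-cost-vs-value comparison the study reports.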

  4. Value-based formulas for purchasing. Does managed care offer value to society?

    PubMed

    Priester, R

    1997-01-01

    To assess whether managed care is, all things considered, a good investment for our society, we can measure its performance relative to five essential health care goals: promote efficiency; expand access; improve quality; preserve freedom of choice; and protect patient advocacy. These goals, which have shaped and continue to shape health care policy, define what is important to us in our health care system. Concerns about managed care's ability to advance these goals and thus to offer value are heightened if recently observed trends continue.

  5. Energy loss straggling in Aluminium foils for Li and C ions in fractional energy loss limits (ΔE/E) ∼10-60%

    NASA Astrophysics Data System (ADS)

    Diwan, P. K.; Kumar, Sunil; Kumar, Shyam; Sharma, V.; Khan, S. A.; Avasthi, D. K.

    2016-02-01

    The energy loss straggling of Li and C ions in Al foils of various thicknesses has been measured, within the fractional energy loss limit (∆E/E) ∼ 10-60%. These measurements were performed using the 15UD Pelletron accelerator facility at the Inter University Accelerator Centre (IUAC), New Delhi, India. The measured straggling values have been compared with the corresponding predictions of popularly used collisional straggling formulations, viz., Bohr, Lindhard-Scharff, Bethe-Livingston, and Titeica. In addition, the experimental data have been compared to the Yang et al. empirical formula and to the Close Form Model recently proposed by Montanari et al. The straggling values derived from the Titeica theory were found to be in better agreement with the measured values than those from the other straggling formulations. The charge-exchange straggling component has been estimated from the measured data based on the Titeica theory. Finally, a function of the ion effective charge and the fractional energy loss within the target has been fitted to this straggling component.
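    Of the collisional formulations compared above, Bohr's is the simplest: an energy-independent variance Ω_B² = 4π z₁² e⁴ Z₂ N Δx. A sketch evaluated with e² = 1.44 eV·nm in Gaussian units (the other formulations add velocity-dependent corrections and are not reproduced here):

    ```python
    import math

    def bohr_straggling_keV(z1, Z2, areal_density_atoms_cm2):
        """Bohr collisional straggling Omega_B = sqrt(4*pi*z1^2*e^4*Z2*N*dx),
        where N*dx is the target areal density in atoms/cm^2 and
        e^2 = 1.44e-7 eV*cm. Returns Omega_B in keV."""
        e2 = 1.44e-7  # e^2 in eV*cm (Gaussian units)
        omega2_eV2 = 4.0 * math.pi * z1**2 * e2**2 * Z2 * areal_density_atoms_cm2
        return math.sqrt(omega2_eV2) / 1e3
    ```

    For Li ions (z1 = 3) traversing about 1 µm of Al (Z2 = 13, roughly 6.0×10¹⁸ atoms/cm²), this gives on the order of 13-14 keV.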

  6. Performing a local barrier operation

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-03-04

    Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.
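    The counter arithmetic in the claim above can be sketched in software. A minimal Python analogue, with a lock standing in for the compute node's hardware atomic counter (the patent targets node hardware, not Python, and the base-value derivation below assumes the counter advances by exactly the task count per barrier use):

    ```python
    import threading

    class LocalBarrier:
        """Counter-based local barrier following the claim's recipe: read the
        present counter value, derive base and target values, atomically
        increment to join, then spin until the counter reaches the target."""

        def __init__(self, total_tasks):
            self.total = total_tasks
            self._counter = 0
            self._lock = threading.Lock()  # stand-in for an atomic counter

        def join(self):
            with self._lock:
                present = self._counter
            # base value: the counter's value before any task joined this
            # barrier instance (valid if the counter grows by `total` per use)
            base = (present // self.total) * self.total
            # target value: the counter's value once all tasks have joined
            target = base + self.total
            with self._lock:  # join the barrier: atomically increment
                self._counter += 1
            # spin until every task has joined
            while True:
                with self._lock:
                    if self._counter >= target:
                        return
    ```

    Because base and target are recomputed from the live counter, the same counter can serve successive barrier episodes without being reset, which is the point of the claimed calculation.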

  7. Performing a local barrier operation

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-03-04

    Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value of the counter, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.

  8. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
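    As a sketch of the forecasting step the study evaluates, a least-squares linear trend (the best-performing method) can project each parameter value's next performance, which is then turned into selection probabilities. The proportional-selection rule and the floor on predictions are illustrative assumptions, not the authors' exact scheme:

    ```python
    import numpy as np

    def predict_next(history):
        """Fit a linear trend to one parameter value's past performance
        observations and extrapolate one step ahead."""
        t = np.arange(len(history))
        slope, intercept = np.polyfit(t, history, 1)
        return slope * len(history) + intercept

    def selection_probabilities(histories):
        """Map each candidate parameter value's predicted performance to a
        selection probability (proportional selection; the small floor keeps
        negative or zero predictions from breaking the normalization)."""
        preds = np.array([max(predict_next(h), 1e-9) for h in histories])
        return preds / preds.sum()
    ```

    A parameter value whose performance is trending upward thus receives a larger share of future trials than one trending downward.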

  9. 3-D surface profilometry based on modulation measurement by applying wavelet transform method

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao

    2017-01-01

    A new analysis of 3-D surface profilometry based on the modulation measurement technique using the Wavelet Transform method is proposed. As a tool excelling in multi-resolution analysis and localization in the time and frequency domains, the Wavelet Transform method, with its good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially in the analysis of complex objects, more details of the measured object are retained. In this paper, the theoretical derivation of the Wavelet Transform method that obtains the modulation values from a captured fringe pattern is given. Both computer simulations and a preliminary experiment are used to show the validity of the proposed method through comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method performs better than the Fourier Transform method in modulation retrieval.
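    A minimal sketch of the modulation-retrieval step with a complex Morlet-style CWT; the carrier frequencies, scale choice, and normalization below are illustrative, not the paper's parameters:

    ```python
    import numpy as np

    def wavelet_modulation(fringe, freqs):
        """Retrieve the modulation distribution of a 1-D fringe pattern:
        at each pixel the modulation is twice the largest wavelet magnitude
        across candidate carrier frequencies (ridge extraction). Normalizing
        each wavelet by its envelope's L1 norm makes the magnitude read
        directly in modulation units."""
        signal = fringe - fringe.mean()
        best = np.zeros_like(signal, dtype=float)
        for f in freqs:
            s = 1.0 / f                                # scale tied to frequency
            t = np.arange(-4.0 * s, 4.0 * s + 1.0)
            envelope = np.exp(-t**2 / (2.0 * s**2))
            wavelet = envelope * np.exp(2j * np.pi * f * t)
            wavelet /= envelope.sum()                  # L1 normalization
            mag = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
            best = np.maximum(best, mag)
        return 2.0 * best
    ```

    On a synthetic fringe I(x) = A + B(x)·cos(2πf₀x), the returned profile tracks B(x) away from the edges, which is the quantity the height reconstruction in modulation-measurement profilometry depends on.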

  10. Energy and Timing Measurement with Time-Based Detector Readout for PET Applications: Principle and Validation with Discrete Circuit Components

    PubMed Central

    Sun, Xishan; Lan, Allan K.; Bircher, Chad; Deng, Zhi; Liu, Yinong; Shao, Yiping

    2011-01-01

    A new signal processing method for PET applications has been developed, using discrete circuit components to measure the energy and timing of a gamma interaction based solely on digital timing processing, without using an analog-to-digital converter (ADC) or a constant fraction discriminator (CFD). A single-channel discrete-component time-based readout (TBR) circuit was implemented on a PC board. Initial circuit functionality and performance evaluations have been conducted. The accuracy and linearity of the signal amplitude measurement were excellent, as measured with test pulses. The timing accuracy measured from test pulses reached less than 300 ps, a value limited mainly by the timing jitter of the prototype electronics circuit. Suitable energy and coincidence timing resolutions (~18% and ~1.0 ns) were both achieved with 3 × 3 × 20 mm3 LYSO scintillator and photomultiplier-tube-based detectors. With its relatively simple circuit and low cost, TBR is expected to be suitable front-end signal readout electronics for compact PET or other radiation detectors that require reading a large number of detector channels and demand high performance in energy and timing measurement. PMID:21743761

  11. Michigan's Physician Group Incentive Program offers a regional model for incremental 'fee for value' payment reform.

    PubMed

    Share, David A; Mason, Margaret H

    2012-09-01

    Blue Cross Blue Shield of Michigan partnered with providers across the state to create an innovative, "fee for value" physician incentive program that would deliver high-quality, efficient care. The Physician Group Incentive Program rewards physician organizations-formal groups of physicians and practices that can accept incentive payments on behalf of their members-based on the number of quality and utilization measures they adopt, such as generic drug dispensing rates, and on their performance on these measures across their patient populations. Physicians also receive payments for implementing a range of patient-centered medical home capabilities, such as patient registries, and they receive higher fees for office visits for incorporating these capabilities into routine practice while also improving performance. Taken together, the incentive dollars, fee increases, and care management payments amount to a potential increase in reimbursement of 40 percent or more from Blue Cross Blue Shield of Michigan for practices designated as high-performing patient-centered medical homes. At the same time, we estimate that implementing the patient-centered medical home capabilities was associated with $155 million in lower medical costs in program year 2011 for Blue Cross Blue Shield of Michigan members. We intend to devote a higher percentage of reimbursement over time to communities of caregivers that offer high-value, system-based care, and a lower percentage of reimbursement to individual physicians on a service-specific basis.

  12. Crossover and maximal fat-oxidation points in sedentary healthy subjects: methodological issues.

    PubMed

    Gmada, N; Marzouki, H; Haboubi, M; Tabka, Z; Shephard, R J; Bouhlel, E

    2012-02-01

    Our study aimed to assess the influence of protocol on the crossover point and maximal fat-oxidation (LIPOX(max)) values in sedentary, but otherwise healthy, young men. Maximal oxygen intake was assessed in 23 subjects, using a progressive maximal cycle ergometer test. Twelve sedentary males (aged 20.5±1.0 years) whose directly measured maximal aerobic power (MAP) values were lower than their theoretical maximal values (tMAP) were selected from this group. These individuals performed, in random sequence, three submaximal graded exercise tests, separated by three-day intervals; work rates were based on the tMAP in one test and on MAP in the remaining two. The third test was used to assess the reliability of data. Heart rate, respiratory parameters, blood lactate, the crossover point and LIPOX(max) values were measured during each of these tests. The crossover point and LIPOX(max) values were significantly lower when the testing protocol was based on tMAP rather than on MAP (P<0.001). Respiratory exchange ratios were significantly lower with MAP than with tMAP at 30, 40, 50 and 60% of maximal aerobic power (P<0.01). At the crossover point, lactate and 5-min postexercise oxygen consumption (EPOC(5 min)) values were significantly higher using tMAP rather than MAP (P<0.001). During the first 5 min of recovery, EPOC(5 min) and blood lactate were significantly correlated (r=0.89; P<0.001). Our data show that, to assess the crossover point and LIPOX(max) values for research purposes, the protocol must be based on the measured MAP rather than on a theoretical value. Such a determination should improve individualization of training for initially sedentary subjects. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
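    The fat- and carbohydrate-oxidation rates underlying the crossover point and LIPOX(max) are conventionally computed from graded-exercise gas-exchange data with Frayn's stoichiometric equations; a sketch (protein oxidation neglected, as is standard, and the equations are the textbook ones rather than anything specific to this study):

    ```python
    def substrate_oxidation(vo2, vco2):
        """Frayn's indirect-calorimetry equations: whole-body fat and
        carbohydrate oxidation rates in g/min, given VO2 and VCO2 in L/min.
        The crossover point is the intensity at which the energy yield from
        carbohydrate overtakes that from fat."""
        fat_g_min = 1.67 * vo2 - 1.67 * vco2
        cho_g_min = 4.55 * vco2 - 3.21 * vo2
        return fat_g_min, cho_g_min
    ```

    At a respiratory exchange ratio of 1.0 (VCO2 = VO2) fat oxidation is zero, which is why the RER comparisons between the MAP- and tMAP-based protocols translate directly into different substrate-use profiles.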

  13. A New Method for the Evaluation and Prediction of Base Stealing Performance.

    PubMed

    Bricker, Joshua C; Bailey, Christopher A; Driggers, Austin R; McInnis, Timothy C; Alami, Arya

    2016-11-01

    Bricker, JC, Bailey, CA, Driggers, AR, McInnis, TC, and Alami, A. A new method for the evaluation and prediction of base stealing performance. J Strength Cond Res 30(11): 3044-3050, 2016-The purposes of this study were to evaluate a new method using electronic timing gates to monitor base stealing performance in terms of reliability, differences between it and traditional stopwatch-collected times, and its ability to predict base stealing performance. Twenty-five healthy collegiate baseball players performed maximal effort base stealing trials with a right- and a left-handed pitcher. An infrared electronic timing system was used to calculate the reaction time (RT) and total time (TT), whereas coaches' times (CT) were recorded with digital stopwatches. Reliability of the timing gate method (TGM) was evaluated with intraclass correlation coefficients (ICCs) and the coefficient of variation (CV). Differences between the TGM and traditional CT were calculated with paired-samples t tests and Cohen's d effect size estimates. Base stealing performance predictability of the TGM was evaluated with Pearson's bivariate correlations. Acceptable relative reliability was observed (ICCs 0.74-0.84). Absolute reliability measures were acceptable for TT (CVs = 4.4-4.8%), but were elevated for RT (CVs = 32.3-35.5%). Statistical and practical differences were found between TT and CT (right p = 0.00, d = 1.28 and left p = 0.00, d = 1.49). The TGM TT seems to be a decent predictor of base stealing performance (r = -0.49 to -0.61). The authors recommend using the TGM used in this investigation for athlete monitoring because it was found to be reliable, seems to be more precise than traditional CT measured with a stopwatch, provides an additional variable of value (RT), and may predict future performance.
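    The absolute-reliability statistic quoted above is the coefficient of variation. A one-line sketch (using the sample standard deviation, a common but not universal convention):

    ```python
    import statistics

    def coefficient_of_variation_pct(trials):
        """CV% = 100 * SD / mean across an athlete's repeated trials. By the
        abstract's reading, the TT values (CV about 4-5%) are acceptable,
        while the RT values (about 32-36%) are not."""
        return 100.0 * statistics.stdev(trials) / statistics.mean(trials)
    ```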

  14. Value-based medicine: evidence-based medicine and beyond.

    PubMed

    Brown, Gary C; Brown, Melissa M; Sharma, Sanjay

    2003-09-01

    Value-based medicine is the practice of medicine emphasizing the value received from an intervention. Value is measured by objectively quantifying: 1) the improvement in quality of life and/or 2) the improvement in length of life conferred by an intervention. Evidence-based medicine often measures the improvement gained in length of life, but generally ignores the importance of quality-of-life improvement or loss. Value-based medicine incorporates the best features of evidence-based medicine and takes evidence-based data to a higher level by incorporating the quality-of-life perceptions of patients with a disease concerning the value of an intervention. Inherent in value-based medicine are the costs associated with an intervention. The resources expended for the value gained in value-based medicine are measured with cost-utility analysis in terms of US dollars/QALY (money spent per quality-adjusted life-year gained). A review of the current status and the likely future of value-based medicine is addressed herein.
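    The cost-utility analysis described reduces to dollars spent per quality-adjusted life-year gained. A minimal sketch that ignores discounting (real analyses discount future QALYs and costs, and derive the utility gain from patient-reported preferences):

    ```python
    def cost_per_qaly(incremental_cost, utility_gain, years_of_benefit):
        """$/QALY: QALYs gained = (utility improvement on a 0-1 scale) x
        (years over which the benefit persists). Lower ratios indicate
        greater value for the resources expended."""
        qalys_gained = utility_gain * years_of_benefit
        return incremental_cost / qalys_gained
    ```

    For example, an intervention costing $50,000 that raises utility by 0.2 for 10 years gains 2 QALYs, or $25,000/QALY.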

  15. Strategic Measures of Teacher Performance

    ERIC Educational Resources Information Center

    Milanowski, Anthony

    2011-01-01

    Managing the human capital in education requires measuring teacher performance. To measure performance, administrators need to combine measures of practice with measures of outcomes, such as value-added measures, and three measurement systems are needed: classroom observations, performance assessments or work samples, and classroom walkthroughs.…

  16. Deep breathing exercises performed 2 months following cardiac surgery: a randomized controlled trial.

    PubMed

    Westerdahl, Elisabeth; Urell, Charlotte; Jonsson, Marcus; Bryngelsson, Ing-Liss; Hedenström, Hans; Emtner, Margareta

    2014-01-01

    Postoperative breathing exercises are recommended to cardiac surgery patients. Instructions concerning how long patients should continue exercises after discharge vary, and the significance of treatment needs to be determined. Our aim was to assess the effects of home-based deep breathing exercises performed with a positive expiratory pressure device for 2 months following cardiac surgery. The study design was a prospective, single-blinded, parallel-group, randomized trial. Patients performing breathing exercises 2 months after cardiac surgery (n = 159) were compared with a control group (n = 154) performing no breathing exercises after discharge. The intervention consisted of 30 slow deep breaths performed with a positive expiratory pressure device (10-15 cm H2O), 5 times a day, during the first 2 months after surgery. The outcomes were lung function measurements, oxygen saturation, thoracic excursion mobility, subjective perception of breathing and pain, patient-perceived quality of recovery (40-Item Quality of Recovery score), health-related quality of life (36-Item Short Form Health Survey), and self-reported respiratory tract infection/pneumonia and antibiotic treatment. Two months postoperatively, the patients had significantly reduced lung function, with a mean decrease in forced expiratory volume in 1 second to 93 ± 12% (P< .001) of preoperative values. Oxygenation had returned to preoperative values, and 5 of 8 aspects in the 36-Item Short Form Health Survey were improved compared with preoperative values (P< .01). There were no significant differences between the groups in any of the measured outcomes. No significant differences in lung function, subjective perceptions, or quality of life were found between patients performing home-based deep breathing exercises and control patients 2 months after cardiac surgery.

  17. DSN 70-meter antenna microwave optics design and performance improvements. Part 2: Comparison with measurements

    NASA Technical Reports Server (NTRS)

    Bathker, D. A.; Slobin, S. D.

    1989-01-01

    The measured Deep Space Network (DSN) 70-meter antenna performance at S- and X-bands is compared with the design expectations. A discussion of natural radio-source calibration standards is given. New estimates of DSN 64-meter antenna performance are given, based on improved values of calibration source flux and size correction. A comparison of the 64- and 70-meter performances shows that average S-band peak gain improvement is 1.94 dB, compared with a design expectation of 1.77 dB. At X-band, the average peak gain improvement is 2.12 dB, compared with the (coincidentally similar) design expectation of 1.77 dB. The average measured 70-meter S-band peak gain exceeds the nominal design-expected gain by 0.02 dB; the average measured 70-meter X-band peak gain is 0.14 dB below the nominal design-expected gain.

  18. Guiding principles and checklist for population-based quality metrics.

    PubMed

    Krishnan, Mahesh; Brunelli, Steven M; Maddux, Franklin W; Parker, Thomas F; Johnson, Douglas; Nissenson, Allen R; Collins, Allan; Lacson, Eduardo

    2014-06-06

    The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, the Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustment that is necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures (quality metrics) that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: importance (evidence, performance gap, and priority/impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance the quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers.
Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and most importantly, reflect what represents the best care to improve patient outcomes. Copyright © 2014 by the American Society of Nephrology.

  19. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute), as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments, and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
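    The iterative reuse of estimates that distinguishes IKNNimpute from plain KNNimpute can be sketched as follows; initialization by row means, Euclidean distances over all columns, and inverse-distance weights are illustrative choices, not necessarily the authors' exact ones:

    ```python
    import numpy as np

    def iknn_impute(X, k=3, n_iter=5):
        """Iterative KNN imputation sketch: NaNs are first filled with row
        means, then repeatedly re-estimated as inverse-distance-weighted
        means of the k nearest rows, so rows (genes) with missing values
        contribute to, and benefit from, later estimates."""
        X = np.array(X, dtype=float)
        missing = np.isnan(X)
        row_means = np.nanmean(X, axis=1)
        X[missing] = np.take(row_means, np.where(missing)[0])
        for _ in range(n_iter):
            for i, j in zip(*np.where(missing)):
                dist = np.sqrt(((X - X[i]) ** 2).sum(axis=1))
                dist[i] = np.inf                     # exclude the row itself
                nn = np.argsort(dist)[:k]
                w = 1.0 / (dist[nn] + 1e-12)         # inverse-distance weights
                X[i, j] = np.dot(w, X[nn, j]) / w.sum()
        return X
    ```

    Because every pass reuses the most recent estimates, rows that themselves contain missing values still inform their neighbours, which is the mechanism the abstract credits for the gains at high missing rates.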

  20. Prediction of successful memory encoding based on single-trial rhinal and hippocampal phase information.

    PubMed

    Höhne, Marlene; Jahanbekam, Amirhossein; Bauckhage, Christian; Axmacher, Nikolai; Fell, Juergen

    2016-10-01

    Mediotemporal EEG characteristics are closely related to long-term memory formation. It has been reported that rhinal and hippocampal EEG measures reflecting the stability of phases across trials are better suited to distinguish subsequently remembered from forgotten trials than event-related potentials or amplitude-based measures. Theoretical models suggest that the phase of EEG oscillations reflects neural excitability and influences cellular plasticity. However, while previous studies have shown that the stability of phase values across trials is indeed a relevant predictor of subsequent memory performance, the effect of absolute single-trial phase values has been little explored. Here, we reanalyzed intracranial EEG recordings from the mediotemporal lobe of 27 epilepsy patients performing a continuous word recognition paradigm. Two-class classification using a support vector machine was performed to predict subsequently remembered vs. forgotten trials based on individually selected frequencies and time points. We demonstrate that it is possible to successfully predict single-trial memory formation in the majority of patients (23 out of 27) based on only three single-trial phase values given by a rhinal phase, a hippocampal phase, and a rhinal-hippocampal phase difference. Overall classification accuracy across all subjects was 69.2% choosing frequencies from the range between 0.5 and 50 Hz and time points from the interval between -0.5 s and 2 s. For 19 patients, above chance prediction of subsequent memory was possible even when choosing only time points from the prestimulus interval (overall accuracy: 65.2%). Furthermore, prediction accuracies based on single-trial phase surpassed those based on single-trial power. Our results confirm the functional relevance of mediotemporal EEG phase for long-term memory operations and suggest that phase information may be utilized for memory enhancement applications based on deep brain stimulation. 
Copyright © 2016 Elsevier Inc. All rights reserved.
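
    The classification step can be sketched with scikit-learn's support vector machine on synthetic data; the three features stand in for the rhinal phase, hippocampal phase, and rhinal-hippocampal phase difference, and the Gaussian class distributions below are purely illustrative, not the patients' data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# columns: rhinal phase, hippocampal phase, rhinal-hippocampal difference (rad)
remembered = np.column_stack([rng.normal(0.5, 0.6, n),
                              rng.normal(-0.4, 0.6, n),
                              rng.normal(0.9, 0.6, n)])
forgotten = np.column_stack([rng.normal(-0.5, 0.6, n),
                             rng.normal(0.4, 0.6, n),
                             rng.normal(-0.9, 0.6, n)])
X = np.vstack([remembered, forgotten])
y = np.array([1] * n + [0] * n)        # 1 = subsequently remembered

clf = SVC(kernel="rbf")                # two-class support vector machine
acc = cross_val_score(clf, X, y, cv=5).mean()
```

    In the study, feature selection (individual frequencies and time points) precedes this step; the sketch only shows the final two-class decision on three phase-derived numbers.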

  1. Dosimetric validation and clinical implementation of two 3D dose verification systems for quality assurance in volumetric-modulated arc therapy techniques.

    PubMed

    Clemente-Gutiérrez, Francisco; Pérez-Vara, Consuelo

    2015-03-08

    A pretreatment quality assurance program for volumetric techniques should include redundant calculations and measurement-based verifications. The patient-specific quality assurance process must be based on clinically relevant metrics. The aim of this study was to show the commissioning, clinical implementation, and comparison of two systems that allow performing a 3D redundant dose calculation. In addition, one of them is capable of reconstructing the dose on the patient anatomy from measurements taken with a 2D ion chamber array. Both systems were compared in terms of reference calibration data (absolute dose, output factors, percentage depth-dose curves, and profiles). Results were in good agreement for absolute dose values (discrepancies were below 0.5%) and output factors (mean differences were below 1%). Maximum mean discrepancies were located between 10 and 20 cm of depth for PDDs (-2.7%) and in the penumbra region for profiles (mean DTA of 1.5 mm). Validation of the systems was performed by comparing point-dose measurements with values obtained by the two systems for static and dynamic fields from the AAPM TG-119 report, and 12 real VMAT plans for different anatomical sites (differences better than 1.2%). Comparisons between measurements taken with a 2D ion chamber array and results obtained by both systems for real VMAT plans were also performed (mean global gamma passing rates better than 87.0% and 97.9% for the 2%/2 mm and 3%/3 mm criteria). Clinical implementation of the systems was evaluated by comparing dose-volume parameters for all TG-119 tests and real VMAT plans with TPS values (mean differences were below 1%). In addition, comparisons between dose distributions calculated by the TPS and those extracted by the two systems for real VMAT plans were also performed (mean global gamma passing rates better than 86.0% and 93.0% for the 2%/2 mm and 3%/3 mm criteria). The clinical use of both systems was successfully evaluated.
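
    The gamma passing rates quoted above combine a dose-difference tolerance with a distance-to-agreement tolerance. A simplified 1D global gamma analysis might look as follows (a sketch only, assuming both distributions are sampled at the same positions; commercial QA tools interpolate the evaluated distribution and work in 2D/3D):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dta_tol=3.0):
    """Global 1D gamma analysis: for each reference point take the minimum
    gamma over all evaluated points; the point passes if gamma <= 1.
    dose_tol is a fraction of the global maximum dose; dta_tol is in mm."""
    dmax = ref_dose.max()
    passed = 0
    for i, r in enumerate(ref_dose):
        dd = (eval_dose - r) / (dose_tol * dmax)        # dose-difference term
        dta = (positions - positions[i]) / dta_tol      # distance term
        passed += np.sqrt(dd ** 2 + dta ** 2).min() <= 1.0
    return 100.0 * passed / ref_dose.size

x = np.linspace(0, 100, 101)                  # positions (mm)
dose = np.exp(-0.5 * ((x - 50) / 15) ** 2)    # toy reference profile
rate = gamma_pass_rate(dose, 1.02 * dose, x)  # 2% uniform scaling error
```

    With the default 3%/3 mm criterion a uniform 2% dose error passes everywhere, which is why the tighter 2%/2 mm criterion is also reported in the abstract.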

  2. A vessel length-based method to compute coronary fractional flow reserve from optical coherence tomography images.

    PubMed

    Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo

    2017-06-26

    Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of the coronary arteries reconstructed from images acquired with various modalities. Because optical coherence tomography (OCT) images can provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. To simulate coronary hemodynamics, we developed a fast and accurate method that couples a three-dimensional (3D) computational fluid dynamics (CFD) model of an OCT-based region of interest (ROI) with a lumped parameter model (LPM) of the coronary microvasculature and veins, where the LPM is based on vessel lengths extracted from coronary X-ray angiography (CAG) images. The aim of this study is to perform OCT-based FFR (OCT-FFR) simulations with this combined model and to validate the method clinically. Based on the vessel length-based approach, we describe a theoretical formulation for the total resistance of the LPM from the 3D CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-FFR calculation clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of the LPM is introduced to accurately simulate the 3D CFD model of the ROI. The simulated FFR values compared well with clinically measured ones, showing the accuracy of the method. Moreover, the present method is fast in terms of computational time, enabling solutions to be obtained within clinically practical time frames in the hospital.
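
    The basic relationship between lumped resistances and FFR can be illustrated with a toy two-resistor model (a sketch only; the paper's LPM, with vessel length-based microvascular resistances coupled to a 3D CFD stenosis model, is far more detailed):

```python
def ffr_series(p_aortic, p_venous, r_stenosis, r_micro):
    """FFR from a two-resistor lumped model: the stenosis resistance in
    series with the microvascular resistance; distal coronary pressure is
    taken between the two resistors and FFR = Pd / Pa."""
    flow = (p_aortic - p_venous) / (r_stenosis + r_micro)
    p_distal = p_aortic - flow * r_stenosis
    return p_distal / p_aortic

ffr_mild = ffr_series(100.0, 0.0, 0.1, 1.0)    # mild stenosis
ffr_severe = ffr_series(100.0, 0.0, 1.0, 1.0)  # severe stenosis
```

    The sketch shows why the microvascular resistance assigned by the LPM matters: for the same stenosis, a smaller distal resistance draws more flow and lowers the computed FFR.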

  3. Fitness prospects: effects of age, sex and recruitment age on reproductive value in a long-lived seabird.

    PubMed

    Zhang, He; Rebke, Maren; Becker, Peter H; Bouwhuis, Sandra

    2015-01-01

    Reproductive value is an integrated measure of survival and reproduction fundamental to understanding life-history evolution and population dynamics, but little is known about intraspecific variation in reproductive value and factors explaining such variation, if any. By applying generalized additive mixed models to longitudinal individual-based data of the common tern Sterna hirundo, we estimated age-specific annual survival probability, breeding probability and reproductive performance, based on which we calculated age-specific reproductive values. We investigated effects of sex and recruitment age (RA) on each trait. We found age effects on all traits, with survival and breeding probability declining with age, while reproductive performance first improved with age before levelling off. We only found a very small, marginally significant, sex effect on survival probability, but evidence for decreasing age-specific breeding probability and reproductive performance with RA. As a result, males had slightly lower age-specific reproductive values than females, while birds of both sexes that recruited at the earliest ages of 2 and 3 years (i.e. 54% of the tern population) had somewhat higher fitness prospects than birds recruiting at later ages. While the RA effects on breeding probability and reproductive performance were statistically significant, these effects were not large enough to translate to significant effects on reproductive value. Age-specific reproductive values provided evidence for senescence, which came with fitness costs in a range of 17-21% for the sex-RA groups. Our study suggests that intraspecific variation in reproductive value may exist, but that, in the common tern, the differences are small. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.
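
    The backward recursion underlying age-specific reproductive value can be sketched as follows (a minimal, growth-rate-free sketch with invented survival and fecundity schedules, not the common tern estimates):

```python
def reproductive_values(survival, fecundity):
    """Age-specific reproductive value by backward recursion (ignoring
    population growth): RV[x] = m[x] + s[x] * RV[x+1], i.e. current
    reproduction plus survival-weighted future reproductive value."""
    rv = [0.0] * (len(fecundity) + 1)
    for x in reversed(range(len(fecundity))):
        rv[x] = fecundity[x] + survival[x] * rv[x + 1]
    return rv[:-1]

# hypothetical schedules: survival declines with age, reproduction levels off
rv = reproductive_values([0.9, 0.85, 0.7, 0.0], [0.5, 1.0, 1.2, 1.2])
```

    Because each age's value folds in all expected future reproduction, declines in late-life survival or breeding probability (senescence) show up directly as falling reproductive values, which is how the study quantifies fitness costs.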

  4. A Comparison of Inventoried and Measured U.S. Urban/Industrial Hg Emission Factors during the NOMADSS Experiment

    NASA Astrophysics Data System (ADS)

    Ambrose, J. L., II; Gratz, L.; Jaffe, D. A.; Apel, E. C.; Campos, T. L.; Flocke, F. M.; Guenther, A. B.; Hornbrook, R. S.; Karl, T.; Kaser, L.; Knapp, D. J.; Weinheimer, A. J.; Cantrell, C. A.; Mauldin, L.; Yuan, B.

    2014-12-01

    We performed an airborne survey of some large anthropogenic mercury (Hg) emission sources in the Southeast U.S. during the 2013 Nitrogen, Oxidants, Mercury and Aerosol Distribution, Sources, and Sinks (NOMADSS) experiment. The observations included speciated atmospheric Hg, and tracers of urban/industrial emissions and associated photochemistry (e.g., carbon monoxide, CO; carbon dioxide, CO2; sulfur dioxide, SO2; nitrogen oxides (NOx); volatile organic compounds, VOCs; ozone, O3; hydroxyl radical, HO·; sulfuric acid, H2SO4) and were made from the National Science Foundation's/National Center for Atmospheric Research's C-130 research aircraft. Mercury was measured using the University of Washington's Detector for Oxidized Hg Species. We derived Hg emission factors (EF) for several U.S. urban areas and large industrial point sources, including coal-fired power plants (CFPPs) in Louisiana, Pennsylvania, Texas, and West Virginia. We compared our measured Hg EFs with inventory-based values from two separate Hg emission inventories provided by the U.S. Environmental Protection Agency - the National Emissions Inventory (NEI) and the Toxics Release Inventory (TRI). We also performed an inter-comparison of the inventory-based Hg EFs. For the CFPPs sampled, we find that actual Hg emissions differed from inventoried values by more than a factor of two in some cases. Measured Hg EFs were weakly correlated with values reported in the NEI: m = 0.71; r2 = 0.47 (p = 0.06; n = 8), whereas EFs derived from the TRI were not meaningfully predictive of the measured values: m = -3.3; r2 = 0.61 (p < 0.05; n = 8). Median absolute differences between measured and inventory-based EFs were ≥50%, relative to the inventory values. The median absolute average difference between the Hg EFs reported in the two inventories was approximately 40%. Our results place quantitative constraints on uncertainties associated with the inventoried Hg emissions. 
Additionally, our results suggest that the current formulation of the Hg emission inventories critically limits our ability to accurately predict the transport and fate of U.S. urban/industrial emissions of Hg to the atmosphere. These findings are broadly relevant to the design and use of emission inventories for industrial hazardous air pollutants.
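
    The slope/r² comparison between measured and inventory-based EFs can be reproduced with an ordinary least-squares helper (a sketch; `slope_r2` is a hypothetical name and the data used in any example would be synthetic, not the NOMADSS values):

```python
import numpy as np

def slope_r2(x, y):
    """Least-squares slope and coefficient of determination r^2, the two
    statistics used above to compare measured and inventory-based EFs."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    m, b = np.polyfit(x, y, 1)                 # ordinary least squares
    residuals = y - (m * x + b)
    r2 = 1.0 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return m, r2
```

    A slope near 1 with high r² would indicate inventory EFs that track the measurements; a slope far from 1 (or negative, as for the TRI) signals systematic disagreement even when r² is nominally large.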

  5. Study of ultrasonic thermometry based on ultrasonic time-of-flight measurement

    NASA Astrophysics Data System (ADS)

    Jia, Ruixi; Xiong, Qingyu; Wang, Lijie; Wang, Kai; Shen, Xuehua; Liang, Shan; Shi, Xin

    2016-03-01

    Ultrasonic thermometry is a kind of acoustic pyrometry that has been evolving as a new temperature measurement technology for various environments. The accurate measurement of the ultrasonic time-of-flight is key to ultrasonic thermometry. In this paper, we study an ultrasonic thermometry technique based on ultrasonic time-of-flight measurement with a pair of ultrasonic transducers for transmitting and receiving the signal. The transducers are installed along the single path that the ultrasound travels. To validate the performance of ultrasonic thermometry, we compare the absolute error between the measured temperature value and the actual one. With and without a heat source, the experimental results indicate that ultrasonic thermometry achieves high precision of temperature measurement.
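
    For an ideal gas the link between time-of-flight and temperature follows from c = sqrt(γRT/M); a minimal sketch (assuming dry air and a known, fixed path length; real systems must also handle transducer delays and gas composition):

```python
import math

GAMMA_AIR = 1.4        # heat capacity ratio of dry air (assumed)
M_AIR = 0.028964       # molar mass of dry air, kg/mol (assumed)
R_GAS = 8.314462       # universal gas constant, J/(mol*K)

def temperature_from_tof(path_length_m, tof_s):
    """Gas temperature from the ultrasonic time-of-flight along a known
    path: c = L / t and, for an ideal gas, c = sqrt(gamma*R*T/M), so
    T = c**2 * M / (gamma * R)."""
    c = path_length_m / tof_s            # speed of sound, m/s
    return c * c * M_AIR / (GAMMA_AIR * R_GAS)
```

    Because temperature scales with the square of the inferred sound speed, small time-of-flight errors are amplified, which is why the abstract stresses accurate time-of-flight measurement.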

  6. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  7. Sensitivity analysis of TRX-2 lattice parameters with emphasis on epithermal /sup 238/U capture. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.

    1977-03-01

    The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of /sup 238/U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.

  8. A better gauge of corporate performance.

    PubMed

    Weber, D O

    2001-01-01

    Traditional methods of measuring organizational value aren't working very well. Instead, an organization's viability should be gauged from four perspectives, according to Robert S. Kaplan and David P. Norton, co-creators of the Balanced Scorecard. These perspectives--financial strength, customer service and satisfaction, internal operating efficiency, and learning and growth--become the underpinnings of a "balanced" tool with which leaders can assess corporate performance in the knowledge-based marketplace.

  9. Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback

    NASA Astrophysics Data System (ADS)

    Mehlführer, Christian; Caban, Sebastian; Rupp, Markus

    2007-12-01

    We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard compliant AMC utilizes a 3-bit feedback for SISO and Alamouti coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly for large SNR values. Our measurements show that at small SNR values, a single antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by an overly simple convolutional code.
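
    The AMC feedback mechanism amounts to picking an entry from a small rate table and reporting its index over a few feedback bits. A toy sketch (thresholds and rates below are invented for illustration, not taken from IEEE 802.16-2004):

```python
# illustrative AMC table: (SNR threshold in dB, scheme, relative rate);
# thresholds are invented, not from the standard
AMC_TABLE = [
    (6.0, "QPSK 1/2", 1.0),
    (12.0, "16-QAM 1/2", 2.0),
    (18.0, "16-QAM 3/4", 3.0),
    (24.0, "64-QAM 3/4", 4.5),
]

def select_amc(snr_db):
    """Pick the highest-rate scheme whose SNR threshold is met; the index
    of the chosen entry is what the few-bit feedback channel reports."""
    chosen = ("BPSK 1/2", 0.5)             # fallback at very low SNR
    for threshold, name, rate in AMC_TABLE:
        if snr_db >= threshold:
            chosen = (name, rate)
    return chosen
```

    With spatial multiplexing and per-antenna AMC, this selection runs once per transmit antenna, which is why the feedback grows from 3 to 6 bits in the measurements above.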

  10. Study on fast measurement of sugar content of yogurt using Vis/NIR spectroscopy techniques

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli

    2006-09-01

    To measure the sugar content of yogurt rapidly, a fast measurement method using Vis/NIR spectroscopy techniques was established. 25 samples selected separately from five different brands of yogurt were measured by Vis/NIR spectroscopy. The sugar content of the yogurt at the positions scanned by the spectrometer was measured with a sugar content meter. A mathematical model between sugar content and the Vis/NIR spectral measurements was developed based on partial least squares (PLS) regression. The correlation coefficient of sugar content based on the PLS model was more than 0.894, with a standard error of calibration (SEC) of 0.356 and a standard error of prediction (SEP) of 0.389. When quantitatively predicting the sugar content of 35 yogurt samples from the 5 different brands, the correlation coefficient between predicted and measured values was more than 0.934. The results show good to excellent prediction performance. The Vis/NIR spectroscopy technique determined the sugar content with high accuracy. It was concluded that the Vis/NIR measurement technique is reliable for fast measurement of the sugar content of yogurt, establishing a new method for measuring the sugar content of yogurt.

  11. SU-E-T-610: Phosphor-Based Fiber Optic Probes for Proton Beam Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darafsheh, A; Soldner, A; Liu, H

    2015-06-15

    Purpose: To investigate the feasibility of using fiber optic probes with rare-earth-based phosphor tips for proton beam radiation dosimetry. We designed and fabricated a fiber probe with submillimeter resolution (<0.5 mm3) based on TbF3 phosphors and evaluated its performance for measurement of proton beam profiles and range. Methods: The fiber optic probe with a TbF3 phosphor tip, embedded in tissue-mimicking phantoms, was irradiated with a double scattering proton beam with an energy of 180 MeV. Luminescence spectroscopy was performed with a CCD-coupled spectrograph to analyze the emission spectra of the fiber tip. In order to measure the spatial beam profile and percentage depth dose, we used the singular value decomposition method to spectrally separate the phosphor's ionoluminescence signal from the background Cerenkov radiation signal. Results: The spectra of the TbF3 fiber probe showed characteristic ionoluminescence emission peaks at 489, 542, 586, and 620 nm. By using singular value decomposition we found the contribution of the ionoluminescence signal to measure the percentage depth dose in phantoms and compared that with measurements performed with an ion chamber. We observed a quenching effect at the spread-out Bragg peak region, manifested as under-responding of the signal, due to the high LET of the beam. However, the beam profiles were not dramatically affected by the quenching effect. Conclusion: We have evaluated the performance of a fiber optic probe with submillimeter resolution for proton beam dosimetry. We demonstrated the feasibility of spectral separation of the Cerenkov radiation from the collected signal. Such fiber probes can be used for measurements of proton beam profiles and range. The experimental apparatus and spectroscopy method developed in this work provide a robust platform for characterization of proton-irradiated nanophosphor particles for ultralow fluence photodynamic therapy or molecular imaging applications.
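
    The spectral separation step can be illustrated as linear unmixing solved by least squares (NumPy's `lstsq` uses the SVD internally); the basis spectra below are illustrative shapes, not measured TbF3 or Cerenkov spectra:

```python
import numpy as np

def unmix(measured, phosphor, cerenkov):
    """Least-squares separation of a measured spectrum into phosphor
    ionoluminescence and Cerenkov components (lstsq solves via the SVD)."""
    A = np.column_stack([phosphor, cerenkov])
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs                        # [phosphor weight, Cerenkov weight]

wl = np.linspace(400.0, 700.0, 301)      # wavelength grid, nm
# toy basis spectra: narrow emission peaks vs. a smooth Cerenkov continuum
phosphor = sum(np.exp(-0.5 * ((wl - p) / 5.0) ** 2) for p in (489, 542, 586, 620))
cerenkov = (400.0 / wl) ** 2             # smooth, falls with wavelength
measured = 3.0 * phosphor + 1.5 * cerenkov
weights = unmix(measured, phosphor, cerenkov)
```

    Once the phosphor weight is isolated at each depth, the Cerenkov background drops out of the depth-dose curve, which is the role the decomposition plays in the measurement above.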

  12. Integrated Strategic Planning in a Learning-Centered Community College

    ERIC Educational Resources Information Center

    Kelley, Susan; Kaufman, Roger

    2007-01-01

    In learning-centered community colleges, planning, like all processes, must measurably improve learning and learner performance. This article shares Valencia Community College's approach to revising its strategic planning process based on the Organizational Elements Model to: 1) focus strategic planning on learning results that add value for…

  13. Examining Construct Validity of the Quantitative Literacy VALUE Rubric in College-Level STEM Assignments

    ERIC Educational Resources Information Center

    Gray, Julie S.; Brown, Melissa A.; Connolly, John P.

    2017-01-01

    Data-driven decision making is increasingly viewed as essential in a globally competitive society. Initiatives to augment standardized testing with performance-based assessment have increased as educators progressively respond to mandates for authentic measurement of student attainment. To meet this challenge, multidisciplinary rubrics were…

  14. Shifting Gears: Standards, Assessments, Curriculum, & Instruction.

    ERIC Educational Resources Information Center

    Dougherty, Eleanor

    This book is designed to help educators move from a system that measures students against students to one that values mastery of central concepts and skills, striving for proficiency in publicly acknowledged standards of academic performance. It aims to connect the operative parts of standards-based education (standards, assessment, curriculum,…

  15. Design of control laws for flutter suppression based on the aerodynamic energy concept and comparisons with other design methods

    NASA Technical Reports Server (NTRS)

    Nissim, Eli

    1990-01-01

    The aerodynamic energy method is used to synthesize control laws for NASA's drone for aerodynamic and structural testing-aerodynamic research wing 1 (DAST-ARW1) mathematical model. The performance of these control laws in terms of closed-loop flutter dynamic pressure, control surface activity, and robustness is compared with other control laws that relate to the same model. A control law synthesis technique that makes use of the return difference singular values is developed. It is based on the aerodynamic energy approach and is shown to yield results that are superior to those results given in the literature and are based on optimal control theory. Nyquist plots are presented, together with a short discussion regarding the relative merits of the minimum singular value as a measure of robustness as compared with the more traditional measure involving phase and gain margins.
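
    The robustness measure discussed above can be computed directly: the minimum singular value of the return difference matrix I + L at a given frequency. A minimal sketch for a constant loop-gain matrix (real systems evaluate L(jω) over a frequency grid):

```python
import numpy as np

def min_return_difference_sv(L):
    """Minimum singular value of the return difference matrix I + L, a
    multivariable robustness measure: a larger minimum singular value means
    the loop tolerates larger simultaneous gain and phase perturbations."""
    I = np.eye(L.shape[0])
    return np.linalg.svd(I + L, compute_uv=False).min()
```

    Unlike classical phase and gain margins, which perturb one loop at a time, this single number bounds simultaneous perturbations in all loops, which is the comparison the abstract draws.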

  16. Design of control laws for flutter suppression based on the aerodynamic energy concept and comparisons with other design methods

    NASA Technical Reports Server (NTRS)

    Nissim, E.

    1989-01-01

    The aerodynamic energy method is used in this paper to synthesize control laws for NASA's Drone for Aerodynamic and Structural Testing-Aerodynamic Research Wing 1 (DAST-ARW1) mathematical model. The performance of these control laws in terms of closed-loop flutter dynamic pressure, control surface activity, and robustness is compared against other control laws that appear in the literature and relate to the same model. A control law synthesis technique that makes use of the return difference singular values is developed in this paper. It is based on the aerodynamic energy approach and is shown to yield results superior to those given in the literature and based on optimal control theory. Nyquist plots are presented together with a short discussion regarding the relative merits of the minimum singular value as a measure of robustness, compared with the more traditional measure of robustness involving phase and gain margins.

  17. [Nitrogen stress measurement of canola based on multi-spectral charged coupled device imaging sensor].

    PubMed

    Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong

    2006-09-01

    Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by estimating the SPAD value of the canola from canopy reflectance sensed in the three channels of the multi-spectral camera. The core of this investigation is the calibration method relating the multi-spectral measurements to the nitrogen levels in crops measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
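
    One plausible form of such a calibration is a least-squares fit of SPAD readings against vegetation indices derived from the three channels (a sketch; `fit_spad_model`, `predict_spad`, and the NDVI/GNDVI predictors are illustrative assumptions, not the paper's actual calibration):

```python
import numpy as np

def fit_spad_model(green, red, nir, spad):
    """Least-squares calibration relating multi-spectral reflectance to
    SPAD readings via NDVI-like ratios (common chlorophyll proxies)."""
    ndvi = (nir - red) / (nir + red)
    gndvi = (nir - green) / (nir + green)
    A = np.column_stack([ndvi, gndvi, np.ones_like(ndvi)])
    coeffs, *_ = np.linalg.lstsq(A, spad, rcond=None)
    return coeffs

def predict_spad(coeffs, green, red, nir):
    """Apply the fitted linear model to new three-channel reflectances."""
    ndvi = (nir - red) / (nir + red)
    gndvi = (nir - green) / (nir + green)
    return coeffs[0] * ndvi + coeffs[1] * gndvi + coeffs[2]
```

    Using band ratios rather than raw reflectances makes the estimate less sensitive to overall illumination changes in the field, which matters for a camera-based sensor.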

  18. Elastic Moduli of Pyrolytic Boron Nitride Measured Using 3-Point Bending and Ultrasonic Testing

    NASA Technical Reports Server (NTRS)

    Kaforey, M. L.; Deeb, C. W.; Matthiesen, D. H.; Roth, D. J.

    1999-01-01

    Three-point bending and ultrasonic testing were performed on a flat plate of PBN. In the bending experiment, the deformation mechanism was believed to be shear between the pyrolytic layers, which yielded a shear modulus, c44, of 2.60 plus or minus 0.31 GPa. Calculations based on the longitudinal and shear wave velocity measurements yielded values of 0.341 plus or minus 0.006 for Poisson's ratio, 10.34 plus or minus 0.30 GPa for the elastic modulus (c33), and 3.85 plus or minus 0.02 GPa for the shear modulus (c44). Since free basal dislocations have been reported to affect the value of c44 found using ultrasonic methods, the value from the bending experiment was assumed to be the more accurate value.
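
    The ultrasonic route from wave speeds to moduli can be sketched as follows (the Poisson's-ratio formula is the isotropic one, so for a strongly anisotropic material like PBN it is only indicative; the example numbers in the test are round values, not the paper's data):

```python
def elastic_from_velocities(density, v_long, v_shear):
    """Elastic constants from ultrasonic wave speeds: c33 = rho * vL**2,
    c44 = rho * vS**2, plus the isotropic Poisson's ratio from vL/vS.
    Inputs in kg/m^3 and m/s give moduli in Pa."""
    c33 = density * v_long ** 2          # longitudinal modulus
    c44 = density * v_shear ** 2         # shear modulus
    nu = (v_long ** 2 - 2 * v_shear ** 2) / (2 * (v_long ** 2 - v_shear ** 2))
    return c33, c44, nu
```

    The moduli scale with the square of the velocities, so a 1% timing error in the ultrasonic measurement produces roughly a 2% modulus error.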

  19. Value-Based Payment Reform and the Medicare Access and Children's Health Insurance Program Reauthorization Act of 2015: A Primer for Plastic Surgeons.

    PubMed

    Squitieri, Lee; Chung, Kevin C

    2017-07-01

    In 2015, the U.S. Congress passed the Medicare Access and Children's Health Insurance Program Reauthorization Act, which effectively repealed the Centers for Medicare and Medicaid Services sustainable growth rate formula and established the Centers for Medicare and Medicaid Services Quality Payment Program. The Medicare Access and Children's Health Insurance Program Reauthorization Act represents an unparalleled acceleration toward value-based payment models and a departure from traditional volume-driven fee-for-service reimbursement. The Quality Payment Program includes two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. The Merit-Based Incentive Payment System pathway replaces existing quality reporting programs and adds several new measures to create a composite performance score for each provider (or provider group) that will be used to adjust reimbursed payment. The Advanced Alternative Payment Model pathway is available to providers who participate in qualifying Advanced Alternative Payment Models and is associated with an initial 5 percent payment incentive. The first performance period for the Merit-Based Incentive Payment System opens January 1, 2017, and closes on December 31, 2017, and is associated with payment adjustments in January of 2019. The Centers for Medicare and Medicaid Services estimates that the majority of providers will begin participation in 2017 through the Merit-Based Incentive Payment System pathway, but aims to have 50 percent of payments tied to quality or value through Advanced Alternative Payment Models by 2018. In this article, the authors describe key components of the Medicare Access and Children's Health Insurance Program Reauthorization Act for providers navigating the Quality Payment Program and discuss how plastic surgeons may optimize their performance in this new value-based payment program.

  20. Manifold absolute pressure estimation using neural network with hybrid training algorithm

    PubMed Central

    Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value. PMID:29190779
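
    The core idea, a feed-forward network mapping (throttle, engine speed) to MAP, can be sketched with scikit-learn. This is a stand-in only: the paper's hybrid LM/BR/PSO training is replaced here by a plain L-BFGS-trained network, and the MAP surface below is invented, not engine data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 400
throttle = rng.uniform(0.0, 100.0, n)     # throttle position, %
rpm = rng.uniform(800.0, 8000.0, n)       # engine speed, rev/min
# invented MAP surface (kPa) standing in for real sensor data
map_kpa = 30.0 + 0.6 * throttle + 0.002 * rpm

X = np.column_stack([throttle / 100.0, rpm / 8000.0])   # scaled inputs
model = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X[:300], map_kpa[:300])                        # training set
rmse = np.sqrt(((model.predict(X[300:]) - map_kpa[300:]) ** 2).mean())
```

    Scaling the two inputs to comparable ranges before training is the same preprocessing concern the estimator faces regardless of which training algorithm is used.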

  1. Flow performance in MPD at NICA

    NASA Astrophysics Data System (ADS)

    Svintsov, I. A.; Parfenov, P. E.; Selyuzhenkov, I. V.; Taranenko, A. V.

    2017-01-01

    The Nuclotron-based Ion Collider facility (NICA) in Dubna, Russia is currently under construction at the Joint Institute for Nuclear Research (JINR). A Multi Purpose Detector (MPD) at NICA is designed to study properties of dense baryonic matter in the range of center-of-mass collision energies from 4 to 11 GeV. We present a performance study for anisotropic transverse flow measurement in Au+Au collisions using the UrQMD event generator and a Geant4 simulation of the MPD response. The collision symmetry plane is estimated from the event-by-event transverse energy distribution in the Forward Hadron Calorimeters (FHCals). Performance of the MPD for a measurement of the directed (v1) and elliptic (v2) flow of identified charged hadrons is evaluated based on a comparison between the reconstructed v1 and v2 values and the input values from the UrQMD model.
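
    The flow coefficients themselves are event averages of cosines of azimuthal angles relative to the symmetry plane, v_n = <cos(n(φ − Ψ))>. A minimal sketch with a toy particle sample (the input v1, v2 values and the rejection-sampled distribution are illustrative, not UrQMD output):

```python
import numpy as np

def flow_coefficients(phi, psi):
    """Directed and elliptic flow relative to the symmetry plane Psi:
    v_n = <cos(n * (phi - Psi))>, averaged over particles."""
    dphi = np.asarray(phi) - psi
    return np.cos(dphi).mean(), np.cos(2.0 * dphi).mean()

# toy event sample drawn from dN/dphi ~ 1 + 2*v1*cos(dphi) + 2*v2*cos(2*dphi)
rng = np.random.default_rng(3)
v1_in, v2_in = 0.05, 0.08
bound = 1.0 + 2.0 * v1_in + 2.0 * v2_in        # maximum of the distribution
phi = []
while len(phi) < 20000:                         # rejection sampling
    x = rng.uniform(-np.pi, np.pi)
    if rng.uniform(0.0, bound) < 1.0 + 2.0 * v1_in * np.cos(x) + 2.0 * v2_in * np.cos(2.0 * x):
        phi.append(x)
v1, v2 = flow_coefficients(phi, 0.0)
```

    In the detector study, Ψ is not known exactly but estimated from the FHCal transverse energy distribution, so the reconstructed v1 and v2 must additionally be corrected for the symmetry-plane resolution.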

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marion, Bill; Smith, Benjamin

    Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. The method used back-solves for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the micro-inverter ac production data. When the derived values of DNI and DHI were then used to model the performance of other PV systems, the annual mean bias deviations were within +/- 4%, and only 1% greater than when the PV performance was modeled using high-quality irradiance measurements. An uncertainty analysis shows the method is better suited for modeling PV performance than using satellite-based global horizontal irradiance.
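
    The back-solving idea can be illustrated on irradiance directly (a simplification: the paper starts from micro-inverter ac power, which adds module and inverter models on top). With two differently oriented planes and an isotropic sky diffuse model, DNI and DHI follow from a small linear system:

```python
import numpy as np

def solve_dni_dhi(poa, aoi_deg, tilt_deg):
    """Back-solve DNI and DHI from plane-of-array irradiance on two or more
    differently oriented planes, with an isotropic sky diffuse model:
    POA = DNI * cos(AOI) + DHI * (1 + cos(tilt)) / 2."""
    aoi = np.radians(aoi_deg)
    tilt = np.radians(tilt_deg)
    A = np.column_stack([np.cos(aoi), (1.0 + np.cos(tilt)) / 2.0])
    (dni, dhi), *_ = np.linalg.lstsq(A, np.asarray(poa, float), rcond=None)
    return dni, dhi
```

    The system is only well conditioned when the planes see meaningfully different beam and diffuse fractions, which is one reason fleets of diversely oriented rooftop arrays are attractive as a distributed irradiance sensor.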

  3. Optimizing value utilizing Toyota Kata methodology in a multidisciplinary clinic.

    PubMed

    Merguerian, Paul A; Grady, Richard; Waldhausen, John; Libby, Arlene; Murphy, Whitney; Melzer, Lilah; Avansino, Jeffrey

    2015-08-01

    Value in healthcare is measured in terms of patient outcomes achieved per dollar expended. Outcomes and cost must be measured at the patient level to optimize value. Multidisciplinary clinics have been shown to be effective in providing coordinated and comprehensive care with improved outcomes, yet tend to have higher cost than typical clinics. We sought to lower individual patient cost and optimize value in a pediatric multidisciplinary reconstructive pelvic medicine (RPM) clinic. The RPM clinic is a multidisciplinary clinic that takes care of patients with anomalies of the pelvic organs. The specialties involved include Urology, General Surgery, Gynecology, and Gastroenterology/Motility. From May 2012 to November 2014 we performed time-driven activity-based costing (TDABC) analysis by measuring provider time for each step in the patient flow. Using observed time and the estimated hourly cost of each of the providers we calculated the final cost at the individual patient level, targeting clinic preparation. We utilized Toyota Kata methodology to enhance operational efficiency in an effort to optimize value. Variables measured included cost, time to perform a task, number of patients seen in clinic, percent value-added time (VAT) to patients (face to face time) and family experience scores (FES). At the beginning of the study period, clinic costs were $619 per patient. We reduced conference time from 6 min/patient to 1 min per patient, physician preparation time from 8 min to 6 min and increased Medical Assistant (MA) preparation time from 9.5 min to 20 min, achieving a cost reduction of 41% to $366 per patient. Continued improvements further reduced the MA preparation time to 14 min and the MD preparation time to 5 min with a further cost reduction to $194 (69%) (Figure). During this study period, we increased the number of appointments per clinic. 
    We demonstrated sustained improvement in FES with regard to families' overall experience with their providers. Value-added time increased from 60% to 78%, although this change was not statistically significant. Time-based cost analysis effectively measures individualized patient cost. We achieved a 69% reduction in clinic preparation costs. Despite this reduction in costs, we were able to maintain VAT and sustain improvements in family experience. In caring for complex patients, lean management methodology enables optimization of value in a multidisciplinary clinic. Copyright © 2015. Published by Elsevier Ltd.
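    The TDABC approach described above multiplies the observed time for each process step by the per-minute cost of the provider performing it, then sums across steps to get an individual patient cost. A minimal sketch, using the step times quoted in the abstract but with hypothetical hourly rates (the abstract reports times, not rates):

```python
# Minimal time-driven activity-based costing (TDABC) sketch.
# Hourly rates are illustrative placeholders, NOT the study's figures.
HOURLY_RATE = {"physician": 150.0, "medical_assistant": 30.0}  # assumed $/hour

def step_cost(provider: str, minutes: float) -> float:
    """Cost of one process step: provider time x per-minute cost rate."""
    return HOURLY_RATE[provider] / 60.0 * minutes

def patient_cost(steps) -> float:
    """Total per-patient cost over all (provider, minutes) steps."""
    return sum(step_cost(provider, minutes) for provider, minutes in steps)

# Baseline clinic-preparation steps for one patient: conference (6 min) and
# preparation (8 min) by the physician, preparation (9.5 min) by the MA.
baseline = [("physician", 6.0), ("physician", 8.0), ("medical_assistant", 9.5)]
print(patient_cost(baseline))  # 39.75 under the assumed rates
```

Reducing the minutes in any step lowers the patient-level cost directly, which is what makes TDABC a natural fit for the Toyota Kata improvement cycles described above.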

  4. Safety-net hospitals more likely than other hospitals to fare poorly under Medicare's value-based purchasing.

    PubMed

    Gilman, Matlin; Adams, E Kathleen; Hockenberry, Jason M; Milstein, Arnold S; Wilson, Ira B; Becker, Edmund R

    2015-03-01

    Medicare's value-based purchasing (VBP) program potentially puts safety-net hospitals at a financial disadvantage compared to other hospitals. In 2014, the second year of the program, patient mortality measures were added to the VBP program's algorithm for assigning penalties and rewards. We examined whether the inclusion of mortality measures in the second year of the program had a disproportionate impact on safety-net hospitals nationally. We found that safety-net hospitals were more likely than other hospitals to be penalized under the VBP program as a result of their poorer performance on process and patient experience scores. In 2014, 63 percent of safety-net hospitals versus 51 percent of all other sample hospitals received payment rate reductions under the program. However, safety-net hospitals' performance on mortality measures was comparable to that of other hospitals, with an average VBP survival score of thirty-two versus thirty-one among other hospitals. Although safety-net hospitals are still more likely than other hospitals to fare poorly under the VBP program, increasing the weight given to mortality in the VBP payment algorithm would reduce this disadvantage. Project HOPE—The People-to-People Health Foundation, Inc.

  5. Commissioning and quality assurance for the treatment delivery components of the AccuBoost system.

    PubMed

    Iftimia, Ileana; Talmadge, Mike; Ladd, Ron; Halvorsen, Per

    2015-03-08

    The objective for this work was to develop a commissioning methodology for the treatment delivery components of the AccuBoost system, as well as to establish a routine quality assurance program and appropriate guidance for clinical use based on the commissioning results. Various tests were developed: 1) assessment of the accuracy of the displayed separation value; 2) validation of the dwell positions within each applicator; 3) assessment of the accuracy and precision of the applicator localization system; 4) assessment of the combined dose profile of two opposed applicators to confirm that they are coaxial; 5) measurement of the absolute dose delivered with each applicator to confirm acceptable agreement with dose based on Monte Carlo modeling; 6) measurements of the skin-to-center dose ratio using optically stimulated luminescence dosimeters; and 7) assessment of the mammopad cushion's effect on the center dose. We found that the difference between the measured and the actual paddle separation is < 0.1 cm for the separation range of 3 cm to 7.5 cm. Radiochromic film measurements demonstrated that the number of dwell positions inside the applicators agree with the values from the vendor, for each applicator type and size. The shift needed for a good applicator-grid alignment was within 0.2 cm. The dry-run test using film demonstrated that the shift of the dosimetric center is within 0.15 cm. Dose measurements in water converted to polystyrene agreed within 5.0% with the Monte Carlo data in polystyrene for the same applicator type, size, and depth. A solid water-to-water (phantom) factor was obtained for each applicator, and all future annual quality assurance tests will be performed in solid water using an average value of 1.07 for the solid water-to-water factor. The skin-to-center dose ratio measurements support the Monte Carlo-based values within 5.0% agreement. 
For the treatment separation range of 4 cm to 8 cm, the change in center dose would be < 1.0% for all applicators when using a compressed pad of 0.2 cm to 0.3 cm. The tests performed ensured that all treatment components of the AccuBoost system are functional and that a treatment plan can be delivered with acceptable accuracy. Based on the commissioning results, a quality assurance manual and guidance documents for clinical use were developed.

  6. An ultra-sensitive DeltaR/R measurement system for biochemical sensors using piezoresistive micro-cantilevers.

    PubMed

    Nag, Sudip; Kale, Nitin S; Rao, V; Sharma, Dinesh K

    2009-01-01

    Piezoresistive micro-cantilevers are interesting bio-sensing tools whose base resistance (R) changes by a few parts per million (DeltaR) when deflected. Measuring such a small deviation has always been a challenge due to noise. This paper presents an advanced and reliable DeltaR/R measurement scheme that can sense resistance changes down to 6 parts per million. The measurement scheme includes half-bridge-connected micro-cantilevers with mismatch compensation, precision op-amp-based filters and amplifiers, and a lock-in-amplifier-based detector. The actuating sine wave is applied from a function generator, and the output DC voltage is displayed on a digital multimeter. Calibration was performed and the instrument sensitivity calculated. An experimental setup using a probe station is discussed that demonstrates the combined performance of the measurement system and SU8-polysilicon cantilevers, and the deflection sensitivity of these polymeric cantilevers is calculated. The system will be highly useful for detecting biomarkers such as myoglobin and troponin, which are released into the blood during or after heart attacks.

  7. Assessing Therapist Competence: Development of a Performance-Based Measure and Its Comparison With a Web-Based Measure.

    PubMed

    Cooper, Zafra; Doll, Helen; Bailey-Straebler, Suzanne; Bohn, Kristin; de Vries, Dian; Murphy, Rebecca; O'Connor, Marianne E; Fairburn, Christopher G

    2017-10-31

    Recent research interest in how best to train therapists to deliver psychological treatments has highlighted the need for rigorous, but scalable, means of measuring therapist competence. There are at least two components involved in assessing therapist competence: the assessment of their knowledge of the treatment concerned, including how and when to use its strategies and procedures, and an evaluation of their ability to apply such knowledge skillfully in practice. While the assessment of therapists' knowledge has the potential to be completed efficiently on the Web, the assessment of skill has generally involved a labor-intensive process carried out by clinicians, and as such, may not be suitable for assessing training outcome in certain circumstances. The aims of this study were to develop and evaluate a role-play-based measure of skill suitable for assessing training outcome and to compare its performance with a highly scalable Web-based measure of applied knowledge. Using enhanced cognitive behavioral therapy (CBT-E) for eating disorders as an exemplar, clinical scenarios for role-play assessment were developed and piloted together with a rating scheme for assessing trainee therapists' performance. These scenarios were evaluated by examining the performance of 93 therapists from different professional backgrounds and at different levels of training in implementing CBT-E. These therapists also completed a previously developed Web-based measure of applied knowledge, and the ability of the Web-based measure to efficiently predict competence on the role-play measure was investigated. The role-play measure assessed performance at implementing a range of CBT-E procedures. The majority of the therapists rated their performance as moderately or closely resembling their usual clinical performance. Trained raters were able to achieve good-to-excellent reliability for averaged competence, with intraclass correlation coefficients ranging from .653 to .909. 
The measure was also sensitive to change, with scores being significantly higher after training than before, as might be expected (mean difference 0.758, P<.001), even when taking account of repeated data (mean difference 0.667, P<.001). The major shortcoming of the role-play measure was that it required considerable time and resources; this shortcoming is inherent in the method. Given this, it is of particular interest for assessing training outcome that scores on the Web-based measure efficiently predicted therapist competence as judged by the role-play measure (the Web-based measure had a positive predictive value of 77% and specificity of 78%). The results of this study suggest that while it was feasible and acceptable to assess performance using the newly developed role-play measure, the highly scalable Web-based measure could be used in certain circumstances as a substitute for the more labor-intensive, and hence more costly, role-play method. ©Zafra Cooper, Helen Doll, Suzanne Bailey-Straebler, Kristin Bohn, Dian de Vries, Rebecca Murphy, Marianne E O'Connor, Christopher G Fairburn. Originally published in JMIR Mental Health (http://mental.jmir.org), 31.10.2017.

  8. Gas exchange rates measured using a dual-tracer (SF6 and 3He) method in the coastal waters of Korea

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Woo; Lee, Kitack; Kaown, Duk-In

    2008-03-01

    Over a period of 5 days between August 12 and 17, 2005, we performed a gas exchange experiment using the dual-tracer method in a tidal coastal ocean located off the southern coast of Korea. The gas exchange rate was determined from temporal changes in the ratio of 3He to SF6 measured daily in the surface mixed layer. The measured gas exchange rate (kCO2), normalized to a Schmidt number of 600 for CO2 in fresh water at 20°C, was approximately 5.0 cm h-1 at a mean wind speed of 3.9 m s-1 during the study period. This value is significantly less than those obtained from floating chamber-based experiments performed previously in estuarine environments, but is similar in magnitude to values obtained using the dual-tracer method in river and tidal coastal waters and values predicted on the basis of the relationship between the gas exchange rate and wind speed (Wanninkhof 1992), which is generally applicable to the open ocean. Our result is also consistent with the relationship of Raymond and Cole (2001), which was derived from experiments carried out in estuarine environments using 222Rn and chlorofluorocarbons along with measurements undertaken in the Hudson River, USA, using SF6 and 3He. Our results indicate that tidal action in a microtidal region did not discernibly enhance the measured kCO2 value.
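    The Schmidt-number normalization mentioned above (reporting k at Sc = 600, i.e. CO2 in fresh water at 20°C) can be sketched as follows, assuming the conventional k ∝ Sc^(-1/2) scaling for an unbroken wavy surface; the exponent and the example Schmidt number are generic assumptions, not values from the study:

```python
def k600(k_meas_cm_per_h: float, sc_meas: float, n: float = 0.5) -> float:
    """Normalize a measured gas transfer velocity to Schmidt number 600
    (CO2 in fresh water at 20 degC), assuming k scales as Sc**(-n)."""
    # k_meas / k600 = (sc_meas / 600)**(-n)  =>  k600 = k_meas * (sc_meas/600)**n
    return k_meas_cm_per_h * (sc_meas / 600.0) ** n

# Illustrative: a velocity measured where Sc = 900, expressed at Sc = 600.
print(k600(4.0, 900.0))  # ~4.9 cm/h
```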

  9. Estimation of 1RM for knee extension based on the maximal isometric muscle strength and body composition.

    PubMed

    Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo

    2017-11-01

    [Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
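    The single-regression formula reported above can be applied directly. A minimal sketch, using the coefficients quoted in the abstract (the input value below is purely illustrative):

```python
def estimate_1rm(isometric_kgf: float) -> float:
    """Estimate knee-extension 1RM (kg) from maximal isometric muscle
    strength (kgf) measured with a hand-held dynamometer, using the
    single-regression formula reported in the abstract:
        1RM (kg) = 0.714 + 0.783 * strength (kgf)
    """
    return 0.714 + 0.783 * isometric_kgf

# Illustrative input: 30 kgf of maximal isometric knee-extension strength.
print(estimate_1rm(30.0))  # 24.204 kg
```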

  10. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes the performance of Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
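    For context on the comparison methods named above, a simplified Huang-Wang-style fuzzy threshold search can be sketched as below: each pixel gets a membership to its class mean, and the threshold minimizing the total fuzzy (Shannon) entropy is chosen. This is a generic illustration of fuzzy-entropy thresholding, not the paper's own algorithm:

```python
import math

def fuzzy_threshold(pixels):
    """Huang-Wang-style fuzzy threshold selection (simplified sketch).
    For each candidate threshold t, pixels are assigned a membership to
    their class mean; the t minimizing total fuzzy entropy is returned."""
    C = (max(pixels) - min(pixels)) or 1  # normalizer for memberships
    best_t, best_entropy = None, float("inf")
    for t in range(min(pixels), max(pixels)):
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        if not lo or not hi:
            continue
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        total = 0.0
        for p in pixels:
            m = m0 if p <= t else m1
            u = 1.0 / (1.0 + abs(p - m) / C)  # membership in [0.5, 1]
            if 0.0 < u < 1.0:  # Shannon entropy of the membership
                total += -u * math.log(u) - (1 - u) * math.log(1 - u)
        if total < best_entropy:
            best_entropy, best_t = total, t
    return best_t

# Illustrative bimodal "image": dark cluster near 10-12, bright near 198-205.
print(fuzzy_threshold([10, 11, 12, 198, 200, 205] * 5))
```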

  12. Evaluating droplet digital PCR for the quantification of human genomic DNA: converting copies per nanoliter to nanograms nuclear DNA per microliter.

    PubMed

    Duewer, David L; Kline, Margaret C; Romsos, Erica L; Toman, Blaza

    2018-05-01

    The highly multiplexed polymerase chain reaction (PCR) assays used for forensic human identification perform best when used with an accurately determined quantity of input DNA. To help ensure the reliable performance of these assays, we are developing a certified reference material (CRM) for calibrating human genomic DNA working standards. To enable sharing information over time and place, CRMs must provide accurate and stable values that are metrologically traceable to a common reference. We have shown that droplet digital PCR (ddPCR) limiting dilution end-point measurements of the concentration of DNA copies per volume of sample can be traceably linked to the International System of Units (SI). Unlike values assigned using conventional relationships between ultraviolet absorbance and DNA mass concentration, entity-based ddPCR measurements are expected to be stable over time. However, the forensic community expects DNA quantity to be stated in terms of mass concentration rather than entity concentration. The transformation can be accomplished given SI-traceable values and uncertainties for the number of nucleotide bases per human haploid genome equivalent (HHGE) and the average molar mass of a nucleotide monomer in the DNA polymer. This report presents the considerations required to establish the metrological traceability of ddPCR-based mass concentration estimates of human nuclear DNA. Graphical abstract: the roots of metrological traceability for human nuclear DNA mass concentration results; values for some factors must be established experimentally, while values for others have been established from authoritative source materials. HHGE stands for "haploid human genome equivalent"; there are two HHGE per diploid human genome.
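    The entity-to-mass transformation described above amounts to multiplying the copy concentration by the mass of one HHGE, which is the nucleotide count times the average nucleotide molar mass divided by the Avogadro constant. A sketch with rough placeholder values for the genome size and molar mass (the SI-traceable values and their uncertainties are what the work itself establishes):

```python
AVOGADRO = 6.02214076e23  # mol^-1 (exact SI defining constant)

def copies_per_nl_to_ng_per_ul(copies_per_nl: float,
                               nucleotides_per_hhge: float = 6.2e9,
                               g_per_mol_nucleotide: float = 325.0) -> float:
    """Convert a ddPCR entity concentration (copies/nL) to a nuclear DNA
    mass concentration (ng/uL). The default genome size (~3.1e9 base
    pairs, double-stranded) and average nucleotide molar mass are rough
    illustrative values, NOT the certified figures."""
    grams_per_copy = nucleotides_per_hhge * g_per_mol_nucleotide / AVOGADRO
    copies_per_ul = copies_per_nl * 1000.0  # 1 uL = 1000 nL
    return copies_per_ul * grams_per_copy * 1e9  # grams -> nanograms

# Illustrative: 1 copy/nL of haploid genome equivalents.
print(copies_per_nl_to_ng_per_ul(1.0))  # ~3.3 ng/uL
```

With these placeholder constants one HHGE weighs roughly 3.3 pg, consistent with the commonly quoted mass of a human haploid genome.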

  13. Nonstructural urban stormwater quality measures: building a knowledge base to improve their use.

    PubMed

    Taylor, André C; Fletcher, Tim D

    2007-05-01

    This article summarizes a research project that investigated the use, performance, cost, and evaluation of nonstructural measures to improve urban stormwater quality. A survey of urban stormwater managers from Australia, New Zealand, and the United States revealed a widespread trend of increasing use of nonstructural measures among leading stormwater management agencies, with at least 76% of 41 types of nonstructural measures being found to be increasing in use. Data gathered from the survey, an international literature review, and a multicriteria analysis highlighted four nonstructural measures of greatest potential value: mandatory town planning controls that promote the adoption of low-impact development principles and techniques; development of strategic urban stormwater management plans for a city, shire, or catchment; stormwater management measures and programs for construction/building sites; and stormwater management activities related to municipal maintenance operations such as maintenance of the stormwater drainage network and manual litter collections. Knowledge gained on the use and performance of nonstructural measures from the survey, literature review, and three trial evaluation projects was used to develop tailored monitoring and evaluation guidelines for these types of measure. These guidelines incorporate a new evaluation framework based on seven alternative styles of evaluation that range from simply monitoring whether a nonstructural measure has been fully implemented to monitoring its impact on waterway health. This research helps to build the stormwater management industry's knowledge base concerning nonstructural measures and provides a practical tool to address common impediments associated with monitoring and evaluating the performance and cost of these measures.

  14. Diffraction based overlay metrology for α-carbon applications

    NASA Astrophysics Data System (ADS)

    Saravanan, Chandra Saru; Tan, Asher; Dasari, Prasad; Goelzer, Gary; Smith, Nigel; Woo, Seouk-Hoon; Shin, Jang Ho; Kang, Hyun Jae; Kim, Ho Chul

    2008-03-01

    Applications that require overlay measurement between layers separated by absorbing interlayer films (such as α-carbon) pose significant challenges for sub-50 nm processes. In this paper, scatterometry methods are investigated as an alternative means of meeting these stringent overlay metrology requirements: a spectroscopic diffraction-based overlay (DBO) measurement technique is used in which registration errors are extracted from specially designed diffraction targets. DBO measurements are performed on a detailed set of wafers with varying α-carbon (ACL) thicknesses, and the correlation in overlay values between these wafers is discussed. The total measurement uncertainty (TMU) requirements for these layers are discussed and the DBO TMU results from sub-50 nm samples are reviewed.

  15. Modifying beliefs about back pain: A pilot study among healthcare professionals.

    PubMed

    Monnin, Dominique; Courvoisier, Delphine S; Genevay, Stéphane

    2016-04-01

    This study aimed to explore whether a preventive intervention based on the non-injury model and the biopsychosocial model of back pain succeeded in shifting beliefs toward less negative representations and in decreasing fear-avoidance beliefs related to back pain. One hundred and one healthcare professionals took part in a 10-h educational program held over 2 consecutive days, based on the key messages of the "Back Book." Baseline values were measured 6 weeks before the intervention and when it started. Follow-up was performed at the end of the intervention and 6 months later. No significant changes were observed between baseline values and values measured at the beginning of the intervention, but participants' beliefs about low back pain (LBP) changed significantly after the program, and the benefit remained at the 6-month follow-up. A prevention program based on the non-injury and biopsychosocial models of LBP, introducing empowerment and problem-solving strategies, significantly reduced fear-avoidance and negative beliefs about LBP. The change was clinically relevant and thus could decrease direct and indirect healthcare costs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Biometric identification based on novel frequency domain facial asymmetry measures

    NASA Astrophysics Data System (ADS)

    Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-03-01

    In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.

  17. Airborne sound insulation evaluation and flanking path prediction of coupled room

    NASA Astrophysics Data System (ADS)

    Tassia, R. D.; Asmoro, W. A.; Arifianto, D.

    2016-11-01

    One of the parameters used to assess acoustic comfort is the sound insulation value of the partitions in a classroom. The insulation value can be expressed as sound transmission loss, which is converted into a single-number rating, the weighted sound reduction index (Rw, DnTw), together with spectrum adaptation terms for low frequencies (C, Ctr). In this study, measurements were performed at two positions at each point using a BSWA microphone and a dodecahedron loudspeaker as the sound source. The field measurements give an acoustic insulation value (DnTw + C) of 19.6 dB, so the partition wall does not meet the standard, which requires DnTw + C > 51 dB; the partition therefore needs to be redesigned to improve the acoustic insulation of the classroom. The redesign considered gypsum board, plasterboard, cement board, and PVC as replacement materials, and all of these materials were simulated against the established standards. The best insulation was achieved with cement board: an insulation value of 69 dB with 12.5 mm panels on each side and 50 mm of absorber material. Many factors increase the acoustic insulation value, such as panel thickness, added absorber material, density, and the Poisson's ratio of the material. Flanking paths can be estimated from the noise reduction values at each measurement point in the classroom; since the data obtained show no significant change in noise reduction from point to point, flanking paths do not significantly affect sound transmission in the classroom.

  18. Method for assessing in-service motor efficiency and in-service motor/load efficiency

    DOEpatents

    Kueck, John D.; Otaduy, Pedro J.

    1997-01-01

    A method and apparatus for assessing the efficiency of an in-service motor. The operating characteristics of the in-service motor are remotely measured. The operating characteristics are then applied to an equivalent circuit for electrical motors. Finally, the equivalent circuit is evaluated to determine the performance characteristics of said in-service motor. Based upon the evaluation, an individual is able to determine the rotor speed, power output, efficiency, and torque of the in-service motor. Additionally, an individual is able to confirm the calculations by comparing measured values with values obtained as a result of the motor equivalent circuit evaluation.

  19. Analysis of performance measurement at HR-GA Department using the balanced scorecard method

    NASA Astrophysics Data System (ADS)

    Vienni; Bachtiar, M.

    2017-12-01

    PT. X is a company engaged in logistics services in Indonesia. Every company faces a dynamic business environment, with competitors from both domestic and overseas markets. To succeed in achieving its objectives, a company should have a comprehensive measurement system that provides strategy feedback to drive company performance. The HR-GA department coordinates directly with the company's management; through its departments, the company expects individual development goals to be met and infrastructure support to run smoothly. In 2015, the company took steps to implement a balanced scorecard for performance measurement; nevertheless, a number of factors prevented it from running optimally. This study aims to analyse the current system and provide suggestions that give the department an overview of its current performance. The results of data processing show 8 objective strategies formulated with 9 key performance indicators. Based on the scorecard results, values of 4.44 were obtained for the customer perspective, 4.32 for the internal business process perspective, and 5.00 for the learning and growth perspective, indicating that performance on these perspectives is categorized as very good.

  20. A Retrospective Analysis of Post-Stroke Berg Balance Scale Scores: How Should Normal and At-Risk Scores Be Interpreted?

    PubMed Central

    Inness, Elizabeth; McIlroy, William E.; Mansfield, Avril

    2017-01-01

    Purpose: The Berg Balance Scale (BBS) is a performance-based measure of standing balance commonly used by clinicians working with individuals post-stroke. Performance on the BBS can be influenced by compensatory strategies, but measures derived from two force plates can isolate compensatory strategies and thus better indicate balance impairment. This study examined BBS scores that reflect “normal” and disordered balance with respect to dual force-plate measures of standing balance in individuals post-stroke. Methods: BBS and force-plate measures were extracted from 75 patient charts. Individuals were classified by BBS score with respect to (1) age-matched normative values and (2) values that suggested increased risk of falls. Multiple analysis of variance was used to examine the effect of group assignment on force-plate measures of standing balance. Results: Individuals with BBS scores within and below normative values did not differ in force-plate measures. Individuals with BBS scores below the falls risk cutoff loaded their affected leg less than individuals with BBS scores above the cutoff. There were no other differences in force-plate measures between these two groups. Conclusions: BBS scores indicating either normal or disordered balance function are not necessarily associated with normal or disordered quiet standing-balance control measured by two force plates. This finding suggests that the BBS may reflect a capacity for compensation rather than any underlying impairments. PMID:28539694

  1. Educational Value of Digital Whole Slides Accompanying Published Online Pathology Journal Articles: A Multi-Institutional Study.

    PubMed

    Yin, Feng; Han, Gang; Bui, Marilyn M; Gibbs, Julie; Martin, Ian; Sundharkrishnan, Lohini; King, Lauren; Jabcuga, Christine; Stuart, Lauren N; Hassell, Lewis A

    2016-07-01

    Despite great interest in using whole slide imaging (WSI) in pathology practice and education, few pathology journals have published WSI pertinent to articles within their pages or as supplemental materials. To evaluate whether there is measurable added educational value of including WSI in publications, 37 participants, comprising 16 junior pathology residents (postgraduate year 1-2; 43.3%), 15 senior pathology residents (postgraduate year 3-4; 40.5%), and 6 board-certified pathologists (16.2%), read a sequence of 10 journal articles on a wide range of pathology topics. A randomized subgroup also reviewed the WSI published with the articles. Both groups completed a survey tool assessing recall of text-based content and of image-based material pertinent to the diseases but not present in the fixed published images. The group examining WSI had higher performance scores on 72% of image-based questions (36 of 50 questions) compared with the non-WSI group. As an internal study control, the WSI group had higher performance scores on only 40% of text-based questions (6 of 15 questions). The WSI group performed significantly better than the non-WSI group on image-based questions compared with text-based questions (P < .05, Fisher exact test). Our study provides supporting evidence that WSI offers enhanced value to the learner beyond the text and fixed images selected by the author. We strongly encourage more journals to incorporate WSI into their publications.

  2. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular value decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray-scale image by inserting an invisible eight-bit gray-scale watermark into it. The key approach of the scheme is to apply the homomorphic transform to the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition to the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC), and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of the watermarked images. Presence of the watermark is ensured by visual inspection and the high NCC and MSSIM values of the extracted watermarks. Robustness of the scheme is verified by high NCC and MSSIM values for attacked watermarked images.
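    Two of the evaluation metrics named above, PSNR and NCC, have standard definitions that can be sketched for flat pixel sequences; this is a generic illustration of those definitions, not the paper's code:

```python
import math

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB for two equal-length pixel
    sequences; infinite when the images are identical."""
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def ncc(a, b):
    """Normalized correlation coefficient between two pixel sequences:
    1.0 indicates a perfectly correlated (e.g. intact) watermark."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

# Illustrative: a uniform patch distorted by one gray level everywhere.
print(psnr([100, 100, 100, 100], [101, 101, 101, 101]))  # ~48.13 dB
print(ncc([1, 2, 3], [1, 2, 3]))  # 1.0 for an identical watermark
```

High PSNR of the watermarked image supports invisibility, while high NCC of the extracted watermark supports detectability, which mirrors how the abstract uses the two metrics.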

  3. Electron Affinity of Phenyl-C61-Butyric Acid Methyl Ester (PCBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Bryon W.; Whitaker, James B.; Wang, Xue B.

    2013-07-25

    The gas-phase electron affinity (EA) of phenyl-C61-butyric acid methyl ester (PCBM), one of the best-performing electron acceptors in organic photovoltaic devices, is measured by low-temperature photoelectron spectroscopy for the first time. The obtained value of 2.63(1) eV is only ca. 0.05 eV lower than that of C60 (2.68(1) eV), compared to a 0.09 V difference in their E1/2 values measured in this work by cyclic voltammetry. Literature E(LUMO) values for PCBM that are typically estimated from cyclic voltammetry, and commonly used as a quantitative measure of acceptor properties, are dispersed over a wide range between -4.3 and -3.62 eV; the reasons for such a huge discrepancy are analyzed here, and a protocol for reliable and consistent estimation of relative fullerene-based acceptor strength in solution is proposed.

  4. Kinematic Analysis and Performance Evaluation of Novel PRS Parallel Mechanism

    NASA Astrophysics Data System (ADS)

    Balaji, K.; Khan, B. Shahul Hamid

    2018-02-01

    In this paper, a novel 3-DoF (degree of freedom) PRS (Prismatic-Revolute-Spherical) type parallel mechanism is designed and presented. The combination of straight and arc type linkages for a 3-DoF parallel mechanism is introduced for the first time. The performance of the mechanisms is evaluated based on indices such as Minimum Singular Value (MSV), Condition Number (CN), Local Conditioning Index (LCI), Kinematic Configuration Index (KCI) and Global Conditioning Index (GCI). The overall reachable workspace of all mechanisms is presented. The kinematic measure, dexterity measure and workspace analysis for all the mechanisms have been evaluated and compared.
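    The singular-value indices named in this record (MSV, CN, LCI) are conventionally computed from the mechanism's Jacobian matrix. A minimal sketch, with an illustrative Jacobian that is not taken from the paper:

```python
import numpy as np

def conditioning_indices(J):
    """MSV, CN and LCI from the singular values of a manipulator Jacobian J."""
    sigma = np.linalg.svd(J, compute_uv=False)
    msv = float(sigma.min())               # Minimum Singular Value
    cn = float(sigma.max() / sigma.min())  # Condition Number, >= 1
    lci = 1.0 / cn                         # Local Conditioning Index, in (0, 1]
    return msv, cn, lci

# Illustrative 3x3 Jacobian (placeholder values, not from the paper)
J = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.1],
              [0.3, 0.0, 1.0]])
msv, cn, lci = conditioning_indices(J)
# An isotropic configuration (J = identity) gives CN = 1 and LCI = 1
```

    GCI is then typically obtained by averaging the LCI over sampled poses of the reachable workspace.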

  5. Sensitivity of solar-cell performance to atmospheric variables. 1: Single cell

    NASA Technical Reports Server (NTRS)

    Klucher, T. M.

    1976-01-01

    The short circuit current of a typical silicon solar cell under direct solar radiation was measured over a range of turbidity, water vapor content, and air mass to determine the relation of the solar cell calibration value (current-to-intensity ratio) to those atmospheric variables. A previously developed regression equation was modified to describe the relation between calibration value, turbidity, water vapor content, and air mass. Based on the values of the constants obtained by a least squares fit of the data to the equation, it was found that increased turbidity lowers the calibration value, while increased water vapor raises it. Cell calibration values exhibited a change of about 6% over the range of atmospheric conditions experienced.
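    The least-squares fit described here can be sketched as a linear regression of calibration value on the three atmospheric variables. The data and coefficients below are synthetic and only echo the signs of the reported trends (turbidity lowers the value, water vapor raises it); the paper's actual regression form may differ.

```python
import numpy as np

# Synthetic data (not the paper's): calibration value C modeled as
#   C = b0 + b1*turbidity + b2*water_vapor + b3*air_mass
rng = np.random.default_rng(0)
n = 50
turbidity = rng.uniform(0.05, 0.40, n)
water_vapor = rng.uniform(0.5, 3.0, n)   # precipitable water, cm (assumed unit)
air_mass = rng.uniform(1.0, 2.5, n)
true_b = np.array([1.00, -0.10, 0.02, -0.01])
C = true_b[0] + true_b[1] * turbidity + true_b[2] * water_vapor + true_b[3] * air_mass

# Least-squares fit of the regression constants, as in the record
X = np.column_stack([np.ones(n), turbidity, water_vapor, air_mass])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)
```

    On noiseless synthetic data the fit recovers the generating coefficients exactly; with real measurements the residuals would quantify how well the modified equation describes the cell.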

  6. Diffusion-weighted imaging of breast lesions: Region-of-interest placement and different ADC parameters influence apparent diffusion coefficient values.

    PubMed

    Bickel, Hubert; Pinker, Katja; Polanec, Stephan; Magometschnigg, Heinrich; Wengert, Georg; Spick, Claudio; Bogner, Wolfgang; Bago-Horvath, Zsuzsanna; Helbich, Thomas H; Baltzer, Pascal

    2017-05-01

    To investigate the influence of region-of-interest (ROI) placement and different apparent diffusion coefficient (ADC) parameters on ADC values, diagnostic performance, reproducibility and measurement time in breast tumours. In this IRB-approved, retrospective study, 149 histopathologically proven breast tumours (109 malignant, 40 benign) in 147 women (mean age 53.2 years) were investigated. Three radiologists independently measured minimum, mean and maximum ADC, each using three ROI placement approaches: (1) a small 2D ROI, (2) a large 2D ROI and (3) a 3D ROI covering the whole lesion. One reader performed all measurements twice. Median ADC values, diagnostic performance, reproducibility, and measurement time were calculated and compared between all combinations of ROI placement approaches and ADC parameters. Median ADC values differed significantly between the ROI placement approaches (p < .001). Minimum ADC showed the best diagnostic performance (AUC .928-.956), followed by mean ADC obtained from 2D ROIs (.926-.94). Minimum and mean ADC showed high intra- (ICC .85-.94) and inter-reader reproducibility (ICC .74-.94). Median measurement time was significantly shorter for the 2D ROIs (p < .001). ROI placement significantly influences ADC values measured in breast tumours. Minimum and mean ADC acquired from 2D ROIs are useful for the differentiation of benign and malignant breast lesions, and are highly reproducible, with rapid measurement. • Region of interest placement significantly influences the apparent diffusion coefficient of breast tumours. • Minimum and mean apparent diffusion coefficient perform best and are reproducible. • 2D regions of interest perform best and provide rapid measurement times.

  7. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate over an 8-28 s time interval. The performance of the processing algorithm was validated by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic direct-contact measurement system.
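    A minimal sketch of MUSIC-based frequency estimation in the heart-rate band, assuming a single dominant periodic component. The covariance order, subspace dimension, sampling rate and test signal below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def music_dominant_freq(x, fs, p=2, m=40, f_grid=None):
    """MUSIC estimate of the dominant frequency of a real sinusoid in noise.
    p: signal-subspace dimension (2 for one real tone), m: covariance order."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # Sample covariance matrix from overlapping length-m snapshots
    snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snaps.T @ snaps / snaps.shape[0]
    w, v = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = v[:, : m - p]                 # noise subspace (smallest eigenvalues)
    if f_grid is None:
        f_grid = np.linspace(0.5, 4.0, 2000)   # plausible heart-rate band, Hz
    k = np.arange(m)
    # Pseudospectrum peaks where the steering vector is orthogonal to En
    pseudo = [1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * f * k / fs)) ** 2
              for f in f_grid]
    return float(f_grid[int(np.argmax(pseudo))])

fs = 50.0                               # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)            # a short 10 s record
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
bpm = 60.0 * music_dominant_freq(x, fs)  # about 72 beats per minute
```

    Because the parametric pseudospectrum sharpens the dominant peak, a usable estimate can be obtained from far fewer samples than a plain FFT of the same record would need, which is the paper's motivation for shorter observation times.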

  8. Other ways of measuring `Big G'

    NASA Astrophysics Data System (ADS)

    Rothleitner, Christian

    2016-03-01

    In 1798, the British scientist Henry Cavendish performed the first laboratory experiment to determine the gravitational force between two massive bodies. From his result, Newton's gravitational constant, G, was calculated. Cavendish's measurement principle was the torsion balance, invented by John Michell some 15 years earlier. During the following two centuries, more than 300 new measurements followed. Although technology and physics developed rapidly during this time, surprisingly, most experiments were still based on the same principle. In fact, the most accurate determination of G to date is a measurement based on the torsion balance principle. Despite the fact that G was one of the first fundamental physical constants ever measured, and despite the huge number of experiments performed on it to this day, its CODATA recommended value still has the highest standard uncertainty when compared with other fundamental physical constants. Even more serious is the fact that measurements based on the same principle often do not overlap within their attributed standard uncertainties. It must be assumed that various experiments are subject to one or more unknown biases. In this talk I will present some alternative experimental setups to the torsion balance which have been performed or proposed to measure G. Although their estimated uncertainties are often higher than those of most torsion balance experiments, revisiting such ideas is worthwhile. Advances in technology could offer solutions to problems which were previously insurmountable; these solutions could result in lower measurement uncertainties. New measurement principles could also help to uncover hidden systematic effects.

  9. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition

    PubMed Central

    2017-01-01

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, they performed recognition on data captured from various independent sensors, which made the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user’s location and movement direction by applying viewpoint-weighted values and/or motion-weighted values. In the proposed method, viewpoint-weighted dynamic time warping with multiple sensors enhances performance by suppressing joint measurement errors and noise due to sensor measurement tolerance, improving recognition by comparing multiple joint sequences effectively. PMID:28817094

  10. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.

    PubMed

    Choi, Hyo-Rim; Kim, TaeYong

    2017-08-17

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, they performed recognition on data captured from various independent sensors, which made the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement direction by applying viewpoint-weighted values and/or motion-weighted values. In the proposed method, viewpoint-weighted dynamic time warping with multiple sensors enhances performance by suppressing joint measurement errors and noise due to sensor measurement tolerance, improving recognition by comparing multiple joint sequences effectively.
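    The viewpoint- and motion-weighted variants are specific to this paper, but the underlying dynamic time warping is standard. A minimal unweighted DTW sketch on one-dimensional joint traces (a per-sensor weight would simply scale the cost term); the gesture traces are synthetic placeholders.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])   # local distance of one joint value
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A phase-shifted copy of a gesture trace warps far more cheaply than a
# genuinely different gesture
gesture = np.sin(np.linspace(0, 2 * np.pi, 50))
shifted = np.sin(np.linspace(0.3, 2 * np.pi + 0.3, 50))
other = np.cos(np.linspace(0, 4 * np.pi, 50))
```

    This tolerance to temporal misalignment is what lets DTW compare gestures performed at different speeds, which fixed-lag metrics cannot.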

  11. The effects of a supportive knee brace on leg performance in healthy subjects.

    PubMed

    Veldhuizen, J W; Koene, F M; Oostvogel, H J; von Thiel, T P; Verstappen, F T

    1991-12-01

    Eight healthy volunteers were fitted with a supportive knee brace (Push Brace 'Heavy') on one knee for four weeks and were tested before, during and after application to establish the effect of bracing on performance. The tests consisted of isokinetic strength measurement of knee flexion and extension, a 60 meter dash, vertical jump height and a progressive horizontal treadmill test until exhaustion (Vmax) with determination of oxygen uptake, heart rate and plasma lactate concentration. After one day of wearing the brace, the performance indicators showed a decline compared with the test before application (base values). Sprint time was 4% longer (p less than 0.01) and Vmax 6% slower (p less than 0.01). Peak torque of knee flexion at 60 and 240 deg.sec-1 was 6% and 9% lower, respectively (both p less than 0.05). Peak extension torque at 60 deg.sec-1 was 9% lower (p less than 0.05). While wearing the brace after four weeks, the test performances were practically identical to the base values. After removal of the brace, all test parameters were statistically similar to the base values; heart rate at submaximal exercise levels was even lower (p less than 0.05). In conclusion, performance in sports with test-like exercise patterns is not affected by the brace tested. Bracing does not "weaken the knee", as is widely believed in sports practice.

  12. An excess noise measurement system for weak responsivity avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Qiao, Liang; Dimler, Simon J.; Baharuddin, Aina N. A. P.; Green, James E.; David, John P. R.

    2018-06-01

    A system for measuring the excess noise associated with avalanche gain in avalanche photodiodes (APDs) at reduced photocurrent, using a transimpedance amplifier front end and phase-sensitive detection, is described. The system can reliably measure the excess noise power of devices even when the unmultiplied photocurrent is low (~10 nA). This is more than one order of magnitude better than previously reported systems and represents a significantly better signal-to-noise ratio. This improvement in performance was achieved by increasing the value of the feedback resistor and reducing the op-amp bandwidth. The ability to characterise APD performance with such low photocurrents enables the use of low-power light sources such as light-emitting diodes rather than lasers to investigate APD noise performance.

  13. Dynamical complexity of short and noisy time series. Compression-Complexity vs. Shannon entropy

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Balasubramanian, Karthi

    2017-07-01

    Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such compression-complexity measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady-state value faster than LZ. Compression-complexity measures are promising for applications which involve short and noisy time series.
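    Of the two compression-complexity measures, Lempel-Ziv (LZ76) complexity has a compact definition: the number of distinct phrases produced when parsing the sequence left to right, each phrase being the shortest substring not yet seen in the preceding text. A minimal sketch (ETC is omitted, as its iterated-substitution scheme is more involved):

```python
def lz76_complexity(s):
    """Lempel-Ziv (LZ76) complexity of a symbol string: the number of phrases
    in the left-to-right exhaustive parsing."""
    n, i, c = len(s), 0, 0
    while i < n:
        l = 1
        # Grow the phrase while it still occurs somewhere in the prior text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# Lempel and Ziv's classic example parses into 6 phrases, while a periodic
# string stays at a low, length-independent count
examples = [lz76_complexity("0001101001000101"), lz76_complexity("01" * 8)]
```

    The finite, integer-valued output is what gives these measures their usable resolution on the very short binary sequences where entropy estimates break down.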

  14. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.

  15. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  16. Vitamin D status and falls, frailty, and fractures among postmenopausal Japanese women living in Hawaii.

    PubMed

    Pramyothin, P; Techasurungkul, S; Lin, J; Wang, H; Shah, A; Ross, P D; Puapong, R; Wasnich, R D

    2009-11-01

    Vitamin D status and its relationship to physical performance, falls, and fractures in 495 postmenopausal women of Japanese ancestry in Hawaii were investigated. The mean 25-hydroxyvitamin D (25-OHD) was 31.94 ng/mL. No significant association of 25-OHD was demonstrated with most outcomes, possibly due to higher 25-OHD levels in this population. In this study, we investigated vitamin D status and its relationship to physical performance, muscle strength, falls, and fractures in postmenopausal Japanese females living in Hawaii. Of 510 community-dwelling women who participated in the eighth examination of the Hawaii Osteoporosis Study, 495 were included in these analyses. Multivariate regression models were used to evaluate the relationship of 25-OHD (D(3) and total) to eight performance-based measurements, 12 activities of daily living (ADLs), and muscle strength (grip, triceps, and quadriceps). Logistic regression analyses were performed to evaluate the relationship of 25-OHD to falls, vertebral fractures, and non-vertebral fractures. The mean total 25-OHD was 31.94 +/- 9.46 ng/mL; 44% of subjects had values <30 ng/mL, while none had values <10-12 ng/mL. There was little evidence of seasonal variation. Among performance-based measures, ADLs, and strength tests, only quadriceps strength was significantly associated with total 25-OHD (p = 0.0063) and 25-OHD(3) (p = 0.0001). No significant association of 25-OHD was found with vertebral or non-vertebral fractures, or incidence of one or more falls. Lack of serum 25-OHD relationship with falls and fractures or most physical performance measures in this study may be related to the low prevalence of very low 25-OHD levels in this population.

  17. Reliability analysis of a sensitive and independent stabilometry parameter set

    PubMed Central

    Nagymáté, Gergely; Orlovits, Zsanett

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature, or not for every stance type used in stabilometry assessments (for example, single leg stances). The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938

  18. Reliability analysis of a sensitive and independent stabilometry parameter set.

    PubMed

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature, or not for every stance type used in stabilometry assessments (for example, single leg stances). The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
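    The CoP path length, the most reliable parameter in this study, is simply the summed distance between successive CoP samples. A minimal sketch with a synthetic trajectory (the force-plate data themselves are not public):

```python
import numpy as np

def cop_path_length(cop_xy):
    """Total CoP path length: summed Euclidean distance between successive
    (x, y) samples of the center-of-pressure trajectory."""
    cop = np.asarray(cop_xy, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(cop, axis=0), axis=1)))

# Synthetic trajectory: tracing the unit square once gives a path length of 4
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
```

    Because it aggregates over the whole recording rather than depending on a single extreme sample, the path length is naturally less sensitive to outliers, which is consistent with its high ICC here.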

  19. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

    Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perception and diagnostic performance. Exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated, focusing on visual search measures and their analyses. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed significant differences in eye-movement-based FROC measures between groups with different experience levels: the area under the curve (AUC) scores were higher for the experienced group for the fixation and dwell time measurements. Positive correlations were also found between AUC scores from the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can act as a performance indicator for assessment in medical image interpretation and for evaluating training procedures. Such analyses open new ways of combining eye movement data and FROC methods to provide an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.

  20. Composite Quality Measures for Common Inpatient Medical Conditions

    PubMed Central

    Chen, Lena M.; Staiger, Douglas O.; Birkmeyer, John D.; Ryan, Andrew M.; Zhang, Wenying; Dimick, Justin B.

    2014-01-01

    Background Public reporting on quality aims to help patients select better hospitals. However, individual quality measures are sub-optimal in identifying superior and inferior hospitals based on outcome performance. Objective To combine structure, process, and outcome measures into an empirically-derived composite quality measure for heart failure (HF), acute myocardial infarction (AMI), and pneumonia (PNA). To assess how well the composite measure predicts future high and low performers, and explains variance in future hospital mortality. Research Design Using national Medicare data, we created a cohort of older patients treated at an acute care hospital for HF (n=1,203,595), AMI (n=625,595), or PNA (n=1,234,299). We ranked hospitals based on their July 2005 to June 2008 performance on the composite. We then estimated the odds of future (July to December 2009) 30-day, risk-adjusted mortality at the worst vs. best quintile of hospitals. We repeated this analysis using 2005-2008 performance on existing quality indicators, including mortality. Results The composite (vs. Hospital Compare) explained 68% (vs. 39%) of variation in future AMI mortality rates. In 2009, if an AMI patient had chosen a hospital in the worst vs. best quintile of performance using 2005-2008 composite (vs. Hospital Compare) rankings, he or she would have had 1.61 (vs. 1.39) times the odds of dying in 30 days (p-value for difference < 0.001). Results were similar for HF and PNA. Conclusions Composite measures of quality for HF, AMI, and PNA performed better than existing measures at explaining variation in future mortality and predicting future high and low performers. PMID:23942222

  1. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption concerns the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other concerns the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its effect on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that different enthalpy of vaporization values change the shapes of the IAY curves and the response of the SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling results. In addition, using different assumed factors to convert measured organic carbon to organic aerosol concentrations causes substantial variations in the processed ambient data themselves, which are normally used as performance targets for model evaluation. The combination of uncertainties in the modeling results and in these moving performance targets causes major uncertainty in the final conclusion about model performance. Without further information, the best a modeler can do is to choose a combination of assumed values from the sensible parameter ranges available in the literature, based on the best match of the modeling results with the processed measurement data. However, such a match does not necessarily guarantee that the model itself is rigorous or that its performance is robust. Conclusions on model performance can only be reached with sufficient understanding of the uncertainties and their impact.

  2. Partial discharge measurements on 110kV current transformers. Setting the control value. Case study

    NASA Astrophysics Data System (ADS)

    Dan, C.; Morar, R.

    2017-05-01

    The case study presents a series of partial discharge measurements reflecting the state of insulation of 110 kV current transformers located in Sibiu county substations. Measurements were performed based on the electrical method, using MPD600: an acquisition and analysis toolkit for detecting, recording, and analyzing partial discharges. MPD600 consists of an acquisition unit, an optical interface and a computer with dedicated software. The system allows measurement of partial discharges on site, even in the presence of strong electromagnetic interference, because it provides synchronous acquisition from all measurement points. Measurements, with the ability to be calibrated, therefore render: (1) a value subject to interpretation according to IEC 61869-1:2007, IEC 61869-2:2012, IEC 61869-3:2011, IEC 61869-5:2011 and IEC 60270:2000; and (2) the possibility to determine the quantitative limit of PD (a certain control value) up to which the equipment can be operated safely and repaired with minimal cost (relative to the high costs implied by eliminating the consequences of a failure), identified empirically (a process in which the instrument transformer subjected to the tests was completely destroyed).

  3. Preparation and value assignment of standard reference material 968e fat-soluble vitamins, carotenoids, and cholesterol in human serum.

    PubMed

    Thomas, Jeanice B; Duewer, David L; Mugenya, Isaac O; Phinney, Karen W; Sander, Lane C; Sharpless, Katherine E; Sniegoski, Lorna T; Tai, Susan S; Welch, Michael J; Yen, James H

    2012-01-01

    Standard Reference Material 968e Fat-Soluble Vitamins, Carotenoids, and Cholesterol in Human Serum provides certified values for total retinol, γ- and α-tocopherol, total lutein, total zeaxanthin, total β-cryptoxanthin, total β-carotene, 25-hydroxyvitamin D(3), and cholesterol. Reference and information values are also reported for nine additional compounds including total α-cryptoxanthin, trans- and total lycopene, total α-carotene, trans-β-carotene, and coenzyme Q(10). The certified values for the fat-soluble vitamins and carotenoids in SRM 968e were based on the agreement of results from the means of two liquid chromatographic methods used at the National Institute of Standards and Technology (NIST) and from the median of results of an interlaboratory comparison exercise among institutions that participate in the NIST Micronutrients Measurement Quality Assurance Program. The assigned values for cholesterol and 25-hydroxyvitamin D(3) in the SRM are the means of results obtained using the NIST reference method based upon gas chromatography-isotope dilution mass spectrometry and liquid chromatography-isotope dilution tandem mass spectrometry, respectively. SRM 968e is currently one of two available health-related NIST reference materials with concentration values assigned for selected fat-soluble vitamins, carotenoids, and cholesterol in human serum matrix. This SRM is used extensively by laboratories worldwide primarily to validate methods for determining these analytes in human serum and plasma and for assigning values to in-house control materials. The value assignment of the analytes in this SRM will help support measurement accuracy and traceability for laboratories performing health-related measurements in the clinical and nutritional communities.

  4. Joint Winter Runway Friction Program Accomplishments

    NASA Technical Reports Server (NTRS)

    Yager, Thomas J.; Wambold, James C.; Henry, John J.; Andresen, Arild; Bastian, Matthew

    2002-01-01

    The major program objectives are: (1) to harmonize ground-vehicle friction measurements so that they report a consistent friction value or index for similar contaminated runway conditions (for example, compacted snow), and (2) to establish a reliable correlation between ground-vehicle friction measurements and aircraft braking performance. Accomplishing these objectives would give airport operators better procedures for evaluating runway friction and maintaining acceptable operating conditions, would provide pilots with information on which to base go/no-go decisions, and would help reduce traction-related aircraft accidents.

  5. An Analytical Method for Measuring Competence in Project Management

    ERIC Educational Resources Information Center

    González-Marcos, Ana; Alba-Elías, Fernando; Ordieres-Meré, Joaquín

    2016-01-01

    The goal of this paper is to present a competence assessment method in project management that is based on participants' performance and value creation. It seeks to close an existing gap in competence assessment in higher education. The proposed method relies on information and communication technology (ICT) tools and combines Project Management…

  6. Laboratory Characterization of Solid Grade SW Brick

    DTIC Science & Technology

    2007-08-01

    Society for Testing and Materials (ASTM) D 2216 (ASTM 2002e). Based on the appropriate values of posttest water content, wet density, and an assumed...strain path (UX/SP) tests. In addition to the mechanical property tests, nondestructive pulse-velocity measurements were performed on each specimen...Figure 3. Spring-arm lateral deformeter mounted on test specimen

  7. Great Teaching

    ERIC Educational Resources Information Center

    Chetty, Raj; Friedman, John N.; Rockoff, Jonah E.

    2012-01-01

    In February 2012, the "New York Times" took the unusual step of publishing performance ratings for nearly 18,000 New York City teachers based on their students' test-score gains, commonly called value-added (VA) measures. This action, which followed a similar release of ratings in Los Angeles last year, drew new attention to the growing use of VA…

  8. 40 CFR 125.87 - As an owner or operator of a new facility, must I perform monitoring?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... biweekly during the primary period of reproduction, larval recruitment, and peak abundance identified... screen systems, you must monitor head loss across the screens and correlate the measured value with the... source water surface elevation (best professional judgment based on available hydrological data). The...

  9. The short physical performance battery as a predictor of functional capacity after stroke.

    PubMed

    Stookey, Alyssa D; Katzel, Leslie I; Steinbrenner, Gregory; Shaughnessy, Marianne; Ivey, Frederick M

    2014-01-01

    The short physical performance battery is a widely used instrument for quantifying lower extremity function in older adults. However, its utility for predicting endurance-based measures of functional performance that are more difficult to conduct in clinical settings is unknown. An understanding of this could be particularly relevant in mobility-impaired stroke survivors, for whom establishing the predictive strength of simpler-to-perform measures would aid in tracking broader categories of functional disability. This cross-sectional study was conducted to determine whether the short physical performance battery is related to functional measures with a strong endurance component. Functional measures (short physical performance battery, peak aerobic capacity, and 6-minute walk) were obtained and compared for the first time in stroke survivors with hemiparetic gait. Pearson correlation coefficients were used to assess the strength of the relationships (α = .05). Forty-three stroke participants performed a standardized short physical performance battery. Forty-one of the subjects completed a 6-minute walk, and 40 completed a peak treadmill test. Mean short physical performance battery (6.3 ± 2.5 [mean ± SD]), 6-minute walk (242 ± 115 meters), and peak aerobic capacity (17.4 ± 5.4 mL/kg/min) indicated subjects had moderately to severely impaired lower extremity functional performance. The short physical performance battery was related to both 6-minute walk (r = 0.76; P < .0001) and peak fitness (r = 0.52; P < .001). Our results show that the short physical performance battery may be reflective of endurance-based, longer-distance performance measures that would be difficult to perform in standard clinical stroke settings. Additional studies are needed to explore the value of using the short physical performance battery to assess rehabilitation-related functional progression after stroke. Published by Elsevier Inc.
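
    Correlations like the ones reported above (e.g., r = 0.76 between battery score and 6-minute-walk distance) can be reproduced from raw paired scores with a few lines of standard-library Python. A minimal sketch; the paired sample values below are invented for illustration, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired scores: battery total (0-12) vs 6-minute walk (meters)
sppb = [4, 5, 6, 7, 8, 9]
walk = [150, 180, 240, 230, 300, 330]
r = pearson_r(sppb, walk)
```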

  10. Impact of Multileaf Collimator Configuration Parameters on the Dosimetric Accuracy of 6-MV Intensity-Modulated Radiation Therapy Treatment Plans.

    PubMed

    Petersen, Nick; Perrin, David; Newhauser, Wayne; Zhang, Rui

    2017-01-01

    The purpose of this study was to evaluate the impact of selected configuration parameters that govern multileaf collimator (MLC) transmission and rounded leaf offset in a commercial treatment planning system (TPS) (Pinnacle 3 , Philips Medical Systems, Andover, MA, USA) on the accuracy of intensity-modulated radiation therapy (IMRT) dose calculation. The MLC leaf transmission factor was modified based on measurements made with ionization chambers. The table of parameters containing rounded-leaf-end offset values was modified by measuring the radiation field edge as a function of leaf bank position with an ionization chamber in a scanning water-tank dosimetry system and comparing the locations to those predicted by the TPS. The modified parameter values were validated by performing IMRT quality assurance (QA) measurements on 19 gantry-static IMRT plans. Planar dose measurements were performed with radiographic film and a diode array (MapCHECK2) and compared to TPS calculated dose distributions using default and modified configuration parameters. Based on measurements, the leaf transmission factor was changed from a default value of 0.001 to 0.005. Surprisingly, this modification resulted in a small but statistically significant worsening of IMRT QA gamma-index passing rate, which revealed that the overall dosimetric accuracy of the TPS depends on multiple configuration parameters in a manner that is coupled and not intuitive because of the commissioning protocol used in our clinic. The rounded leaf offset table had little room for improvement, with the average difference between the default and modified offset values being -0.2 ± 0.7 mm. While our results depend on the current clinical protocols, treatment unit and TPS used, the methodology used in this study is generally applicable. Different clinics could potentially obtain different results and improve their dosimetric accuracy using our approach.

  11. Dinosaurs can fly -- High performance refining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  12. Betavoltaic battery performance: Comparison of modeling and experiment.

    PubMed

    Svintsov, A A; Krasnov, A A; Polikarpov, M A; Polyakov, A Y; Yakimov, E B

    2018-07-01

    A verification of the Monte Carlo simulation software for the prediction of short circuit current value is carried out using the Ni-63 source with the activity of 2.7 mCi/cm² and converters based on Si p-i-n diodes and SiC and GaN Schottky diodes. A comparison of experimentally measured and calculated short circuit current values confirms the validity of the proposed modeling method, with the difference in the measured and calculated short circuit current values not exceeding 25% and the error in the predicted output power values being below 30%. Effects of the protective layer formed on the Ni-63 radioactive film and of the passivating film on the semiconductor converters on the energy deposited inside the converters are estimated. The maximum attainable betavoltaic cell parameters are estimated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. In vivo measurement of the longitudinal relaxation time of arterial blood (T1a) in the mouse using a pulsed arterial spin labeling approach.

    PubMed

    Thomas, David L; Lythgoe, Mark F; Gadian, David G; Ordidge, Roger J

    2006-04-01

    A novel method for measuring the longitudinal relaxation time of arterial blood (T1a) is presented. Knowledge of T1a is essential for accurately quantifying cerebral perfusion using arterial spin labeling (ASL) techniques. The method is based on the flow-sensitive alternating inversion recovery (FAIR) pulsed ASL (PASL) approach. We modified the standard FAIR acquisition scheme by incorporating a global saturation pulse at the beginning of the recovery period. With this approach the FAIR tissue signal difference has a simple monoexponential dependence on the recovery time, with T1a as the time constant. Therefore, FAIR measurements performed over a range of recovery times can be fitted to a monoexponential recovery curve and T1a can be calculated directly. This eliminates many of the difficulties associated with the measurement of T1a. Experiments performed in vivo in the mouse at 2.35T produced a mean value of 1.51 s for T1a, consistent with previously published values. (c) 2006 Wiley-Liss, Inc.

  14. The importance of centralities in dark network value chains

    NASA Astrophysics Data System (ADS)

    Toth, Noemi; Gulyás, László; Legendi, Richard O.; Duijn, Paul; Sloot, Peter M. A.; Kampis, George

    2013-09-01

    This paper introduces three novel centrality measures based on the nodes' role in the operation of a joint task, i.e., their position in a criminal network value chain. For this, we consider networks where nodes have attributes describing their "capabilities" or "colors", i.e., the possible roles they may play in a value chain. A value chain here is understood as a series of tasks to be performed in a specific order, each requiring a specific capability. The first centrality notion measures how many value chain instances a given node participates in. The other two assess the costs of replacing a node in the value chain in case the given node is no longer available to perform the task. The first of them considers the direct distance (shortest path length) between the node in question and its nearest replacement, while the second evaluates the actual replacement process, assuming that preceding and following nodes in the network should each be able to find and contact the replacement. In this report, we demonstrate the properties of the new centrality measures using a few toy examples and compare them to classic centralities, such as betweenness, closeness and degree centrality. We also apply the new measures to randomly colored empirical networks. We find that the newly introduced centralities differ sufficiently from the classic measures, pointing towards different aspects of the network. Our results also pinpoint the difference between having a replacement node in the network and being able to find one. This is the reason why "introduction distance" often has a noticeable correlation with betweenness. Our studies show that projecting value chains over networks may significantly alter the nodes' perceived importance. These insights might have important implications for the way law enforcement or intelligence agencies look at the effectiveness of dark network disruption strategies over time.
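
    The first of the three measures, counting how many value-chain instances a node takes part in, can be illustrated on a toy colored network. This is a stdlib-Python sketch under a simplified reading of the abstract; the node names, colors, and chain are invented:

```python
from collections import defaultdict

def chain_instances(adj, color, chain):
    """Enumerate value-chain instances: node sequences whose colors follow the
    chain's capability order, with consecutive nodes linked in the network."""
    paths = [[n] for n in color if color[n] == chain[0]]
    for capability in chain[1:]:
        paths = [p + [n] for p in paths
                 for n in adj[p[-1]] if color[n] == capability]
    return paths

def participation_centrality(adj, color, chain):
    """Count, for each node, how many value-chain instances it appears in."""
    counts = defaultdict(int)
    for path in chain_instances(adj, color, chain):
        for n in set(path):
            counts[n] += 1
    return dict(counts)

# Invented toy network: one producer, two couriers, one seller
adj = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"}, "d": {"b", "c"}}
color = {"a": "produce", "b": "transport", "c": "transport", "d": "sell"}
counts = participation_centrality(adj, color, ["produce", "transport", "sell"])
```

    In this toy graph the two chain instances are a-b-d and a-c-d, so the end nodes a and d participate twice while each courier participates once.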

  15. Measures to assess the performance of an Australian non-government charitable non-acute health service: A Delphi Survey of Organisational Stakeholders.

    PubMed

    Colbran, Richard; Ramsden, Robyn; Stagnitti, Karen; Adams, Samantha

    2018-02-01

    Organisation performance measurement is relevant for non-profit charitable organisations as they strive for security in an increasingly competitive funding environment. This study aimed to identify the priority measures and indicators of organisational performance of an Australian non-government charitable organisation that delivers non-acute health services. Seventy-seven and 59 participants across nine stakeholder groups responded to a two-staged Delphi technique study of a case study organisation. The stage one questionnaire was developed using information garnered through a detailed review of literature. Data from the first round were aggregated and analysed for the stage two survey. The final data represented a group consensus. Quality of care was ranked the most important of six organisational performance measures. Service user satisfaction was ranked second followed by financial performance, internal processes, employee learning and growth and community engagement. Thirteen priority indicators were determined across the six measures. Consensus was reached on the priority organisational performance measures and indicators. Stakeholders of the case study organisation value evidence-based practice, technical strength of services and service user satisfaction over more commercially orientated indicators.

  16. Employers' use of value-based purchasing strategies.

    PubMed

    Rosenthal, Meredith B; Landon, Bruce E; Normand, Sharon-Lise T; Frank, Richard G; Ahmad, Thaniyyah S; Epstein, Arnold M

    2007-11-21

    Value-based purchasing by employers has often been portrayed as the lynchpin to quality improvement in a market-based health care system. Although a small group of the largest national employers has been actively engaged in promoting quality measurement, reporting, and pay for performance, it is unknown whether these ideas have significantly permeated employer-sponsored health benefit purchasing. To provide systematic descriptions and analyses of value-based purchasing and related efforts to improve quality of care by health care purchasers. We conducted telephone interviews with executives at 609 of the largest employers across 41 US markets between July 2005 and March 2006. The 41 randomly selected markets have at least 100,000 persons enrolled in health maintenance organizations, include approximately 91% of individuals enrolled in health maintenance organizations nationally, and represent roughly 78% of the US metropolitan population. Using the Dun & Bradstreet database of US employers, we identified the 26 largest firms in each market. Firms ranged in size from 60 to 250,000 employees. The degree to which value-based purchasing and related strategies are reported being used by employers. Percentages were weighted by number of employees. Of 1041 companies contacted, 609 employer representatives completed the survey (response rate, 64%). A large percentage of surveyed executives reported that they examine health plan quality data (269 respondents; 65% [95% confidence interval {CI}, 57%-74%]; P<.001), but few reported using it for performance rewards (49 respondents; 17% [95% CI, 7%-27%]; P=.008) or to influence employees (71 respondents; 23% [95% CI, 13%-33%]). Physician quality information is even less commonly examined (71 respondents; 16% [95% CI, 9%-23%]) or used by employers to reward performance (8 respondents; 2% [95% CI, 0%-3%]) or influence employee choice of providers (34 respondents; 8% [95% CI, 3%-12%]). Surveyed employers as a whole do not appear to be individually implementing incentives and programs in line with value-based purchasing ideals.

  17. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, interpolation performance is measured by pricing error, and we propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, having the lowest pricing error.
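
    The LOOCV pricing error is straightforward to compute: refit the interpolant with one option quote held out, price that strike, and aggregate the errors. A stdlib-Python sketch, with Lagrange interpolation standing in for the paper's polynomial fits and invented strike/price data:

```python
def lagrange_interp(xs, ys, x):
    """Polynomial through all (xs, ys) points, evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def loocv_rmse(xs, ys, predict):
    """Leave-one-out cross-validated RMSE of an interpolation scheme."""
    sq_errors = []
    for i in range(len(xs)):
        xs_train = xs[:i] + xs[i + 1:]
        ys_train = ys[:i] + ys[i + 1:]
        sq_errors.append((predict(xs_train, ys_train, xs[i]) - ys[i]) ** 2)
    return (sum(sq_errors) / len(sq_errors)) ** 0.5

# Invented strike/price quotes lying on a smooth (quadratic) curve, so the
# held-out quote is recovered almost exactly and the LOOCV error is ~0
strikes = [90.0, 95.0, 100.0, 105.0, 110.0]
prices = [(k - 80.0) ** 2 / 100.0 for k in strikes]
err = loocv_rmse(strikes, prices, lagrange_interp)
```

    With noisy real quotes the error would not vanish, and comparing this number across candidate interpolants is exactly the selection rule the paper proposes.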

  18. Comparison of the Experimental Performance of Ferroelectric CPW Circuits with Method of Moment Simulations and Conformal Mapping

    NASA Technical Reports Server (NTRS)

    VanKeuls, Fred W.; Chevalier, Chris T.; Miranda, Felix A.; Carlson, C. M.; Rivkin, T. V.; Parilla, P. A.; Perkins, J. D.; Ginley, D. S.

    2001-01-01

    Experimental measurements of coplanar waveguide (CPW) circuits atop thin films of ferroelectric Ba(x)Sr(1-x)TiO3 (BST) were made as a function bias from 0 to 200 V and frequency from 0.045 to 20 GHz. The resulting phase shifts are compared with method of moments electromagnetic simulations and a conformal mapping analysis to determine the dielectric constant of the BST films. Based on the correlation between the experimental and the modeled data, an analysis of the extent to which the electromagnetic simulators provide reliable values for the dielectric constant of the ferroelectric in these structures has been performed. In addition, to determine how well the modeled data compare with experimental data, the dielectric constant values were also compared to low frequency measurements of interdigitated capacitor circuits on the same films. Results of these comparisons will be presented.

  19. Development of capacitive multiplexing circuit for SiPM-based time-of-flight (TOF) PET detector

    NASA Astrophysics Data System (ADS)

    Choe, Hyeok-Jun; Choi, Yong; Hu, Wei; Yan, Jianhua; Jung, Jin Ho

    2017-04-01

    There has been great interest in developing a time-of-flight (TOF) PET to improve the signal-to-noise ratio of PET image relative to that of non-TOF PET. Silicon photomultiplier (SiPM) arrays have attracted attention for use as a fast TOF PET photosensor. Since numerous SiPM arrays are needed to construct a modern human PET, a multiplexing method providing both good timing performance and high channel reduction capability is required to develop a SiPM-based TOF PET. The purpose of this study was to develop a capacitive multiplexing circuit for the SiPM-based TOF PET. The proposed multiplexing circuit was evaluated by measuring the coincidence resolving time (CRT) and the energy resolution as a function of the overvoltage using three different capacitor values of 15, 30, and 51 pF. A flood histogram was also obtained and quantitatively assessed. Experiments were performed using a 4 × 4 array of 3 × 3 mm² SiPMs. Regarding the capacitor values, the multiplexing circuit using a smaller capacitor value showed the best timing performance. On the other hand, the energy resolution and flood histogram quality of the multiplexing circuit deteriorated as the capacitor value became smaller. The proposed circuit was able to achieve a CRT of 260 ± 4 ps FWHM and an energy resolution of 17.1% with a pair of 2 × 2 × 20 mm³ LYSO crystals using a capacitor value of 30 pF at an overvoltage of 3.0 V. It was also possible to clearly resolve a 6 × 6 array of LYSO crystals in the flood histogram using the multiplexing circuit. The experiment results indicate that the proposed capacitive multiplexing circuit is useful to obtain an excellent timing performance and a crystal-resolving capability in the flood histogram with minimal degradation of the energy resolution, as well as to reduce the number of readout channels of the SiPM-based TOF PET detector.

  20. Comparison of NASA-TLX scale, Modified Cooper-Harper scale and mean inter-beat interval as measures of pilot mental workload during simulated flight tasks.

    PubMed

    Mansikka, Heikki; Virtanen, Kai; Harris, Don

    2018-04-30

    The sensitivity of NASA-TLX scale, modified Cooper-Harper (MCH) scale and the mean inter-beat interval (IBI) of successive heart beats, as measures of pilot mental workload (MWL), were evaluated in a flight training device (FTD). Operational F/A-18C pilots flew instrument approaches with varying task loads. Pilots' performance, subjective MWL ratings and IBI were measured. Based on the pilots' performance, three performance categories were formed; high-, medium- and low-performance. Values of the subjective rating scales and IBI were compared between categories. It was found that all measures were able to differentiate most task conditions and there was a strong, positive correlation between NASA-TLX and MCH scale. An explicit link between IBI, NASA-TLX, MCH and performance was demonstrated. While NASA-TLX, MCH and IBI have all been previously used to measure MWL, this study is the first one to investigate their association in a modern FTD, using a realistic flying mission and operational pilots.

  1. Evaluation of 16 measures of mental workload using a simulated flight task emphasizing mediational activity

    NASA Technical Reports Server (NTRS)

    Wierwille, W. W.; Rahimi, M.; Casali, J. G.

    1985-01-01

    As aircraft and other systems become more automated, a shift is occurring in human operator participation in these systems. This shift is away from manual control and toward activities that tap the higher mental functioning of human operators. Therefore, an experiment was performed in a moving-base flight simulator to assess mediational (cognitive) workload measurement. Specifically, 16 workload estimation techniques were evaluated as to their sensitivity and intrusion in a flight task emphasizing mediational behavior. Task loading, using navigation problems presented on a display, was treated as an independent variable, and workload-measure values were treated as dependent variables. Results indicate that two mediational task measures, two rating scale measures, time estimation, and two eye behavior measures were reliably sensitive to mediational loading. The time estimation measure did, however, intrude on mediational task performance. Several of the remaining measures were completely insensitive to mediational load.

  2. Coefficient of Variance as Quality Criterion for Evaluation of Advanced Hepatic Fibrosis Using 2D Shear-Wave Elastography.

    PubMed

    Lim, Sanghyeok; Kim, Seung Hyun; Kim, Yongsoo; Cho, Young Seo; Kim, Tae Yeob; Jeong, Woo Kyoung; Sohn, Joo Hyun

    2018-02-01

    To compare the diagnostic performance for advanced hepatic fibrosis measured by 2D shear-wave elastography (SWE), using either the coefficient of variance (CV) or the interquartile range divided by the median value (IQR/M) as the quality criterion. In this retrospective study, from January 2011 to December 2013, 96 patients who underwent both liver stiffness measurement by 2D SWE and liver biopsy for hepatic fibrosis grading were enrolled. The diagnostic performances of the CV and the IQR/M were analyzed using receiver operating characteristic curves with areas under the curves (AUCs) and were compared by Fisher's Z test, based on matching the cutoff points in an interactive dot diagram. All P values less than 0.05 were considered significant. A cutoff IQR/M of 0.21 corresponded to a matched CV cutoff of 20%. With the CV cutoff of 20%, diagnostic performance for advanced hepatic fibrosis (≥ F3 grade) was better in the group with CV less than 20% than in the group with CV greater than or equal to 20% (AUC 0.967 versus 0.786, z statistic = 2.23, P = .025), whereas the matched IQR/M cutoff of 0.21 showed no difference (AUC 0.918 versus 0.927, z statistic = -0.178, P = .859). The validity of liver stiffness measurements made by 2D SWE for assessing advanced hepatic fibrosis may be judged using CVs, and when the CV is less than 20% the measurement can be considered "more reliable" than when using an IQR/M of less than 0.21. © 2017 by the American Institute of Ultrasound in Medicine.
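
    Both quality criteria are simple dispersion ratios over the repeated stiffness acquisitions. A stdlib-Python sketch; the measurement values below are invented:

```python
import statistics as st

def quality_criteria(measurements):
    """CV (%) and IQR/M for a set of repeated stiffness measurements."""
    cv = st.stdev(measurements) / st.mean(measurements) * 100.0
    q1, _, q3 = st.quantiles(measurements, n=4)  # quartiles (exclusive method)
    iqr_m = (q3 - q1) / st.median(measurements)
    return cv, iqr_m

# Invented 2D SWE stiffness acquisitions (kPa)
stiffness = [9.0, 9.5, 10.0, 10.5, 11.0]
cv, iqr_m = quality_criteria(stiffness)
# A CV below 20% would meet the paper's "more reliable" criterion
```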

  3. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
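
    The coupled-adjustment idea can be caricatured in a few lines: nudge the first control parameter toward the performance target, then recompute the second parameter from the coupling at every step. This is a toy sketch with an invented linear performance model and coupling, not the patented method:

```python
def tune(perf, target, p1, p2, couple, step=0.5, iters=200, tol=1e-6):
    """Iteratively adjust p1 toward the target; each adjustment of p1
    necessitates a corresponding (coupled) adjustment of p2."""
    for _ in range(iters):
        err = perf(p1, p2) - target
        if abs(err) < tol:
            break
        p1 -= step * err   # primary adjustment toward the target
        p2 = couple(p1)    # corresponding adjustment of the second parameter
    return p1, p2

# Invented model: performance depends on both parameters; p2 is slaved to p1
perf = lambda p1, p2: p1 + 0.5 * p2
couple = lambda p1: 2.0 - p1
p1, p2 = tune(perf, 2.0, 0.0, couple(0.0), couple)
```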

  4. Use of Low-Value Pediatric Services Among the Commercially Insured

    PubMed Central

    Schwartz, Aaron L.; Volerman, Anna; Conti, Rena M.; Huang, Elbert S.

    2016-01-01

    BACKGROUND: Claims-based measures of “low-value” pediatric services could facilitate the implementation of interventions to reduce the provision of potentially harmful services to children. However, few such measures have been developed. METHODS: We developed claims-based measures of 20 services that typically do not improve child health according to evidence-based guidelines (eg, cough and cold medicines). Using these measures and claims from 4.4 million commercially insured US children in the 2014 Truven MarketScan Commercial Claims and Encounters database, we calculated the proportion of children who received at least 1 low-value pediatric service during the year, as well as total and out-of-pocket spending on these services. We report estimates based on "narrow" measures designed to only capture instances of service use that were low-value. To assess the sensitivity of results to measure specification, we also reported estimates based on "broad measures" designed to capture most instances of service use that were low-value. RESULTS: According to the narrow measures, 9.6% of children in our sample received at least 1 of the 20 low-value services during the year, resulting in $27.0 million in spending, of which $9.2 million was paid out-of-pocket (33.9%). According to the broad measures, 14.0% of children in our sample received at least 1 of the 20 low-value services during the year. CONCLUSIONS: According to a novel set of claims-based measures, at least 1 in 10 children in our sample received low-value pediatric services during 2014. Estimates of low-value pediatric service use may vary substantially with measure specification. PMID:27940698
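
    The headline numbers are simple aggregations over claim lines once each service is flagged as low-value. A stdlib-Python sketch with an invented claims extract; the service codes and dollar amounts are made up:

```python
def low_value_summary(claims, low_value_codes, n_children):
    """Share of children with >= 1 low-value service, plus spending totals.
    Each claim is a tuple (child_id, service_code, total_paid, out_of_pocket)."""
    lv = [c for c in claims if c[1] in low_value_codes]
    children = {c[0] for c in lv}          # distinct children with any LV service
    total = sum(c[2] for c in lv)          # total spending on LV services
    oop = sum(c[3] for c in lv)            # out-of-pocket portion
    return len(children) / n_children, total, oop

# Invented claims extract for 4 children
claims = [
    ("k1", "cough_cold_med", 12.0, 6.0),
    ("k1", "strep_test", 25.0, 5.0),
    ("k2", "head_ct_minor_injury", 400.0, 100.0),
    ("k3", "well_child_visit", 120.0, 0.0),
]
share, total, oop = low_value_summary(
    claims, {"cough_cold_med", "head_ct_minor_injury"}, n_children=4)
```

    Here two of the four children received a flagged service, so the share is 0.5; the "narrow" versus "broad" distinction in the paper amounts to choosing a stricter or looser set of codes.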

  5. Reproducibility, Reliability, and Validity of Fuchsin-Based Beads for the Evaluation of Masticatory Performance.

    PubMed

    Sánchez-Ayala, Alfonso; Farias-Neto, Arcelino; Vilanova, Larissa Soares Reis; Costa, Marina Abrantes; Paiva, Ana Clara Soares; Carreiro, Adriana da Fonte Porto; Mestriner-Junior, Wilson

    2016-08-01

    Rehabilitation of masticatory function is inherent to prosthodontics; however, despite the various techniques for evaluating oral comminution, the methodological suitability of these has not been completely studied. The aim of this study was to determine the reproducibility, reliability, and validity of a test food based on fuchsin beads for masticatory function assessment. Masticatory performance was evaluated in 20 dentate subjects (mean age, 23.3 years) using two kinds of test foods and methods: fuchsin beads and ultraviolet-visible spectrophotometry, and silicone cubes and multiple sieving as gold standard. Three examiners conducted five masticatory performance trials with each test food. Reproducibility of the results from both test foods was separately assessed using the intraclass correlation coefficient (ICC). Reliability and validity of fuchsin bead data were measured by comparing the average mean of absolute differences and the measurement means, respectively, regarding silicone cube data using the paired Student's t-test (α = 0.05). Intraexaminer and interexaminer ICC for the fuchsin bead values were 0.65 and 0.76 (p < 0.001), respectively; those for the silicone cubes values were 0.93 and 0.91 (p < 0.001), respectively. Reliability revealed intraexaminer (p < 0.001) and interexaminer (p < 0.05) differences between the average means of absolute differences of each test foods. Validity also showed differences between the measurement means of each test food (p < 0.001). Intra- and interexaminer reproducibility of the test food based on fuchsin beads for evaluation of masticatory performance were good and excellent, respectively; however, the reliability and validity were low, because fuchsin beads do not measure the grinding capacity of masticatory function as silicone cubes do; instead, this test food describes the crushing potential of teeth. 
Thus, the two kinds of test foods evaluate different properties of masticatory capacity, confirming fuchsin beads as a useful tool for this purpose. © 2015 by the American College of Prosthodontists.

  6. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...

  7. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an...

  8. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moura, Eduardo S., E-mail: emoura@wisc.edu; Micka, John A.; Hammer, Cliff G.

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in a heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from a HDR {sup 192}Ir source: TLD-100™, Gafchromic{sup ®} EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes a grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine-Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements were performed. Results: Differences in the relative response as high as 11.5% were found from the homogeneous setup when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters.
The TPS relative differences with the Acuros™ algorithm were similar in both experimental and simulated setups. The discrepancy between the BrachyVision™, Acuros™, and TG-43 dose responses in the phantom described by this work exceeded 12% for certain setups. Conclusions: The results derived from the phantom measurements show good agreement with the simulations and TPS calculations, using Acuros™ algorithm. Differences in the dose responses were evident in the experimental results when heterogeneous materials were introduced. These measurements prove the usefulness of the heterogeneous phantom for verification of HDR treatment planning systems based on model-based dose calculation algorithms.

  10. Ecosystem-based management and the wealth of ecosystems.

    PubMed

    Yun, Seong Do; Hutniczak, Barbara; Abbott, Joshua K; Fenichel, Eli P

    2017-06-20

    We merge inclusive wealth theory with ecosystem-based management (EBM) to address two challenges in the science of sustainable management of ecosystems. First, we generalize natural capital theory to approximate realized shadow prices for multiple interacting natural capital stocks (species) making up an ecosystem. These prices enable ecosystem components to be better included in wealth-based sustainability measures. We show that ecosystems are best envisioned as portfolios of assets, where the portfolio's performance depends on the performance of the underlying assets influenced by their interactions. Second, changes in ecosystem wealth provide an attractive headline index for EBM, regardless of whether ecosystem wealth is ultimately included in a broader wealth index. We apply our approach to the Baltic Sea ecosystem, focusing on the interacting community of three commercially important fish species: cod, herring, and sprat. Our results incorporate supporting services embodied in the shadow price of a species through its trophic interactions. Prey fish have greater shadow prices than expected based on market value, and predatory fish have lower shadow prices than expected based on market value. These results arise because correctly measured shadow prices reflect interdependence and limits to substitution. We project that ecosystem wealth in the Baltic Sea fishery ecosystem generally increases conditional on the EBM-inspired multispecies maximum sustainable yield management beginning in 2017, whereas continuing the current single-species management generally results in declining wealth.
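    The headline index described above reduces, in its simplest form, to a shadow-price-weighted sum of natural capital stocks. A minimal sketch under that simplification follows; the species prices and biomasses are hypothetical illustrations, not the study's estimated values.

```python
def wealth_index(shadow_prices, stocks):
    """Aggregate ecosystem wealth V = sum_i p_i * s_i from per-species
    shadow prices p_i and capital stocks s_i."""
    assert shadow_prices.keys() == stocks.keys()
    return sum(shadow_prices[s] * stocks[s] for s in stocks)

# Hypothetical Baltic-style portfolio: prey species can carry shadow
# prices above their market value because of trophic support to predators.
prices = {"cod": 0.8, "herring": 1.4, "sprat": 1.1}        # shadow prices
biomass = {"cod": 100.0, "herring": 250.0, "sprat": 300.0}  # stocks

v = wealth_index(prices, biomass)
```

    Tracking the change in this index over time, rather than its level, is what the abstract proposes as a sustainability signal for EBM.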

  11. Ecosystem-based management and the wealth of ecosystems

    PubMed Central

    Yun, Seong Do; Hutniczak, Barbara; Abbott, Joshua K.; Fenichel, Eli P.

    2017-01-01

    We merge inclusive wealth theory with ecosystem-based management (EBM) to address two challenges in the science of sustainable management of ecosystems. First, we generalize natural capital theory to approximate realized shadow prices for multiple interacting natural capital stocks (species) making up an ecosystem. These prices enable ecosystem components to be better included in wealth-based sustainability measures. We show that ecosystems are best envisioned as portfolios of assets, where the portfolio’s performance depends on the performance of the underlying assets influenced by their interactions. Second, changes in ecosystem wealth provide an attractive headline index for EBM, regardless of whether ecosystem wealth is ultimately included in a broader wealth index. We apply our approach to the Baltic Sea ecosystem, focusing on the interacting community of three commercially important fish species: cod, herring, and sprat. Our results incorporate supporting services embodied in the shadow price of a species through its trophic interactions. Prey fish have greater shadow prices than expected based on market value, and predatory fish have lower shadow prices than expected based on market value. These results arise because correctly measured shadow prices reflect interdependence and limits to substitution. We project that ecosystem wealth in the Baltic Sea fishery ecosystem generally increases conditional on the EBM-inspired multispecies maximum sustainable yield management beginning in 2017, whereas continuing the current single-species management generally results in declining wealth. PMID:28588145

  12. A real-time measurement system for parameters of live biology metabolism process with fiber optics

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Zhao, Hui; Liu, Zemin; Cheng, Jinke; Cai, Rong

    2010-08-01

    Energy metabolism is one of the basic life activities of cells, during which lactate, O2, and CO2 are released into the extracellular environment. By monitoring the quantities of these parameters, mitochondrial performance can be assessed. A continuous measurement system for O2 concentration, CO2 concentration, and pH value is introduced in this paper. The system is made up of several small-sized fiber-optic biosensors corresponding to the container. The setup of the system and the measurement principle for the several parameters are explained. The setup of the fiber pH sensor, based on the principle of light absorption, is also introduced in detail, and some experimental results are given. From the results we can see that the system can measure the pH value with a precision suitable for cell cultivation. The linearity and repeatability accuracies are 3.6% and 6.7%, respectively, which can fulfill the measurement task.

  13. Paving the way for the use of the SDQ in economic evaluations of school-based population health interventions: an empirical analysis of the external validity of SDQ mapping algorithms to the CHU9D in an educational setting.

    PubMed

    Boyer, Nicole R S; Miller, Sarah; Connolly, Paul; McIntosh, Emma

    2016-04-01

    The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference-based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions. Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t-tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms. Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1). Utility values obtained from the original tariff, alternative tariff, and mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values except the original tariff and five SDQ subscale algorithm. Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would contribute to this research.

  14. Rotor design for maneuver performance

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Schrage, Daniel

    1986-01-01

    A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. This method identifies parameter values which result in minimum time quickly because of the inherent simplicity of the rotor performance model used. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.

  15. High-rate x-ray spectroscopy in mammography with a CdTe detector: a digital pulse processing approach.

    PubMed

    Abbene, L; Gerardi, G; Principato, F; Del Sordo, S; Ienzi, R; Raso, G

    2010-12-01

    Direct measurement of mammographic x-ray spectra under clinical conditions is a difficult task due to the high fluence rate of the x-ray beams as well as the limits in the development of high resolution detection systems in a high counting rate environment. In this work we present a detection system, based on a CdTe detector and an innovative digital pulse processing (DPP) system, for high-rate x-ray spectroscopy in mammography. The DPP system performs a digital pile-up inspection and a digital pulse height analysis of the detector signals, digitized through a 14-bit, 100 MHz digitizer, for x-ray spectroscopy even at high photon counting rates. We investigated the response of the digital detection system both at low (150 cps) and at high photon counting rates (up to 500 kcps) by using monoenergetic x-ray sources and a nonclinical molybdenum anode x-ray tube. Clinical molybdenum x-ray spectrum measurements were also performed by using a pinhole collimator and a custom alignment device. The detection system shows excellent performance up to 512 kcps with an energy resolution of 4.08% FWHM at 22.1 keV. Despite the high photon counting rate (up to 453 kcps), the molybdenum x-ray spectra, measured under clinical conditions, are characterized by a low number of pile-up events. The agreement between the attenuation curves and the half value layer values, obtained from the measured spectra, simulated spectra, and from the exposure values directly measured with an ionization chamber, also shows the accuracy of the measurements. These results make the proposed detection system a very attractive tool for both laboratory research and advanced quality controls in mammography.

  16. Application of online measures to monitor and evaluate multiplatform fusion performance

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kowalski, Charlene; Klamer, Dale M.

    1999-07-01

    A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate development of an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents. Entropy measures the quality of processed information such as localization, classification, and ambiguity in measurement-to-track association. Lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. The measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data-gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to its own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is again computed and the agent assesses its estimation process and refines it accordingly.
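    The entropy score at the heart of this approach can be sketched for a target-classification distribution as follows; the probability values are illustrative, not outputs of the system described.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a classification/association distribution;
    lower values mean less uncertainty about the target."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical track classification before and after fusing an offboard
# measurement: the distribution sharpens, so entropy drops.
before = [0.4, 0.3, 0.2, 0.1]
after = [0.85, 0.10, 0.03, 0.02]

gain = entropy(before) - entropy(after)  # positive => the new data helped
```

    An agent comparing the entropy gain predicted from its estimate of the offboard measurement with the gain actually achieved has exactly the two data-gathering functions the abstract describes.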

  17. Tolerancing aspheres based on manufacturing knowledge

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Kokot, S.; Fuchs, U.

    2017-10-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions can be assumed, these methods all rely on statistics, which usually means several hundred or several thousand systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.
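    The Monte Carlo baseline that this abstract argues is unreliable for small batches can be sketched as follows. The merit function, tolerance distributions, and performance budget below are toy assumptions for illustration, not asphericon's process or any design package's model.

```python
import random

random.seed(1)

def merit(radius_err, thickness_err):
    # Toy sensitivity model: quadratic penalty on both perturbations.
    return 100 * radius_err ** 2 + 400 * thickness_err ** 2

# Draw N virtual systems with perturbed parameters and count how many
# stay within an assumed performance budget.
N = 10_000
fails = 0
for _ in range(N):
    r = random.gauss(0, 0.01)            # radius tolerance, assumed normal
    th = random.uniform(-0.005, 0.005)   # thickness tolerance, assumed uniform
    if merit(r, th) > 0.05:              # assumed performance budget
        fails += 1

yield_estimate = 1 - fails / N
```

    The point of the abstract is that replacing the assumed distributions above with measured as-built distributions from production data makes the same procedure far more trustworthy for small batches.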

  18. Tolerancing aspheres based on manufacturing statistics

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Möhl, A.; Fuchs, U.

    2017-11-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions can be assumed, these methods all rely on statistics, which usually means several hundred or several thousand systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.

  19. Value redefined for inflammatory bowel disease patients: a choice-based conjoint analysis of patients' preferences.

    PubMed

    van Deen, Welmoed K; Nguyen, Dominic; Duran, Natalie E; Kane, Ellen; van Oijen, Martijn G H; Hommes, Daniel W

    2017-02-01

    Value-based healthcare is an upcoming field. The core idea is to evaluate care based on achieved outcomes divided by the costs. Unfortunately, the optimal way to evaluate outcomes is ill-defined. In this study, we aim to develop a single, preference-based outcome metric that can be used to quantify overall health value in inflammatory bowel disease (IBD). IBD patients filled out a choice-based conjoint (CBC) questionnaire in which patients chose preferable outcome scenarios with different levels of disease control (DC), quality of life (QoL), and productivity (Pr). A CBC analysis was performed to estimate the relative value of DC, QoL, and Pr. A patient-centered composite score was developed that was weighted based on the stated preferences. We included 210 IBD patients. Large differences in stated preferences were observed. Increases from low to intermediate outcome levels were valued more than increases from intermediate to high outcome levels. Overall, QoL was more important to patients than DC or Pr. Individual outcome scores were calculated based on the stated preferences. In patients with active disease, this score was significantly different from a score not weighted by patient preferences. We showed the feasibility of creating a single outcome metric in IBD that incorporates patients' values using a CBC. Because this metric changes significantly when weighted according to patients' values, we propose that success in healthcare should be measured accordingly.
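    Once the CBC analysis yields relative values for the three domains, the composite score is a preference-weighted sum. A minimal sketch follows; the weights and patient scores are hypothetical (reflecting only the qualitative finding that QoL mattered most), not the study's estimated part-worths.

```python
# Hypothetical preference weights summing to 1, with quality of life (QoL)
# weighted highest as the study found; DC = disease control, Pr = productivity.
WEIGHTS = {"QoL": 0.5, "DC": 0.3, "Pr": 0.2}

def composite(outcomes):
    """Preference-weighted composite outcome score; each domain scored 0-1."""
    return sum(WEIGHTS[k] * outcomes[k] for k in WEIGHTS)

patient = {"QoL": 0.8, "DC": 0.6, "Pr": 0.7}
score = composite(patient)
```

    The abstract's key observation is that replacing these weights with equal (unweighted) ones changes the score significantly for patients with active disease, which is why the preference weighting matters.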

  20. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles

    PubMed Central

    Hsu, Bing-Cheng

    2018-01-01

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme. PMID:29757940

  1. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    PubMed

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.

  2. Statistical analysis of electromagnetic radiation measurements in the vicinity of GSM/UMTS base station antenna masts.

    PubMed

    Koprivica, Mladen; Neskovic, Natasa; Neskovic, Aleksandar; Paunovic, George

    2014-01-01

    As a result of the dense installation of public mobile base stations, additional electromagnetic radiation occurs in the living environment. In order to determine the level of radio-frequency radiation generated by base stations, extensive electromagnetic field strength measurements were carried out for 664 base station locations. Base station locations were classified into three categories: indoor, masts, and locations with installations on buildings. Given the large percentage (47 %) of sites with antenna masts, a detailed analysis of this location category was performed, and the measurement results are presented. It was concluded that the total electric field strength in the vicinity of base station antenna masts in no case exceeded 10 V m(-1), which is well below the International Commission on Non-Ionizing Radiation Protection reference levels. At horizontal distances >50 m from the mast bottom, the median and maximum values were <1 and 2 V m(-1), respectively.

  3. PIN architecture for ultrasensitive organic thin film photoconductors.

    PubMed

    Jin, Zhiwen; Wang, Jizheng

    2014-06-17

    Organic thin film photoconductors (OTFPs) are expected to have wide applications in the fields of optical communications, artificial vision, and biomedical sensing due to their great advantages of high flexibility and low-cost large-area fabrication. However, their performance is not yet satisfactory: the responsivity (R), the parameter that measures the sensitivity of a photoconductor to light, is below 1 AW(-1). We believe this poor performance results from an intrinsic self-limiting effect of the present bare-blend device structure. Here we design a PIN architecture for OTFPs; the PIN device exhibits a significantly improved R value of 96.5 AW(-1). The PIN architecture and the performance it demonstrates here represent an important step in the development of OTFPs.

  4. PIN architecture for ultrasensitive organic thin film photoconductors

    PubMed Central

    Jin, Zhiwen; Wang, Jizheng

    2014-01-01

    Organic thin film photoconductors (OTFPs) are expected to have wide applications in the fields of optical communications, artificial vision, and biomedical sensing due to their great advantages of high flexibility and low-cost large-area fabrication. However, their performance is not yet satisfactory: the responsivity (R), the parameter that measures the sensitivity of a photoconductor to light, is below 1 AW−1. We believe this poor performance results from an intrinsic self-limiting effect of the present bare-blend device structure. Here we design a PIN architecture for OTFPs; the PIN device exhibits a significantly improved R value of 96.5 AW−1. The PIN architecture and the performance it demonstrates here represent an important step in the development of OTFPs. PMID:24936952

  5. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty in the arbitrary manner of the selection of an effective singular value weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. What is more, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to put feature information of a transient flaw echo signal in local field, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
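    The short-time SVD indicator described above can be sketched in a few lines: slide overlapping windows over the signal, embed each window in a Hankel matrix, and record its largest singular value. The window size, step, Hankel depth, and synthetic flaw echo below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def msi(signal, win=32, step=8, rows=8):
    """Maximum singular value of a Hankel embedding of each short-time window."""
    out = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        # Hankel embedding: rows are shifted copies of the segment
        H = np.array([seg[i:i + win - rows + 1] for i in range(rows)])
        out.append(np.linalg.svd(H, compute_uv=False)[0])
    return np.array(out)

# Synthetic noisy A-scan with a flaw echo centered at sample 256
rng = np.random.default_rng(0)
t = np.arange(512)
echo = np.exp(-((t - 256) / 10.0) ** 2) * np.sin(0.8 * t)
values = msi(0.3 * rng.standard_normal(512) + echo)
peak_start = int(np.argmax(values)) * 8  # window start nearest the flaw
```

    The coherent echo makes its Hankel matrix nearly low-rank with a large leading singular value, so the indicator peaks at the flaw location even though the echo is buried in noise, and no per-signal choice of "effective" singular values is needed.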

  6. Timing performance of a self-cancelling turn-signal mechanism in motorcycles based on the ATMega328P microcontroller

    NASA Astrophysics Data System (ADS)

    Nurbuwat, Adzin Kondo; Eryandi, Kholid Yusuf; Estriyanto, Yuyun; Widiastuti, Indah; Pambudi, Nugroho Agung

    2018-02-01

    The objective of this study is to measure the timing performance of a self-cancelling turn-signal mechanism based on the ATMega328P microcontroller at low-speed and high-speed treatments on motorcycles commonly used in Indonesia. Timing measurements were made by comparing the self-cancelling turn signal against the standard motorcycle turn-signal time. Low-speed measurements were performed at 15 km/h, 20 km/h, and 25 km/h on a U-turn test trajectory, with the limit of the steering-wheel turning angle at the potentiometer set to 3°. High-speed measurements were performed at 30 km/h, 40 km/h, 50 km/h, and 60 km/h on an L-turn test track, with the tilt (roll) angle read by an L3G4200D gyroscope sensor. Each speed test was repeated three times. The standard time is the reference for self-cancelling turn-signal performance; it was 15.68 s, 11.96 s, and 9.34 s at low speed and 4.63 s, 4.06 s, 3.61 s, and 3.13 s at high speed. The self-cancelling turn signal recorded 16.10 s, 12.42 s, and 10.24 s at low speed and 5.18 s, 4.51 s, 3.73 s, and 3.21 s at high speed. At 15 km/h the motorcycle turns less stably, making testing more difficult. Small time deviations indicate that the device works well; the largest deviations were 0.9 s at low speed and 0.55 s at high speed. At low speed, the highest deviation occurred in the 25 km/h test because the motorcycle had begun to lean, slowing the reading of the steering movement. At higher speeds, the response lags because of rapid sensor readings of the tilt when turning fast.
The timing performance of the self-cancelling turn signal decreases as the motorcycle's turning behaviour shifts with speed from steering-angle-based turns to tilt-angle-based turns, or vice versa.

  7. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This enables them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypotheses that required them to solve the problems using the confidence interval, traditional, and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems, and it is listed as one of the difficult concepts for students to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on the instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches, even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, for reasons that include their lack of understanding of confidence intervals and probability values.
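    Winsteps itself is proprietary, but the dichotomous Rasch model underlying it is a one-parameter logistic: the probability of a correct response depends only on the gap between person ability and item difficulty, both in logits. A minimal sketch, with hypothetical ability and difficulty values:

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person of ability theta
    answers an item of difficulty b correctly (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the success probability is exactly 0.5;
# a hard hypothesis-testing item (b = +2 logits) is unlikely to be solved
# correctly by a mid-ability student (theta = 0).
p_matched = rasch_p(1.0, 1.0)
p_hard = rasch_p(0.0, 2.0)
```

    A Wright map is essentially a plot of the estimated theta and b values on this shared logit scale, which is how items no student answered correctly show up as maximum estimated difficulty measures.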

  8. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
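    A minimal sketch of the core idea, not the authors' general fuser: when three measured parameters must satisfy a linear law x + y = z, the least-violation fusion is the inverse-variance-weighted adjustment that projects the measurements onto the constraint. The measurement values and variances below are hypothetical.

```python
def fuse_linear_law(mx, my, mz, vx, vy, vz):
    """Adjust measurements of (x, y, z), with variances (vx, vy, vz),
    minimally in the weighted least-squares sense so that the physical
    law x + y = z holds exactly (closed-form Lagrange solution)."""
    r = mx + my - mz            # violation of the physical law
    lam = r / (vx + vy + vz)
    return mx - lam * vx, my - lam * vy, mz + lam * vz

# Hypothetical sensors measuring two inflows and their total:
# the raw readings violate the law by 0.2, and the fuse spreads the
# correction in proportion to each sensor's variance.
x, y, z = fuse_linear_law(2.1, 3.0, 4.9, 0.04, 0.04, 0.02)
```

    Nonlinear laws generalize this by linearizing around the measurements or solving the constrained minimization numerically, which is where the smoothness conditions in the abstract come in.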

  9. An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines

    NASA Technical Reports Server (NTRS)

    Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng

    2014-01-01

    We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model that takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.

  10. The effect of dental scaling noise during intravenous sedation on acoustic respiration rate (RRa™).

    PubMed

    Kim, Jung Ho; Chi, Seong In; Kim, Hyun Jeong; Seo, Kwang-Suk

    2018-04-01

    Respiration monitoring is necessary during sedation for dental treatment. Recently, acoustic respiration rate (RRa™), an acoustics-based respiration monitoring method, has been used in addition to auscultation or capnography. The accuracy of this method may be compromised in an environment with excessive noise. This study evaluated whether noise from the ultrasonic scaler affects the performance of RRa in respiratory rate measurement. We analyzed data from 49 volunteers who underwent scaling under intravenous sedation. Clinical tests were divided into preparation, sedation, and scaling periods; respiratory rate was measured at 2-s intervals for 3 min in each period. The missing value ratios of the RRa during each period were measured; correlation analysis and Bland-Altman analysis were performed on respiratory rates measured by RRa and capnogram. The respective missing value ratios for RRa were 5.62%, 8.03%, and 23.95% in the preparation, sedation, and scaling periods, indicating an increased missing value ratio in the scaling period (P < 0.001). Correlation coefficients of the respiratory rate, measured with the two different methods, were 0.692, 0.677, and 0.562 in the respective periods. Mean capnography-RRa biases in Bland-Altman analyses were -0.03, -0.27, and -0.61 in the respective periods (P < 0.001); limits of agreement were -4.84 to 4.45, -4.89 to 4.15, and -6.18 to 4.95 (P < 0.001). The probability of missing respiratory rate values was higher during scaling when RRa was used for measurement. Therefore, the use of RRa alone for respiration monitoring during ultrasonic scaling may not be safe.
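The Bland-Altman quantities reported above (mean bias and limits of agreement) can be computed directly from paired readings. A minimal sketch with illustrative data, not the study's recordings:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 * SD of the
    paired differences) between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired respiratory rates (breaths/min), not the study's data:
capno = [12, 14, 15, 16, 13, 12, 18, 14]
rra   = [12, 15, 15, 17, 12, 13, 18, 15]
bias, (loa_low, loa_high) = bland_altman(capno, rra)
```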

  11. Performance of the libraries in Isfahan University of Medical Sciences based on the EFQM model.

    PubMed

    Karimi, Saeid; Atashpour, Bahareh; Papi, Ahmad; Nouri, Rasul; Hasanzade, Akbar

    2014-01-01

    Performance measurement is inevitable for university libraries. Hence, planning and establishing a constant and up-to-date measurement system is required for libraries, especially university libraries. Preliminary studies and analyses reveal that the EFQM Excellence Model has been efficient, and the administrative reform program has focused on the implementation of this model. Therefore, on the basis of these facts as well as the need for a measurement system, the researchers measured the performance of libraries in schools and hospitals supported by Isfahan University of Medical Sciences, using the EFQM Organizational Excellence Model. This descriptive research study was carried out by a cross-sectional survey method in 2011. It included the librarians and library directors of Isfahan University of Medical Sciences (70 people). The validity of the instrument was established by specialists in the fields of Management and Library Science. To measure the reliability of the questionnaire, the Cronbach's alpha coefficient was computed (0.93). The t-test, ANOVA, and Spearman's rank correlation coefficient were used for the analyses, and the data were analyzed with SPSS. Data analysis revealed that, among the nine dimensions of the performance measurement for the libraries under study, the highest score was 65.3% for the leadership dimension and the lowest scores were 55.1% for the people results and 55.1% for the society results. In general, based on the nine-criterion EFQM model, the average level of all dimensions was assessed as being in good agreement with normal values. However, compared with the other results, the people results and society results criteria were poor. It is recommended that an expert committee on the people and society results criteria be formed and that conferences and training courses be organized to improve these aspects.
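The reported reliability figure is a Cronbach's alpha. A minimal sketch of the standard formula, using made-up questionnaire scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 4-respondent, 3-item data (not the study's questionnaire):
data = [[3, 4, 3],
        [4, 5, 5],
        [2, 3, 2],
        [5, 5, 4]]
alpha = cronbach_alpha(data)
```

Values near 1 (such as the study's 0.93) indicate that the items vary together, i.e. internally consistent measurement.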

  12. Performance of the libraries in Isfahan University of Medical Sciences based on the EFQM model

    PubMed Central

    Karimi, Saeid; Atashpour, Bahareh; Papi, Ahmad; Nouri, Rasul; Hasanzade, Akbar

    2014-01-01

    Introduction: Performance measurement is inevitable for university libraries. Hence, planning and establishing a constant and up-to-date measurement system is required for libraries, especially university libraries. Preliminary studies and analyses reveal that the EFQM Excellence Model has been efficient, and the administrative reform program has focused on the implementation of this model. Therefore, on the basis of these facts as well as the need for a measurement system, the researchers measured the performance of libraries in schools and hospitals supported by Isfahan University of Medical Sciences, using the EFQM Organizational Excellence Model. Materials and Methods: This descriptive research study was carried out by a cross-sectional survey method in 2011. It included the librarians and library directors of Isfahan University of Medical Sciences (70 people). The validity of the instrument was established by specialists in the fields of Management and Library Science. To measure the reliability of the questionnaire, the Cronbach's alpha coefficient was computed (0.93). The t-test, ANOVA, and Spearman's rank correlation coefficient were used for the analyses, and the data were analyzed with SPSS. Results: Data analysis revealed that, among the nine dimensions of the performance measurement for the libraries under study, the highest score was 65.3% for the leadership dimension and the lowest scores were 55.1% for the people results and 55.1% for the society results. Conclusion: In general, based on the nine-criterion EFQM model, the average level of all dimensions was assessed as being in good agreement with normal values. However, compared with the other results, the people results and society results criteria were poor. It is recommended that an expert committee on the people and society results criteria be formed and that conferences and training courses be organized to improve these aspects. PMID:25540795

  13. Pulse oximeter based mobile biotelemetry application.

    PubMed

    Işik, Ali Hakan; Güler, Inan

    2012-01-01

    Quality and features of tele-homecare are improved by information and communication technologies. In this context, a pulse oximeter-based mobile biotelemetry application is developed. With this application, patients can measure their own oxygen saturation and heart rate at home using a Bluetooth pulse oximeter. The Bluetooth virtual serial port protocol is used to send the test results from the pulse oximeter to the smart phone. These data are converted into XML and transmitted to a remote web server database via the smart phone. GPRS, WLAN, or 3G can be used to transmit the data. A rule-based algorithm is used in the decision-making process. By default, the threshold value for oxygen saturation is 80, and the lower and upper heart rate thresholds are 40 and 150, respectively. If the patient's heart rate is outside the threshold values or the oxygen saturation is below its threshold, an emergency SMS is sent to the doctor, who can then direct an ambulance to the patient. The doctor can change these threshold values for different patients. The conversion of the evaluated data into the SMS XML template is done on the web server. Another important component of the application is web-based monitoring of pulse oximeter data. The web page provides access to all patient data, so doctors can follow their patients and send e-mails related to the evaluation of the disease. In addition, patients can follow their own data on this page. Eight patients have taken part in the procedure. It is believed that the developed application will facilitate pulse oximeter-based measurement from anywhere and at any time.
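The rule-based alerting logic described above reduces to a simple threshold check. A sketch assuming the stated defaults (SpO2 below 80, heart rate outside 40-150); the function name and structure are my own, not the application's code:

```python
def needs_emergency_sms(spo2, heart_rate, spo2_min=80, hr_min=40, hr_max=150):
    """Rule-based check with the described default thresholds: alert when
    oxygen saturation drops below 80 or heart rate leaves the 40-150 range.
    (In the application the doctor can adjust these per patient.)"""
    return spo2 < spo2_min or not (hr_min <= heart_rate <= hr_max)

alerts = [needs_emergency_sms(95, 72),    # normal reading
          needs_emergency_sms(76, 90),    # low oxygen saturation
          needs_emergency_sms(97, 160)]   # heart rate above threshold
```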

  14. ARC-VM: An architecture real options complexity-based valuation methodology for military systems-of-systems acquisitions

    NASA Astrophysics Data System (ADS)

    Domercant, Jean Charles

    The combination of today's national security environment and mandated acquisition policies makes it necessary for military systems to interoperate with each other to greater degrees. This growing interdependency results in complex Systems-of-Systems (SoS) that only continue to grow in complexity to meet evolving capability needs. Thus, timely and affordable acquisition becomes more difficult, especially in the face of mounting budgetary pressures. To counter this, architecting principles must be applied to SoS design. The research objective is to develop an Architecture Real Options Complexity-Based Valuation Methodology (ARC-VM) suitable for acquisition-level decision making, where there is a stated desire for more informed tradeoffs between cost, schedule, and performance during the early phases of design. First, a framework is introduced to measure architecture complexity as it directly relates to military SoS. Development of the framework draws upon a diverse set of disciplines, including Complexity Science, software architecting, measurement theory, and utility theory. Next, a Real Options based valuation strategy is developed using techniques established for financial stock options that have recently been adapted for use in business and engineering decisions. The derived complexity measure provides architects with an objective measure of complexity that focuses on relevant complex system attributes. These attributes are related to the organization and distribution of SoS functionality and the sharing and processing of resources. The use of Real Options provides the necessary conceptual and visual framework to quantifiably and traceably combine measured architecture complexity, time-valued performance levels, as well as programmatic risks and uncertainties. An example suppression of enemy air defenses (SEAD) capability demonstrates the development and usefulness of the resulting architecture complexity & Real Options based valuation methodology. 
Different portfolios of candidate system types are used to generate an array of architecture alternatives that are then evaluated using an engagement model. This performance data is combined with both measured architecture complexity and programmatic data to assign an acquisition value to each alternative. This proves useful when selecting alternatives most likely to meet current and future capability needs.

  15. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
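For context, the estimator being tuned is a standard linear Kalman filter. The sketch below shows one generic textbook measurement update for an underdetermined case (more states than sensors); it is not the paper's tuner-selection routine:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update of a linear Kalman filter (generic form)."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # state estimate update
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new

# Two unknown health parameters observed through a single sensor
# (underdetermined: the filter can only correct along H's row space).
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 1.0]])
R = np.array([[0.1]])
z = np.array([1.0])
x, P = kalman_update(x, P, z, H, R)
```

Because the single measurement cannot separate the two parameters, the choice of tuning parameters (the paper's subject) determines how the correction is distributed among them.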

  16. A balanced scorecard approach in assessing IT value in healthcare sector: an empirical examination.

    PubMed

    Wu, Ing-Long; Kuo, Yi-Zu

    2012-12-01

    The healthcare sector is human-based and knowledge-intensive. Massive IT investments are necessary to maintain competitiveness in this sector, and the justification of IT investments is a major concern of senior management. Empirical studies examining IT value have found inconclusive results, with little or no improvement in productivity, and little research has been conducted in the healthcare sector. The balanced scorecard (BSC) strikes a balance between financial and non-financial measures and has been applied in evaluating organization-based performance. Moreover, healthcare organizations often aim their performance goals at customer satisfaction in addition to financial performance. This research thus proposed a new hierarchical structure for the BSC, placing both finance and customer at the top, internal process next, and learning and growth at the bottom. Empirical examination confirmed the importance of the new BSC structure in assessing IT investments. Learning and growth is the initial driver for reaching both customer and financial performance through the mediator of internal process. This can provide deep insight into effectively managing IT resources in hospitals.

  17. Autonomy to health care professionals as a vehicle for value-based health care? Results of a quasi-experiment in hospital governance.

    PubMed

    Larsen, Kristian Nørgaard; Kristensen, Søren Rud; Søgaard, Rikke

    2018-01-01

    Health care systems increasingly aim to create value for money by simultaneous incentivizing of quality along with classical goals such as activity increase and cost containment. It has recently been suggested that letting health care professionals choose the performance metrics on which they are evaluated may improve value of care by facilitating greater employee initiative, especially in the quality domain. There is a risk that this strategy leads to loss of performance as measured by the classical goals, if these goals are not prioritized by health care professionals. In this study we investigate the performance of eight hospital departments in the second largest region of Denmark that were delegated the authority to choose their own performance focus during a three-year test period from 2013 to 2016. The usual activity-based remuneration was suspended and departments were instructed to keep their global budgets and maintain activity levels, while managing according to their newly chosen performance focuses. Our analysis is based on monthly observations from two years before to three years after delegation. We collected data for 32 new performance indicators chosen by hospital department managements; 11 new performance indicators chosen by a centre management under which 5 of the departments were organised; and 3 classical indicators of priority to the central administration (activity, productivity, and cost containment). Interrupted time series analysis is used to estimate the effect of delegation on these indicators. We find no evidence that this particular proposal for giving health care professionals greater autonomy leads to consistent quality improvements but, on the other hand, also no consistent evidence of harm to the classical goals. Future studies could consider alternative possibilities to create greater autonomy for hospital departments. Copyright © 2017 Elsevier Ltd. All rights reserved.
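Interrupted time series analysis of this kind is commonly implemented as segmented regression with level- and slope-change terms at the intervention date. A minimal sketch on synthetic data (not the Danish hospital series):

```python
import numpy as np

def its_fit(y, t, t0):
    """Segmented regression for an interrupted time series:
    y = b0 + b1*t + b2*post + b3*(t - t0)*post, where post = 1[t >= t0],
    so b2 is the level change and b3 the slope change at the intervention."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic monthly indicator with a level jump of 2.0 at month 24:
t = np.arange(60)
y = 10.0 + 0.1 * t + 2.0 * (t >= 24)
beta = its_fit(y, t, t0=24)  # [intercept, pre-slope, level change, slope change]
```

The estimated level-change and slope-change coefficients are the "effect of delegation" quantities an analysis like the one described would report for each indicator.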

  18. NCTR using a polarization-agile coherent radar system

    NASA Astrophysics Data System (ADS)

    Walton, E. K.; Moffatt, D. L.; Garber, F. D.; Kamis, A.; Lai, C. Y.

    1986-01-01

    This report describes the results of the first year of a research project performed by the Ohio State University ElectroScience Laboratory (OSU/ESL) for the Naval Weapons Center (NWC). The goal of this project is to explore the use of the polarization properties of the signal scattered from a radar target for the purpose of radar target identification. Various radar target identification algorithms were applied to the case of a full polarization coherent radar system, and were tested using a specific data base and noise model. The data base used to test the performance of the radar target identification algorithms developed here is a unique set of measurements made on scale models of aircraft. Measurements were made using the OSU/ESL Compact Radar Measurement Range. The range was operated in a broad-band (1-12 GHz) mode and the full polarization matrix was measured. Calibrated values (amplitude and phase) of the RCS for the three polarization states were thus available. The polarization states are listed below.

  19. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on local file system; (2) pushing pixel arrays from image files into a single HDF5 file on local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis on the cause of performance advantages/disadvantages of each back-end to train deep neural networks. We envision the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
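The many-small-files versus packed-storage contrast can be reproduced in miniature. A hypothetical micro-benchmark (not the study's Caffe/LMDB setup) comparing per-file reads against sequential fixed-size reads from one packed file:

```python
import os
import tempfile
import time

# Build 200 small "image" files and one packed file holding the same bytes.
tmp = tempfile.mkdtemp()
blob = os.urandom(1024)
paths = []
for i in range(200):
    p = os.path.join(tmp, f"img_{i}.bin")
    with open(p, "wb") as f:
        f.write(blob)
    paths.append(p)
packed = os.path.join(tmp, "packed.bin")
with open(packed, "wb") as f:
    for _ in range(200):
        f.write(blob)

# Pattern 1: one open/read/close per image (many small files).
start = time.perf_counter()
for p in paths:
    with open(p, "rb") as f:
        f.read()
t_files = time.perf_counter() - start

# Pattern 2: sequential fixed-size records from a single packed file,
# the access pattern a key-value store with sequential keys approximates.
start = time.perf_counter()
with open(packed, "rb") as f:
    records = [f.read(1024) for _ in range(200)]
t_packed = time.perf_counter() - start
```

The per-file pattern pays metadata-lookup and open/close overhead on every record, which is the mechanism behind the 17x slowdown the study reports at ImageNet scale.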

  20. Conception, fabrication and characterization of a silicon based MEMS inertial switch with a threshold value of 5 g

    NASA Astrophysics Data System (ADS)

    Zhang, Fengtian; Wang, Chao; Yuan, Mingquan; Tang, Bin; Xiong, Zhuang

    2017-12-01

    Most of the MEMS inertial switches developed in recent years are intended for shock and impact sensing with a threshold value above 50 g. In order to follow the requirement of detecting linear acceleration signal at low-g level, a silicon based MEMS inertial switch with a threshold value of 5 g was designed, fabricated and characterized. The switch consisted of a large proof mass, supported by circular spiral springs. An analytical model of the structure stiffness of the proposed switch was derived and verified by finite-element simulation. The structure fabrication was based on a customized double-buried layer silicon-on-insulator wafer and encapsulated by glass wafers. The centrifugal experiment and nanoindentation experiment were performed to measure the threshold value as well as the structure stiffness. The actual threshold values were measured to be 0.1-0.3 g lower than the pre-designed value of 5 g due to the dimension loss during non-contact lithography processing. Concerning the reliability assessment, a series of environmental experiments were conducted and the switches remained operational without excessive errors. However, both the random vibration and the shock tests indicate that the metal particles generated during collision of contact parts might affect the contact reliability and long-time stability. According to the conclusion reached in this report, an attentive study on switch contact behavior should be included in future research.

  1. Comparison of bedside screening methods for frailty assessment in older adult trauma patients in the emergency department.

    PubMed

    Shah, Sachita P; Penn, Kevin; Kaplan, Stephen J; Vrablik, Michael; Jablonowski, Karl; Pham, Tam N; Reed, May J

    2018-04-14

    Frailty is linked to poor outcomes in older patients. We prospectively compared the utility of the picture-based Clinical Frailty Scale (CFS9), clinical assessments, and ultrasound muscle measurements against the reference FRAIL scale in older adult trauma patients in the emergency department (ED). We recruited a convenience sample of adults 65 years or older with blunt trauma and injury severity scores <9. We queried subjects (or surrogates) on the FRAIL scale, and compared this to: physician-based and subject/surrogate-based CFS9; mid-upper arm circumference (MUAC) and grip strength; and ultrasound (US) measures of muscle thickness (limbs and abdominal wall). We derived optimal diagnostic thresholds and calculated performance metrics for each comparison using sensitivity, specificity, predictive values, and area under receiver operating characteristic curves (AUROC). Fifteen of 65 patients were frail by the FRAIL scale (23%). CFS9 performed well when assessed by subject/surrogate (AUROC 0.91 [95% CI 0.84-0.98]) or physician (AUROC 0.77 [95% CI 0.63-0.91]). Optimal thresholds for both physician and subject/surrogate were CFS9 of 4 or greater. If both physician and subject/surrogate provided scores <4, sensitivity and negative predictive value were 90.0% (54.1-99.5%) and 95.0% (73.1-99.7%). Grip strength and MUAC were not predictors. US measures that combined biceps and quadriceps thickness showed an AUROC of 0.75 compared to the reference standard. The ED needs rapid, validated tools to screen for frailty. The CFS9 has excellent negative predictive value in ruling out frailty. Ultrasound of combined biceps and quadriceps has modest concordance as an alternative in trauma patients who cannot provide a history. Copyright © 2017 Elsevier Inc. All rights reserved.
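The screening statistics quoted above follow from a 2x2 confusion table. A sketch with illustrative cell counts (the 15-of-65 prevalence matches the cohort, but the individual cells are assumed, not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts for 65 patients, 15 frail by the reference scale:
m = diagnostic_metrics(tp=13, fp=5, fn=2, tn=45)
```

A high NPV, as reported for the CFS9, means a negative screen makes true frailty unlikely, which is why the scale is useful for ruling frailty out.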

  2. Real-time measurements of the concentration and isotope composition of atmospheric and volcanic CO2 at Mount Etna (Italy)

    NASA Astrophysics Data System (ADS)

    Rizzo, Andrea Luca; Jost, Hans-Jürg; Caracausi, Antonio; Paonita, Antonio; Liotta, Marcello; Martelli, Mauro

    2014-04-01

    We present unprecedented data of real-time measurements of the concentration and isotope composition of CO2 in air and in fumarole-plume gases collected in 2013 during two campaigns at Mount Etna volcano, which were made using a laser-based isotope ratio infrared spectrometer. We performed approximately 360 measurements/h, which allowed calculation of the δ13C values of volcanic CO2. The fumarole gases of Torre del Filosofo (2900 m above sea level) range from -3.24 ± 0.06‰ to -3.71 ± 0.09‰, comparable to isotope ratio mass spectrometry (IRMS) measurements of discrete samples collected on the same dates. Plume gases sampled more than 1 km from the craters show a δ13C = -2.2 ± 0.4‰, in agreement with the crater fumarole gases analyzed by IRMS. Measurements performed along ~17 km driving track from Catania to Mount Etna show more negative δ13C values when passing through populated centers due to anthropogenic-derived CO2 inputs (e.g., car exhaust). The reported results demonstrate that this technique may represent an important advancement for volcanic and environmental monitoring.
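The δ13C values are per-mil deviations of the sample's 13C/12C ratio from a reference. A sketch of the delta notation, assuming the VPDB reference ratio of about 0.0111802 (the function and numbers are illustrative, not the spectrometer's calibration):

```python
def delta13c(r_sample, r_standard=0.0111802):
    """Delta notation: per-mil deviation of a sample's 13C/12C ratio from a
    reference ratio (the VPDB value ~0.0111802 is assumed here)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample depleted in 13C by 3.24 per mil relative to the standard,
# matching the most enriched fumarole value reported above:
d = delta13c(0.0111802 * (1.0 - 0.00324))
```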

  3. Indirect Determination of the Thermodynamic Temperature of a Gold Fixed-Point Cell

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Girard, F.; Florio, M.

    2010-09-01

    Since the value T90(Au) was fixed on the ITS-90, some determinations of the thermodynamic temperature of the gold point have been performed which form, with other renormalized results of previous measurements by radiation thermometry, the basis for the current best estimate of (T - T90)Au = 39.9 mK as elaborated by the CCT-WG4. Such a value, even if consistent with the behavior of T - T90 differences at lower temperatures, is strongly influenced by the low values of TAu determined with a few radiometric measurements. At INRIM, an independent indirect determination of the thermodynamic temperature of gold was performed by means of a radiation thermometry approach. A fixed-point technique was used to realize approximated thermodynamic scales from the Zn point up to the Cu point. A Si-based standard radiation thermometer working at 900 nm and 950 nm was used. The low uncertainty presently associated with the thermodynamic temperature of fixed points and the accuracy of the INRIM realizations allowed scales with an uncertainty lower than 0.03 K in terms of thermodynamic temperature to be realized. A fixed-point cell filled with gold, 99.999% in purity, was measured, and its freezing temperature was determined by both interpolation and extrapolation. An average TAu = 1337.395 K was found with a combined standard uncertainty of 23 mK. Such a value is 25 mK higher than the presently available value as derived from the CCT-WG4 value of (T - T90)Au = 39.9 mK.

  4. Thermal Energy Harvesting on the Bodily Surfaces of Arms and Legs through a Wearable Thermo-Electric Generator.

    PubMed

    Proto, Antonino; Bibbo, Daniele; Cerny, Martin; Vala, David; Kasik, Vladimir; Peter, Lukas; Conforto, Silvia; Schmid, Maurizio; Penhaker, Marek

    2018-06-13

    This work analyzes the results of measurements of thermal energy harvesting through a wearable thermo-electric generator (TEG) placed on the arms and legs. Four large skin areas were chosen as locations for the placement of the TEGs. In order to place the generator on the body, a specially manufactured band guaranteed proper contact between the skin and TEG. Preliminary measurements were performed to find the value of the resistor load which maximizes the power output. Then, an experimental investigation was conducted to measure the harvested energy while users performed daily activities such as sitting, walking, jogging, and riding a bike. The generated power values were in the range from 5 to 50 μW. Moreover, a preliminary hypothesis based on the obtained results indicates the possibility of using TEGs on the leg for the recognition of locomotion activities. This is due to the rather high and distinct biomechanical work produced by the gastrocnemius muscle when the user is walking rather than jogging or riding a bike. This result reflects a difference between the temperatures associated with the performance of different activities.
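Finding "the resistor load which maximizes the power output" is the classical maximum-power-transfer problem: delivered power peaks when the load resistance equals the source's internal resistance. A sketch with hypothetical TEG values, not the measured device parameters:

```python
def load_power(v_oc, r_int, r_load):
    """Power delivered to a resistive load from a source with open-circuit
    voltage v_oc and internal resistance r_int: P = V^2 * R / (R + r)^2."""
    return v_oc ** 2 * r_load / (r_load + r_int) ** 2

# Hypothetical TEG parameters (illustrative only):
v_oc, r_int = 0.05, 2.0            # 50 mV open-circuit, 2 ohm internal
loads = [0.5, 1.0, 2.0, 4.0, 8.0]  # candidate load resistors (ohm)
powers = [load_power(v_oc, r_int, r) for r in loads]
best_load = loads[powers.index(max(powers))]  # peaks at r_load == r_int
```

Sweeping candidate loads and keeping the one with the highest delivered power mirrors the preliminary measurement step described above.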

  5. Reciprocal relations for transmission coefficients - Theory and application

    NASA Technical Reports Server (NTRS)

    Qu, Jianmin; Achenbach, Jan D.; Roberts, Ronald A.

    1989-01-01

    The authors present a rigorous proof of certain intuitively plausible reciprocal relations for time harmonic plane-wave transmission and reflection at the interface between a fluid and an anisotropic elastic solid. Precise forms of the reciprocity relations for the transmission coefficients and for the transmitted energy fluxes are derived, based on the reciprocity theorem of elastodynamics. It is shown that the reciprocity relations can be used in conjunction with measured values of peak amplitudes for transmission through a slab of the solid (water-solid-water) to obtain the water-solid coefficients. Experiments were performed for a slab of a unidirectional fiber-reinforced composite. Good agreement of the experimentally measured transmission coefficients with theoretical values was obtained.

  6. Direct Measurement of the Magnetocaloric Effect in La(Fe,Si,Co) 13 Compounds in Pulsed Magnetic Fields

    DOE PAGES

    Zavareh, M. Ghorbani; Skourski, Y.; Skokov, K. P.; ...

    2017-07-28

    We report on magnetization, magnetostriction, and magnetocaloric-effect measurements of polycrystalline LaFe11.74Co0.13Si1.13 and LaFe11.21Co0.65Si1.11 performed in both pulsed and static magnetic fields. Although the two compounds behave rather differently at low fields (~5 T), they show quite similar values of the magnetocaloric effect, namely a temperature increase of about 20 K at high fields (50-60 T). The magnetostriction and magnetization also reach very similar values here. We are able to quantify the magnetoelastic coupling and, based on that, apply the Bean-Rodbell criterion to distinguish first- and second-order transitions.

  7. Direct Measurement of the Magnetocaloric Effect in La(Fe,Si,Co) 13 Compounds in Pulsed Magnetic Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavareh, M. Ghorbani; Skourski, Y.; Skokov, K. P.

    We report on magnetization, magnetostriction, and magnetocaloric-effect measurements of polycrystalline LaFe11.74Co0.13Si1.13 and LaFe11.21Co0.65Si1.11 performed in both pulsed and static magnetic fields. Although the two compounds behave rather differently at low fields (~5 T), they show quite similar values of the magnetocaloric effect, namely a temperature increase of about 20 K at high fields (50-60 T). The magnetostriction and magnetization also reach very similar values here. We are able to quantify the magnetoelastic coupling and, based on that, apply the Bean-Rodbell criterion to distinguish first- and second-order transitions.

  8. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
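The probability metric can be computed from posterior simulation replications by counting how often each facility ranks in the top quantile. A simplified sketch with toy Gaussian draws rather than the hierarchical model's Markov Chain Monte Carlo output:

```python
import random

def prob_top_quantile(draws_by_facility, q=0.2):
    """For each facility, the fraction of simulation replications in which
    its score ranks in the top q fraction of facilities: a continuous
    profiling metric instead of a yes/no quantile designation."""
    n_fac = len(draws_by_facility)
    n_rep = len(draws_by_facility[0])
    top_k = max(1, int(q * n_fac))
    wins = [0] * n_fac
    for r in range(n_rep):
        scores = [d[r] for d in draws_by_facility]
        ranked = sorted(range(n_fac), key=lambda i: scores[i], reverse=True)
        for i in ranked[:top_k]:
            wins[i] += 1
    return [w / n_rep for w in wins]

# Toy posterior draws for 5 facilities with different true quality levels:
random.seed(0)
draws = [[random.gauss(mu, 1.0) for _ in range(2000)]
         for mu in (0.0, 0.2, 0.4, 0.6, 2.0)]
probs = prob_top_quantile(draws, q=0.2)  # top quintile of 5 = best 1
```

Unlike a yes/no top-quintile flag, the resulting probabilities separate a clear high performer (probability near 1) from facilities that only occasionally rank at the top.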

  9. Extended robust support vector machine based on financial risk minimization.

    PubMed

    Takeda, Akiko; Fujiwara, Shuhei; Kanamori, Takafumi

    2014-11-01

    Financial risk measures have recently been used in machine learning. For example, the ν-support vector machine (ν-SVM) minimizes the conditional value at risk (CVaR) of the margin distribution. The measure is popular in finance because of its subadditivity property, but it is very sensitive to a few outliers in the tail of the distribution. We propose a new classification method, the extended robust SVM (ER-SVM), which minimizes an intermediate risk measure between the CVaR and the value at risk (VaR), with the expectation that the resulting model is less sensitive than ν-SVM to outliers. ER-SVM can be regarded as an extension of robust SVM, which uses a truncated hinge loss. Numerical experiments suggest that ER-SVM can achieve better prediction performance with proper parameter settings.
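
The two risk measures the ER-SVM interpolates between can be illustrated empirically. A sketch of VaR and CVaR for a synthetic loss sample (not tied to any SVM; the standard-normal distribution and 95% level are arbitrary illustration choices):

```python
# Empirical VaR and CVaR of a loss sample. CVaR averages the tail beyond the
# VaR threshold, which is what makes it subadditive but outlier-sensitive.
import numpy as np

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 100_000)   # stand-in for margin-based losses

def var(losses, alpha=0.95):
    """Value at risk: the alpha-quantile of the loss distribution."""
    return float(np.quantile(losses, alpha))

def cvar(losses, alpha=0.95):
    """Conditional value at risk: mean loss in the tail beyond VaR."""
    v = var(losses, alpha)
    return float(losses[losses >= v].mean())

v, c = var(losses), cvar(losses)   # CVaR >= VaR always
```

Because CVaR averages over the entire tail, a handful of extreme outliers pulls it up much more than VaR, which is the sensitivity the ER-SVM is designed to temper.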

  10. Measurement of the top-quark mass in all-hadronic decays in pp collisions at CDF II.

    PubMed

    Aaltonen, T; Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Budroni, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carillo, S; Carlsmith, D; Carosi, R; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Cyr, D; Daronco, S; Datta, M; D'Auria, S; Davies, T; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; Dituro, P; Dörr, C; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Foland, A; Forrester, S; 
Foster, G W; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Grundler, U; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kovalev, A; Kraan, A C; Kraus, J; Kravchenko, I; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; 
Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Manca, G; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moore, R; Morello, M; Fernandez, P Movilla; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nagano, A; Naganoma, J; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ranjan, N; Rappoccio, S; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Saltzberg, D; Sánchez, C; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shapiro, M D; Shears, T; Shepard, P F; Sherman, 
D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Sjolin, J; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Veramendi, G; Veszpremi, V; Vidal, R; Vila, I; Vilar, R; Vine, T; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, R; Wang, S M; Warburton, A; Waschke, S; Waters, D; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S

    2007-04-06

    We present a measurement of the top-quark mass Mtop in the all-hadronic decay channel tt̄ → W⁺b W⁻b̄ → q₁q̄₂b q₃q̄₄b̄. The analysis is performed using 310 pb⁻¹ of √s = 1.96 TeV pp̄ collisions collected with the CDF II detector using a multijet trigger. The mass measurement is based on an event-by-event likelihood which depends on both the sample purity and the value of the top-quark mass, using 90 possible jet-to-parton assignments in the six-jet final state. The joint likelihood of 290 selected events yields a value of Mtop = 177.1 ± 4.9 (stat) ± 4.7 (syst) GeV/c².

  11. Arterial compliance probe for local blood pulse wave velocity measurement.

    PubMed

    Nabeel, P M; Joseph, Jayaraj; Sivaprakasam, Mohanasankar

    2015-08-01

    Arterial compliance and vessel wall dynamics are significant in vascular diagnosis. We present the design of arterial compliance probes for the measurement of local pulse wave velocity (PWV). Two probe designs are discussed, namely (a) a magnetic plethysmograph (MPG) based probe and (b) a photoplethysmograph (PPG) based probe. The ability of the local PWV probes to consistently capture carotid blood pulse waves was verified by in-vivo trials on a few volunteers. The probes reliably performed repeatable measurements of local PWV from the carotid artery along short arterial sections of less than 20 mm. Further, the correlation between the measured values of local PWV and various measures of blood pressure (BP) was also investigated. The study indicates that such arterial compliance probes have strong potential for cuffless BP monitoring.
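
Local PWV estimation of the kind these probes perform reduces to dividing the sensor separation by the pulse transit time between the two waveforms. A toy sketch with simulated Gaussian pulse waves and a cross-correlation delay estimate (the sampling rate, separation, pulse shape, and delay are all invented):

```python
# Local PWV = sensor separation / pulse transit time, with the transit time
# estimated from the lag of the cross-correlation maximum of the two signals.
import numpy as np

fs = 1000.0          # sampling rate, Hz
dx = 0.016           # sensor separation: 16 mm, within the <20 mm section
true_ptt = 0.004     # 4 ms transit time -> PWV = 4 m/s

t = np.arange(0, 1.0, 1.0 / fs)
proximal = np.exp(-((t - 0.300) / 0.02) ** 2)             # first sensor
distal = np.exp(-((t - 0.300 - true_ptt) / 0.02) ** 2)    # delayed pulse

# Transit time from the lag at which the cross-correlation peaks.
xcorr = np.correlate(distal, proximal, mode="full")
lag = np.argmax(xcorr) - (len(t) - 1)   # samples by which distal lags proximal
pwv = dx / (lag / fs)                   # metres per second
```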

  12. An Evaluation of Material Properties Using EMA and FEM

    NASA Astrophysics Data System (ADS)

    Ďuriš, Rastislav; Labašová, Eva

    2016-12-01

    The main goal of the paper is the determination of material properties from experimentally measured natural frequencies. A combination of two approaches to structural dynamics testing was applied: the experimental measurements of natural frequencies were performed by Experimental Modal Analysis (EMA), and the numerical simulations were carried out by Finite Element Analysis (FEA). Optimization methods were then used to determine the density and elasticity modulus of a specimen based on the experimental results.
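
The inverse identification step can be sketched for the simplest analytic case: a simply supported Euler-Bernoulli beam, whose natural frequencies scale with the square root of the elastic modulus. This is not the paper's FEA-based procedure; with density assumed known (e.g. from the specimen's mass and volume), the least-squares fit of E has a closed form. All dimensions and values are hypothetical:

```python
# Inverse identification sketch for a simply supported Euler-Bernoulli beam:
# f_n = (n^2 * pi / (2 L^2)) * sqrt(E I / (rho A)). With rho assumed known,
# each frequency is c_n * sqrt(E), so least squares for E has a closed form.
import numpy as np

L, b, h = 0.40, 0.02, 0.005      # beam length and cross-section, m
rho = 7850.0                     # density, kg/m^3 (assumed known from mass)
A, I = b * h, b * h**3 / 12      # area and second moment of area

def natural_freqs(E, modes=(1, 2, 3)):
    n = np.asarray(modes, dtype=float)
    return (n**2 * np.pi / (2 * L**2)) * np.sqrt(E * I / (rho * A))

# Simulated "measured" frequencies: a true modulus plus 0.2% noise.
E_true = 210e9
rng = np.random.default_rng(2)
f_meas = natural_freqs(E_true) * (1 + rng.normal(0.0, 0.002, 3))

# f_n = c_n * sqrt(E)  =>  sqrt(E) = sum(c*f) / sum(c^2) in least squares.
c = natural_freqs(1.0)
E_est = (np.sum(c * f_meas) / np.sum(c * c)) ** 2
```

In the paper's setting the forward model is an FE modal analysis rather than a beam formula, so the fit is done by a numerical optimizer instead of this closed form, but the principle is the same.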

  13. Comparison of pattern VEP results acquired using CRT and TFT stimulators in the clinical practice.

    PubMed

    Nagy, Balázs Vince; Gémesi, Szabolcs; Heller, Dávid; Magyar, András; Farkas, Agnes; Abrahám, György; Varsányi, Balázs

    2011-06-01

    There are several electrophysiological systems available commercially. Control groups are usually required to compare their results, owing to the differences between display types. Our aim was to examine the differences between CRT and LCD/TFT stimulators used for pattern VEP responses performed according to the ISCEV standards. We also aimed to test responses at different contrast levels down toward threshold. To obtain more precise results, we measured the intensity and temporal response characteristics of the monitors with photometric methods. To record VEP signals, a Roland RetiPort electrophysiological system was used. The pattern VEP tests were carried out according to ISCEV protocols on a CRT and a TFT monitor consecutively. An achromatic checkerboard pattern was used at three contrast levels (maximal, 75%, and 25%) with 1° and 15' check sizes. Both the CRT and TFT displays were luminance- and contrast-matched according to gamma functions based on measurements at several DAC values. Monitor-specific luminance parameters were measured by means of spectroradiometric instruments. Temporal differences between the displays' electronic and radiometric signals were measured with a device specifically built for the purpose. We tested six healthy control subjects with visual acuity of at least 20/20. The tests were performed on each subject three times on different days. We found significant temporal differences between the CRT and the TFT monitors at all contrast levels and spatial frequencies. On average, the latencies were 9.0 ms (±3.3 ms) longer with the TFT stimulator. This value is in accordance with the average of the measured TFT input-output temporal difference values (10.1 ± 2.2 ms).
According to our findings, by measuring the temporal parameters of the TFT monitor with an adequately calibrated measurement setup and correcting the VEP data with the resulting values, VEP signals obtained with different display types can be made comparable.

  14. Diagnostic value of potassium level in a spot urine sample as an index of 24-hour urinary potassium excretion in unselected patients hospitalized in a hypertension unit

    PubMed Central

    Symonides, Bartosz; Wojciechowska, Ewa; Gryglas, Adam; Gaciong, Zbigniew

    2017-01-01

    Background Primary hyperaldosteronism may be associated with elevated 24-hour urinary potassium excretion. We evaluated the diagnostic value of spot urine (SU) potassium as an index of 24-hour urinary potassium excretion. Methods We measured SU and 24-hour urinary collection potassium and creatinine in 382 patients. Correlations between SU and 24-hour collections were assessed for potassium levels and potassium/creatinine ratios. We used the PAHO formula to estimate 24-hour urinary potassium excretion based on the SU potassium level. The agreement between estimated and measured 24-hour urinary potassium excretion was evaluated using the Bland-Altman method. To evaluate the diagnostic performance of SU potassium, we calculated areas under the curve (AUC) for the SU potassium/creatinine ratio and for 24-hour urinary potassium excretion estimated using the PAHO formula. Results The strongest correlation between SU and 24-hour collection was found for the potassium/creatinine ratio (r = 0.69, P<0.001). The PAHO formula underestimated 24-hour urinary potassium excretion by a mean of 8.3 ± 18 mmol/d (95% limits of agreement -28 to +44 mmol/d). Diagnostic performance of the SU potassium/creatinine ratio was borderline good only if 24-hour urinary potassium excretion was markedly elevated (AUC 0.802 for 120 mmol K+/24 h) but poor at lower values (AUC 0.696 for 100 mmol K+/24 h, 0.636 for 80 mmol K+/24 h, 0.675 for 40 mmol K+/24 h). Diagnostic performance of 24-hour urinary potassium excretion estimated by the PAHO formula was excellent for values above 120 mmol/d and good at lower values (AUC 0.941 for 120 mmol K+/24 h, 0.819 for 100 mmol K+/24 h, 0.823 for 80 mmol K+/24 h, 0.836 for 40 mmol K+/24 h). Conclusions The spot urine potassium/creatinine ratio might be a marker of increased 24-hour urinary potassium excretion and a potentially useful screening test when a reliable 24-hour urine collection is not available.
The PAHO formula allowed estimation of the 24-hour urinary potassium excretion based on SU measurements with reasonable clinical accuracy. PMID:28662194
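
The Bland-Altman agreement analysis used above amounts to the mean difference (bias) between paired estimates and measurements, with 95% limits of agreement at bias ± 1.96 SD. A sketch on simulated pairs (the numbers mimic the reported bias and spread but are not study data):

```python
# Bland-Altman sketch: bias = mean(estimated - measured), 95% limits of
# agreement at bias +/- 1.96 SD. Simulated pairs, roughly matching the
# reported ~8 mmol/d underestimation and ~18 mmol/d spread.
import numpy as np

rng = np.random.default_rng(3)
measured = rng.normal(70.0, 25.0, 382)                    # mmol K+/24 h
estimated = measured - 8.3 + rng.normal(0.0, 18.0, 382)   # formula low-biased

diff = estimated - measured
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```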

  15. Prospective evaluation of risk of vertebral fractures using quantitative ultrasound measurements and bone mineral density in a population-based sample of postmenopausal women: results of the Basel Osteoporosis Study.

    PubMed

    Hollaender, R; Hartl, F; Krieg, M-A; Tyndall, A; Geuckel, C; Buitrago-Tellez, C; Manghani, M; Kraenzlin, M; Theiler, R; Hans, D

    2009-03-01

    Prospective studies have shown that quantitative ultrasound (QUS) techniques predict the risk of fracture of the proximal femur with standardised risk ratios similar to those of dual-energy x-ray absorptiometry (DXA). Few studies have investigated these devices for the prediction of vertebral fractures. The Basel Osteoporosis Study (BOS) is a population-based prospective study to assess the performance of QUS devices and DXA in predicting incident vertebral fractures. 432 women aged 60-80 years were followed up for 3 years. Incident vertebral fractures were assessed radiologically. Bone measurements using DXA (spine and hip) and QUS measurements (calcaneus and proximal phalanges) were performed. Measurements were assessed for their value in predicting incident vertebral fractures using logistic regression. QUS measurements at the calcaneus and DXA measurements discriminated between women with and without incident vertebral fracture (20% height reduction). The relative risks (RRs) for vertebral fracture, adjusted for age, were 2.3 for the Stiffness Index (SI) and 2.8 for the Quantitative Ultrasound Index (QUI) at the calcaneus, and 2.0 for bone mineral density at the lumbar spine. The predictive value (AUC (95% CI)) remained highly significant (0.70 for SI, 0.72 for the QUI, and 0.67 for DXA at the lumbar spine) even after adjustment for other confounding variables. QUS of the calcaneus and bone mineral density measurements were shown to be significant predictors of incident vertebral fracture. The RRs for QUS measurements at the calcaneus are of similar magnitude to those for DXA measurements.
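
AUCs of the kind reported above can be computed nonparametrically from the Mann-Whitney statistic: the probability that a randomly chosen fracture case has a lower score than a randomly chosen non-case. A sketch on simulated stiffness-index values (group sizes, means, and spreads are invented, not the BOS data):

```python
# Nonparametric AUC via the Mann-Whitney statistic: the probability that a
# random fracture case scores below a random non-case (ties count half).
import numpy as np

rng = np.random.default_rng(4)
si_controls = rng.normal(90.0, 15.0, 400)   # no incident fracture
si_cases = rng.normal(78.0, 15.0, 60)       # incident fracture, lower SI

def auc(controls, cases):
    """P(case score < control score), computed over all pairs."""
    diff = controls[:, None] - cases[None, :]
    n_pairs = controls.size * cases.size
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / n_pairs

discrimination = auc(si_controls, si_cases)   # 0.5 = chance, 1.0 = perfect
```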

  16. Can conclusions drawn from phantom-based image noise assessments be generalized to in vivo studies for the nonlinear model-based iterative reconstruction method?

    PubMed Central

    Gomez-Cardona, Daniel; Li, Ke; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2016-01-01

    Purpose: Phantom-based objective image quality assessment methods are widely used in the medical physics community. For a filtered backprojection (FBP) reconstruction-based linear or quasilinear imaging system, the use of this methodology is well justified. Many key image quality metrics acquired with phantom studies can be directly applied to in vivo human subject studies. Recently, a variety of image quality metrics have been investigated for model-based iterative image reconstruction (MBIR) methods and several novel characteristics have been discovered in phantom studies. However, the following question remains unanswered: can certain results obtained from phantom studies be generalized to in vivo animal studies and human subject studies? The purpose of this paper is to address this question. Methods: One of the most striking results obtained from phantom studies is a novel power-law relationship between the noise variance of MBIR (σ²) and the tube current-rotation time product (mAs): σ² ∝ (mAs)^−0.4 [K. Li et al., "Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance," Med. Phys. 41, 041906 (15pp.) (2014)]. To examine whether the same power law holds for in vivo cases, experimental data from two types of in vivo studies were analyzed in this paper. All scans were performed with a 64-slice diagnostic CT scanner (Discovery CT750 HD, GE Healthcare) and reconstructed with both FBP and a MBIR method (Veo, GE Healthcare). An Institutional Animal Care and Use Committee-approved in vivo animal study was performed with an adult swine at six mAs levels (10–290). Additionally, human subject data (a total of 110 subjects) acquired from an IRB-approved clinical trial were analyzed.
In this clinical trial, a reduced-mAs scan was performed immediately following the standard-mAs scan; the specific mAs used for the two scans varied across human subjects and were determined based on patient size and clinical indications. The measurements of σ² were performed at different mAs by drawing regions-of-interest (ROIs) in the liver and the subcutaneous fat. By applying a linear least-squares regression, the β values in the power-law relationship σ² ∝ (mAs)^−β were measured for the in vivo data and compared with the value found in phantom experiments. Results: For the in vivo swine study, an exponent of β = 0.43 was found for MBIR, and the coefficient of determination (R²) for the corresponding least-squares power-law regression was 0.971. As a reference, the β and R² values for FBP were found to be 0.98 and 0.997, respectively, from the same study, which are consistent with the well-known σ² ∝ (mAs)^−1.0 relationship for linear CT systems. For the human subject study, the measured β values for the MBIR images were 0.41 ± 0.12 in the liver and 0.37 ± 0.12 in subcutaneous fat. In comparison, the β values for the FBP images were 1.04 ± 0.10 in the liver and 0.97 ± 0.12 in subcutaneous fat. The β values of MBIR and FBP obtained from the in vivo studies were found to be statistically equivalent to the corresponding β values from the phantom study within an equivalency interval of [−0.1, 0.1] (p < 0.05); across MBIR and FBP, the difference in β was statistically significant (p < 0.05). Conclusions: Despite the nonlinear nature of the MBIR method, the power-law relationship σ² ∝ (mAs)^−0.4 found in phantom studies can be applied to in vivo animal and human subject studies. PMID:26843232

  17. Can conclusions drawn from phantom-based image noise assessments be generalized to in vivo studies for the nonlinear model-based iterative reconstruction method?

    PubMed

    Gomez-Cardona, Daniel; Li, Ke; Hsieh, Jiang; Lubner, Meghan G; Pickhardt, Perry J; Chen, Guang-Hong

    2016-02-01

    Phantom-based objective image quality assessment methods are widely used in the medical physics community. For a filtered backprojection (FBP) reconstruction-based linear or quasilinear imaging system, the use of this methodology is well justified. Many key image quality metrics acquired with phantom studies can be directly applied to in vivo human subject studies. Recently, a variety of image quality metrics have been investigated for model-based iterative image reconstruction (MBIR) methods and several novel characteristics have been discovered in phantom studies. However, the following question remains unanswered: can certain results obtained from phantom studies be generalized to in vivo animal studies and human subject studies? The purpose of this paper is to address this question. One of the most striking results obtained from phantom studies is a novel power-law relationship between the noise variance of MBIR (σ²) and the tube current-rotation time product (mAs): σ² ∝ (mAs)^−0.4 [K. Li et al., "Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance," Med. Phys. 41, 041906 (15pp.) (2014)]. To examine whether the same power law holds for in vivo cases, experimental data from two types of in vivo studies were analyzed in this paper. All scans were performed with a 64-slice diagnostic CT scanner (Discovery CT750 HD, GE Healthcare) and reconstructed with both FBP and a MBIR method (Veo, GE Healthcare). An Institutional Animal Care and Use Committee-approved in vivo animal study was performed with an adult swine at six mAs levels (10-290). Additionally, human subject data (a total of 110 subjects) acquired from an IRB-approved clinical trial were analyzed. In this clinical trial, a reduced-mAs scan was performed immediately following the standard-mAs scan; the specific mAs used for the two scans varied across human subjects and were determined based on patient size and clinical indications.
The measurements of σ² were performed at different mAs by drawing regions-of-interest (ROIs) in the liver and the subcutaneous fat. By applying a linear least-squares regression, the β values in the power-law relationship σ² ∝ (mAs)^−β were measured for the in vivo data and compared with the value found in phantom experiments. For the in vivo swine study, an exponent of β = 0.43 was found for MBIR, and the coefficient of determination (R²) for the corresponding least-squares power-law regression was 0.971. As a reference, the β and R² values for FBP were found to be 0.98 and 0.997, respectively, from the same study, which are consistent with the well-known σ² ∝ (mAs)^−1.0 relationship for linear CT systems. For the human subject study, the measured β values for the MBIR images were 0.41 ± 0.12 in the liver and 0.37 ± 0.12 in subcutaneous fat. In comparison, the β values for the FBP images were 1.04 ± 0.10 in the liver and 0.97 ± 0.12 in subcutaneous fat. The β values of MBIR and FBP obtained from the in vivo studies were found to be statistically equivalent to the corresponding β values from the phantom study within an equivalency interval of [−0.1, 0.1] (p < 0.05); across MBIR and FBP, the difference in β was statistically significant (p < 0.05). Despite the nonlinear nature of the MBIR method, the power-law relationship σ² ∝ (mAs)^−0.4 found in phantom studies can be applied to in vivo animal and human subject studies.
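
The β estimation described above is a linear least-squares fit in log-log space: β is the negative slope of log σ² against log mAs. A sketch with synthetic noise-variance data generated from the reported MBIR-like exponent of 0.4 (the mAs levels, noise magnitudes, and noise model are illustrative, not the measured ROI data):

```python
# Power-law fit sketch: beta is the negative slope of log(variance) against
# log(mAs) by linear least squares; R^2 comes from the same regression.
import numpy as np

rng = np.random.default_rng(5)
mas = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 290.0])   # tube current-time
sigma2 = 5.0 * mas ** -0.4 * np.exp(rng.normal(0.0, 0.02, mas.size))

logx, logy = np.log(mas), np.log(sigma2)
slope, intercept = np.polyfit(logx, logy, 1)
beta = -slope

pred = slope * logx + intercept
r2 = 1.0 - np.sum((logy - pred) ** 2) / np.sum((logy - logy.mean()) ** 2)
```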

  18. Influence of non-ideal performance of lasers on displacement precision in single-grating heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Xie, Xuedong; Yan, Shuhua

    2010-10-01

    The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision, and good stability is presented. Because the displacement measurement demands nanometer-level precision, the errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the nonlinear error caused by elliptic polarization of the laser, are mainly discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived as well. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and, assuming transmittances Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptic polarization are 1.49 nm, 2.99 nm, and 4.49 nm for non-orthogonal angles of 1°, 2°, and 3°, respectively. The law of the error change is analyzed based on different values of Tx and Ty.

  19. Issue-Relevant Values and Opinions About Gay Rights: Beyond Equality and Morality.

    PubMed

    Rhodebeck, Laurie

    2018-01-01

    Although many studies have examined the role of values in shaping public opinion, the number of values that inform this research is limited. This article employs the concept of issue-relevant values as a means to explore the broader range of values associated with policy issues. After discussing the concept in general terms, the article explores issue-relevant values pertinent to public opinion about gay rights. Using the policy examples of employment nondiscrimination and same-sex couple adoption, the present study identifies, measures, and assesses several values that add to the very short list previously used to explain public opinion about gay rights issues. Content from interest-group Web sites and news media coverage of the two issues aided in identifying the values. Data from an original Internet survey yield valid measures of the values. Multivariate analyses indicate that the values behave in predictable ways: they are strongly influenced by partisanship, and they strongly affect opinions about the two issues. The performance of the values is consistent with findings from previous research on the partisan basis of values and the value-based nature of opinions. The article concludes with suggestions for further empirical and theoretical work that could apply and extend the concept of issue-relevant values.

  20. Thermal Infrared Spectrometer for Earth Science Remote Sensing Applications—Instrument Modifications and Measurement Procedures

    PubMed Central

    Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek

    2011-01-01

    In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards. PMID:22346683

  1. Thermal infrared spectrometer for Earth science remote sensing applications-instrument modifications and measurement procedures.

    PubMed

    Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek

    2011-01-01

    In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards.
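
The background-subtraction step described above can be sketched as a ratio measurement against a reference standard: the background signal is removed from both the sample and reference spectra before scaling by the reference's known reflectance, which cancels the instrument gain. The spectra, background model, gain, and the 0.96 reference reflectance below are all invented for illustration:

```python
# Ratio-measurement sketch with background subtraction. Removing the
# background (radiance not reflected from the sample) from both channels
# before taking the ratio cancels the unknown instrument gain.
import numpy as np

wavelengths = np.linspace(2.5, 16.0, 50)            # micrometres
background = 0.02 + 0.001 * wavelengths             # stray/background signal
true_sample_r = 0.30 + 0.05 * np.sin(wavelengths)   # "unknown" reflectance
r_standard = 0.96                                   # reference standard
gain = 1.7                                          # unknown detector gain

sample_raw = gain * true_sample_r + background      # recorded signals
reference_raw = gain * r_standard + background

reflectance = r_standard * (sample_raw - background) / (reference_raw - background)
```

Without the subtraction, the background term would inflate the ratio most strongly for low-reflecting samples, which is why the processing chain above reports it as improving accuracy for those samples.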

  2. [Telemetry data based on comparative study of physical activity in patients with resynchronization device].

    PubMed

    Melczer, Csaba; Melczer, László; Goják, Ilona; Kónyi, Attila; Szabados, Sándor; Raposa, L Bence; Oláh, András; Ács, Pongrác

    2017-05-01

    The effect of regular physical activity on health is widely recognized, and several studies have shown its key importance for heart patients. The present study aimed to determine physical activity (PA) percentage values and to convert them into metabolic equivalent (MET) values, which describe oxygen consumption during physical activity. A total of seventeen patients with heart disease (3 females and 14 males; age 57.35 ± 9.54 years; body mass 98.71 ± 9.89 kg; average BMI 36.69 ± 3.67) were recruited into the study. The values measured by Cardiac Resynchronisation Therapy (CRT) devices and by external accelerometers (ActiGraph GT3X+) were studied over a 7-day period. Using the two sets of values describing physical performance, a linear regression was calculated, providing a mathematical equation by which the PA values in percent were converted into MET values. During the 6-minute walk test (6MWT) the patients covered an average of 416.6 ± 48.2 m. The values measured during the 6MWT averaged 1.85 ± 0.18 METs, whereas the weekly MET values averaged 1.12 ± 0.06, which clearly shows that this test is a challenge for the patients compared with their regular daily physical activity levels. With our method, based on the values received from the physical activity sensor implanted in the resynchronisation devices, changes in patients' health status can be monitored telemetrically with the assistance of the implanted electronic device. Orv Hetil. 2017; 158(17): 748-753.
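
The PA%-to-MET conversion described above is a simple linear regression between the two paired series. A sketch with invented calibration pairs (the study's actual coefficients and data are not reproduced here):

```python
# Linear calibration sketch: fit METs (from the accelerometer) against the
# device's PA% values, then use the fitted line to convert telemetered PA%
# readings into MET estimates. All pairs below are invented.
import numpy as np

pa_percent = np.array([2.0, 5.0, 8.0, 12.0, 15.0, 20.0])   # CRT device PA %
mets = np.array([1.05, 1.20, 1.35, 1.55, 1.70, 1.95])      # ActiGraph METs

slope, intercept = np.polyfit(pa_percent, mets, 1)

def pa_to_met(pa):
    """Convert a telemetered PA% value into an estimated MET value."""
    return slope * pa + intercept
```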

  3. A storm-based CSLE incorporating the modified SCS-CN method for soil loss prediction on the Chinese Loess Plateau

    NASA Astrophysics Data System (ADS)

    Shi, Wenhai; Huang, Mingbin

    2017-04-01

    The Chinese Loess Plateau is one of the most erodible areas in the world. In order to reduce soil and water losses, suitable conservation practices need to be designed. For this purpose, there is an increasing demand for an appropriate model that can accurately predict storm-based surface runoff and soil losses on the Loess Plateau. The Chinese Soil Loss Equation (CSLE) has been widely used in this region to assess soil losses from different land use types. However, the CSLE was intended only to predict the mean annual gross soil loss. In this study, a storm-based CSLE incorporating a new rainfall-runoff erosivity factor was proposed. A dataset was compiled comprising measurements of soil losses during individual storms from three runoff-erosion plots in each of three different watersheds in the gully region of the Plateau, covering 3-7 years in three different time periods (1956-1959; 1973-1980; 2010-2013). The accuracy of the soil loss predictions made by the new storm-based CSLE was determined using the data for the six plots in two of the watersheds measured during 165 storm-runoff events. The performance of the storm-based CSLE was further compared with that of the storm-based Revised Universal Soil Loss Equation (RUSLE) for the same six plots. During the calibration (83 storms) and validation (82 storms) of the storm-based CSLE, the model efficiency, E, was 87.7% and 88.9%, respectively, while the root mean square error (RMSE) was 2.7 and 2.3 t ha-1, indicating a high degree of accuracy. Furthermore, the storm-based CSLE performed better than the storm-based RUSLE (E: 75.8% and 70.3%; RMSE: 3.8 and 3.7 t ha-1, for the calibration and validation storms, respectively). The storm-based CSLE was then used to predict the soil losses from the three experimental plots in the third watershed. 
For these predictions, the model parameter values, previously determined by the calibration based on the data from the initial six plots, were used in the storm-based CSLE. In addition, the surface runoff used by the storm-based CSLE was either obtained from measurements or from the values predicted by the modified Soil Conservation Service Curve Number (SCS-CN) method. When using the measured runoff, the storm-based CSLE had an E of 76.6%, whereas the use of the predicted runoff gave an E of 76.4%. The high E values indicated that the storm-based CSLE incorporating the modified SCS-CN method could accurately predict storm-event-based soil losses resulting from both sheet and rill erosion at the field scale on the Chinese Loess Plateau. This approach could be applicable to other areas of the world once the model parameters have been suitably calibrated.
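
    The model efficiency E and RMSE reported above are standard goodness-of-fit statistics; assuming E is the usual Nash-Sutcliffe efficiency used for runoff and soil-loss models, they can be computed as follows (the per-storm values are hypothetical):

```python
import math

def model_efficiency(observed, predicted):
    """Nash-Sutcliffe model efficiency E (1.0 = perfect agreement)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def rmse(observed, predicted):
    """Root mean square error, in the units of the observations (t/ha)."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# Hypothetical per-storm soil losses (t/ha): measured vs. model-predicted.
obs = [1.2, 4.5, 0.8, 10.3, 2.6]
pred = [1.0, 5.0, 1.1, 9.5, 2.9]
E = model_efficiency(obs, pred)
err = rmse(obs, pred)
```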

  4. A new method to make 2-D wear measurements less sensitive to projection differences of cemented THAs.

    PubMed

    The, Bertram; Flivik, Gunnar; Diercks, Ron L; Verdonschot, Nico

    2008-03-01

    Wear curves from individual patients often show unexplained irregularities or impossible values (negative wear). We postulated that the errors of two-dimensional wear measurements are mainly the result of radiographic projection differences. We tested a new method that makes two-dimensional wear measurements of cemented THAs less sensitive to radiographic projection differences. The measurement errors that occur when radiographically projecting a three-dimensional THA were modeled. Based on the model, we developed a method to reduce the errors, thus approximating three-dimensional linear wear values, which are less sensitive to projection differences. An error analysis was performed by virtually simulating 144 wear measurements under varying conditions with and without application of the correction: the mean absolute error was reduced from 1.8 mm (range, 0-4.51 mm) to 0.11 mm (range, 0-0.27 mm). For clinical validation, radiostereometric analysis was performed on 47 patients to determine the true wear at 1, 2, and 5 years. Subsequently, wear was measured on conventional radiographs with and without the correction: the overall occurrence of errors greater than 0.2 mm was reduced from 35% to 15%. Wear measurements are less sensitive to differences in two-dimensional projection of the THA when using the correction method.

  5. LEAKAGE CHARACTERISTICS OF THE RIVERBANK BASE BY THE SELF-POTENTIAL METHOD AND EXAMINATION OF ITS EFFECTIVENESS FOR HEALTH MONITORING OF THE RIVERBANK BASE

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    Field measurements using the self-potential method with a copper sulfate electrode were performed at the base of a riverbank on the WATARASE River, where leakage problems exist, to examine the leakage characteristics. The measurement results showed a typical S-shape, which indicates the existence of flowing groundwater. The results agreed well with measurements made by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and chain-array detection also showed good agreement with the results of the self-potential method. The correlation between self-potential values and groundwater velocity was examined in a model experiment, and a clear correlation was found. These results indicate that the self-potential method is an effective way to examine the characteristics of groundwater at the base of a riverbank with a leakage problem.

  6. An Introduction to Value-Added Analysis

    ERIC Educational Resources Information Center

    Costello, Ron; Elson, Peggy; Schacter, John

    2008-01-01

    For the last 3 years, more than 80% of the respondents to the "Annual Phi Delta Kappa/Gallup Poll of the Public's Attitudes Toward the Public Schools" have stated that they would rather see a school's performance measure based upon "improvement shown by students" than the "percentage passing the test" (Rose & Gallup, 2007, p. 35). If this were to…

  7. 40 CFR 125.137 - As an owner or operator of a new offshore oil and gas extraction facility, must I perform...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reproduction, larval recruitment, and peak abundance identified during the Source Water Baseline Biological... across the screens and correlate the measured value with the design intake velocity. The head loss across... professional judgment based on available hydrological data). The maximum head loss across the screen for each...

  8. Service Quality in Higher Education--A Case Study of Universiti Brunei Darussalam

    ERIC Educational Resources Information Center

    Alani, Farooq; Yaqoub, Yasir; Hamdan, Mahani

    2015-01-01

    No one doubts the value and importance of quality education, and quality assurance is one major driving force to achieve this. As part of the quality assessment, the service quality of the education services of Universiti Brunei Darussalam (UBD) was measured based on Parasuraman's five SERVQUAL dimensions. The assessment was…

  9. Do emotional intelligence and previous caring experience influence student nurse performance? A comparative analysis.

    PubMed

    Stenhouse, Rosie; Snowden, Austyn; Young, Jenny; Carver, Fiona; Carver, Hannah; Brown, Norrie

    2016-08-01

    Reports of poor nursing care have focused attention on values-based selection of candidates onto nursing programmes. Values-based selection lacks clarity and valid measures. Previous caring experience might lead to better care. Emotional intelligence (EI) might be associated with performance, and is conceptualised and measurable. To examine the impact of 1) previous caring experience, 2) emotional intelligence, and 3) social connection scores on performance and retention in a cohort of first year nursing and midwifery students in Scotland. A longitudinal, quasi-experimental design. Adult and mental health nursing, and midwifery programmes in a Scottish University. Adult, mental health and midwifery students (n=598) completed the Trait Emotional Intelligence Questionnaire-short form and Schutte's Emotional Intelligence Scale on entry to their programmes at a Scottish University, alongside demographic and previous caring experience data. Social connection was calculated from a subset of questions identified within the TEIQue-SF in a prior factor and Rasch analysis. Student performance was calculated as the mean mark across the year. Withdrawal data were gathered. 598 students completed baseline measures. 315 students declared previous caring experience and 277 did not. An independent-samples t-test identified that those without previous caring experience scored higher on performance (57.33±11.38) than those with previous caring experience (54.87±11.19), a statistically significant difference of 2.47 (95% CI, 0.54 to 4.38), t(533)=2.52, p=.012. Emotional intelligence scores were not associated with performance. Social connection scores for those withdrawing (mean rank=249) and those remaining (mean rank=304.75) were statistically significantly different, U=15,300, z=-2.61, p<0.009. Previous caring experience led to worse performance in this cohort. Emotional intelligence was not a useful indicator of performance. 
Lower scores on the social connection factor were associated with withdrawal from the course. Copyright © 2016 Elsevier Ltd. All rights reserved.
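
    The group comparison above rests on an independent-samples t-test; a minimal pooled-variance sketch with made-up marks (not the study's data):

```python
import math

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic for two groups,
    as used to compare mean marks between students with and without
    previous caring experience."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical mean marks for five students per group:
no_experience = [60, 58, 55, 57, 59]
experience = [54, 56, 53, 55, 52]
t = independent_t(no_experience, experience)
```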

  10. Weaning mechanical ventilation after off-pump coronary artery bypass graft procedures directed by noninvasive gas measurements.

    PubMed

    Chakravarthy, Murali; Narayan, Sandeep; Govindarajan, Raghav; Jawali, Vivek; Rajeev, Subramanyam

    2010-06-01

    Partial pressure of carbon dioxide and oxygen were transcutaneously measured in adults after off-pump coronary artery bypass (OPCAB) surgery. The clinical use of such measurements and interchangeability with arterial blood gas measurements for weaning patients from postoperative mechanical ventilation were assessed. This was a prospective observational study. Tertiary referral heart hospital. Postoperative OPCAB surgical patients. Transcutaneous oxygen and carbon dioxide measurements. In this prospective observational study, 32 consecutive adult patients in a tertiary care medical center underwent OPCAB surgery. Noninvasive measurement of respiratory gases was performed during the postoperative period and compared with arterial blood gases. The investigator was blinded to the reports of arterial blood gas studies and weaned patients using a "weaning protocol" based on transcutaneous gas measurement. The number of patients successfully weaned based on transcutaneous measurements and the number of times the weaning process was held up were noted. A total of 212 samples (pairs of arterial and transcutaneous values of oxygen and carbon dioxide) were obtained from 32 patients. Bland-Altman plots and mountain plots were used to analyze the interchangeability of the data. Twenty-five (79%) of the patients were weaned from the ventilator based on transcutaneous gas measurements alone. Transcutaneous carbon dioxide measurements were found to be interchangeable with arterial carbon dioxide during 96% of measurements, versus 79% for oxygen measurements. More than three fourths of the patients were weaned from mechanical ventilation and extubated based on transcutaneous gas values alone after OPCAB surgery. The noninvasive transcutaneous carbon dioxide measurement can be used as a surrogate for arterial carbon dioxide measurement to manage postoperative OPCAB patients. Copyright 2010 Elsevier Inc. All rights reserved.
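
    Interchangeability in studies like this one is typically judged from Bland-Altman statistics: the bias (mean difference) between paired transcutaneous and arterial values and its 95% limits of agreement. A sketch with illustrative pCO2 pairs, not the study data:

```python
import math

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired
    measurements from two methods (e.g. transcutaneous vs. arterial)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired pCO2 values, mmHg:
transcutaneous = [38.0, 41.5, 36.2, 44.0, 39.8]
arterial = [37.5, 42.0, 36.0, 43.1, 40.2]
bias, loa_low, loa_high = bland_altman(transcutaneous, arterial)
```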

  11. Analytical model for out-of-field dose in photon craniospinal irradiation

    NASA Astrophysics Data System (ADS)

    Taddei, Phillip J.; Jalbout, Wassim; Howell, Rebecca M.; Khater, Nabil; Geara, Fady; Homann, Kenneth; Newhauser, Wayne D.

    2013-11-01

    The prediction of late effects after radiotherapy in organs outside a treatment field requires accurate estimations of out-of-field dose. However, out-of-field dose is not calculated accurately by commercial treatment planning systems (TPSs). The purpose of this study was to develop and test an analytical model for out-of-field dose during craniospinal irradiation (CSI) from photon beams produced by a linear accelerator. In two separate evaluations of the model, we measured absorbed dose for a 6 MV CSI using thermoluminescent dosimeters placed throughout an anthropomorphic phantom and fit the measured data to an analytical model of absorbed dose versus distance outside of the composite field edge. These measurements were performed in two separate clinics—the University of Texas MD Anderson Cancer Center (MD Anderson) and the American University of Beirut Medical Center (AUBMC)—using the same phantom but different linear accelerators and TPSs commissioned for patient treatments. The measurement at AUBMC also included in-field locations. Measured dose values were compared to those predicted by TPSs and parameters were fit to the model in each setting. In each clinic, 95% of the measured data were contained within a factor of 0.2 and one root mean square deviation of the model-based values. The root mean square deviations of the mathematical model were 0.91 cGy Gy-1 and 1.67 cGy Gy-1 in the MD Anderson and AUBMC clinics, respectively. The TPS predictions agreed poorly with measurements in regions of sharp dose gradient, e.g., near the field edge. At distances greater than 1 cm from the field edge, the TPS underestimated the dose by an average of 14% ± 24% and 44% ± 19% in the MD Anderson and AUBMC clinics, respectively. The in-field measured dose values of the measurement at AUBMC matched the dose values calculated by the TPS to within 2%. Dose algorithms in TPSs systematically underestimated the actual out-of-field dose. 
Therefore, it is important to use an improved model based on measurements when estimating out-of-field dose. The model proposed in this study performed well for this purpose in two clinics and may be applicable in other clinics with similar treatment field configurations.

  12. Intercomparison of 51 radiometers for determining global horizontal irradiance and direct normal irradiance measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurements require properly installed and maintained radiometers with calibrations traceable to the World Radiometric Reference. This study analyzes the performance of 51 commercially available and prototype radiometers used for measuring global horizontal irradiances or direct normal irradiances. These include pyranometers, pyrheliometers, rotating shadowband radiometers, and a pyranometer with an internal shading mask deployed at the National Renewable Energy Laboratory's (NREL) Solar Radiation Research Laboratory. The radiometers in this study were deployed for one year (from April 1, 2011, through March 31, 2012), and their measurements were compared under clear-sky, partly cloudy, and mostly cloudy conditions to reference values of low estimated measurement uncertainties. The intent of this paper is to present a general overview of each radiometer's performance based on the instrumentation and environmental conditions available at NREL.

  13. Temperature measurement in a gas turbine engine combustor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul

    A method and system for determining a temperature of a working gas passing through a passage to a turbine section of a gas turbine engine. The method includes identifying an acoustic frequency at a first location in the engine upstream from the turbine section, and using the acoustic frequency for determining a first temperature value at the first location that is directly proportional to the acoustic frequency and a calculated constant value. A second temperature of the working gas is determined at a second location in the engine and, using the second temperature, a back calculation is performed to determine a temperature value for the working gas at the first location. The first temperature value is compared to the back calculated temperature value to change the calculated constant value to a recalculated constant value. Subsequent first temperature values at the first location may be determined based on the recalculated constant value.
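
    The claimed scheme can be sketched as a proportional temperature estimate whose constant is updated from the back-calculated value (function names and units are illustrative assumptions, not from the patent):

```python
def temperature_from_frequency(freq_hz, k):
    """First-location temperature estimate, directly proportional to the
    measured acoustic frequency via the calculated constant k."""
    return k * freq_hz

def recalibrate(freq_hz, back_calculated_temp):
    """Recompute the constant so that the proportional estimate matches
    the temperature back-calculated from the second location."""
    return back_calculated_temp / freq_hz

k = 2.0                                   # assumed initial constant, K/Hz
freq = 600.0                              # measured acoustic frequency, Hz
t1 = temperature_from_frequency(freq, k)  # initial estimate: 1200 K
k = recalibrate(freq, 1260.0)             # back-calculation gave 1260 K
t1_updated = temperature_from_frequency(freq, k)
```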

  14. What Is the Value of Value-Based Purchasing?

    PubMed

    Tanenbaum, Sandra J

    2016-10-01

    Value-based purchasing (VBP) is a widely favored strategy for improving the US health care system. The meaning of value that predominates in VBP schemes is (1) conformance to selected process and/or outcome metrics, and sometimes (2) such conformance at the lowest possible cost. In other words, VBP schemes choose some number of "quality indicators" and financially incent providers to meet them (and not others). Process measures are usually based on clinical science that cannot determine the effects of a process on individual patients or patients with comorbidities, and do not necessarily measure effects that patients value; additionally, there is no provision for different patients valuing different things. Proximate outcome measures may or may not predict distal ones, and the more distal the outcome, the less reliably it can be attributed to health care. Outcome measures may be quite rudimentary, such as mortality rates, or highly contestable: survival or function after prostate surgery? When cost is an element of value-based purchasing, it is the cost to the value-based payer and not to other payers or patients' families. The greatest value of value-based purchasing may not be to patients or even payers, but to policy makers seeking a morally justifiable alternative to politically contested regulatory policies. Copyright © 2016 by Duke University Press.

  15. Diagnostic performance of labial minor salivary gland flow measurement for assessment of xerostomia.

    PubMed

    Satoh-Kuriwada, Shizuko; Iikubo, Masahiro; Shoji, Noriaki; Sakamoto, Maya; Sasano, Takashi

    2012-08-01

    Minor salivary gland flow rate (MF) has been proposed as a key feature of xerostomia (subjective feeling of dry mouth). To assess its diagnostic performance, MF was compared in xerostomia and control subjects. Sixty-six subjects with xerostomia and 30 controls were enrolled. MF was measured in the lower labial mucosa using the iodine-starch filter paper method. Stimulated whole salivary flow rates were also measured using the gum test (stimulated-WF). Both labial-MF and stimulated-WF were significantly lower in xerostomia subjects than in controls. There was a positive correlation between labial-MF and stimulated-WF in control but not xerostomia subjects. In xerostomia subjects compared to controls, there was a significantly larger reduction in labial-MF than in stimulated-WF. Xerostomia was most accurately diagnosed using a labial-MF cutoff value of 0.25 μL/cm²/min. The sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy at this cutoff value were 1.00, 0.87, 0.93, 1.00, and 0.96, respectively. Compared to respective values of 0.64, 1.00, 1.00, 0.56, and 0.75 for stimulated-WF at the traditional cutoff of 1.0 mL/min, these data indicate the higher sensitivity, negative predictive value, and diagnostic accuracy of labial-MF. Xerostomia was more strongly related to reduction of labial-MF than to that of stimulated-WF. Xerostomia was most likely triggered at a labial-MF cutoff value of 0.25 μL/cm²/min based on results from the iodine-starch method. Copyright © 2012 Elsevier Ltd. All rights reserved.
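
    The diagnostic statistics quoted above follow from a 2x2 confusion table. A sketch using illustrative counts (chosen to be plausible for 66 xerostomia subjects and 30 controls, not the paper's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2
    confusion table (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# e.g. all 66 xerostomia subjects flagged by the cutoff (tp), 26 of 30
# controls correctly negative (tn), 4 controls misclassified (fp):
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=66, fp=4, tn=26, fn=0)
```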

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridge, Pete, E-mail: pete.bridge@qut.edu.au; Gunn, Therese; Kastanis, Lazaros

    A novel realistic 3D virtual reality (VR) application has been developed to allow medical imaging students at Queensland University of Technology to practice radiographic techniques independently outside the usual radiography laboratory. A flexible agile development methodology was used to create the software rapidly and effectively. A 3D gaming environment and realistic models were used to engender presence in the software while tutor-determined gold standards enabled students to compare their performance and learn in a problem-based learning pedagogy. Students reported high levels of satisfaction and perceived value and the software enabled up to 40 concurrent users to prepare for clinical practice. Student feedback also indicated that they found 3D to be of limited value in the desktop version compared to the usual 2D approach. A randomised comparison between groups receiving software-based and traditional practice measured performance in a formative role play with real equipment. The results of this work indicated superior performance with the equipment for the VR-trained students (P = 0.0366) and confirmed the value of VR for enhancing 3D equipment-based problem-solving skills. Students practising projection techniques virtually performed better at role play assessments than students practising in a traditional radiography laboratory only. The application particularly helped with 3D equipment configuration, suggesting that teaching 3D problem solving is an ideal use of such medical equipment simulators. Ongoing development work aims to establish the role of VR software in preparing students for clinical practice with a range of medical imaging equipment.

  17. Pay for performance: will dentistry follow?

    PubMed

    Voinea-Griffin, Andreea; Fellows, Jeffrey L; Rindal, Donald B; Barasch, Andrei; Gilbert, Gregg H; Safford, Monika M

    2010-04-28

    "Pay for performance" is an incentive system that has been gaining acceptance in medicine and is currently being considered for implementation in dentistry. However, it remains unclear whether pay for performance can effect significant and lasting changes in provider behavior and quality of care. Provider acceptance will likely increase if pay for performance programs reward true quality. Therefore, we adopted a quality-oriented approach in reviewing the factors that could influence whether it will be embraced by the dental profession. The factors contributing to the adoption of value-based purchasing were categorized according to the Donabedian quality of care framework. We identified the dental insurance market, the position of the dental profession, the organization of dental practice, and dental patient involvement as structural factors influencing the way dental care is practiced and paid for. After considering the variations in dental care, the early stage of development of evidence-based dentistry, the scarcity of outcome indicators, the lack of clinical markers, the inconsistent use of diagnostic codes and the scarcity of electronic dental records, we concluded that, for pay for performance programs to be successfully implemented in dentistry, the dental profession and health services researchers should: 1) expand the knowledge base; 2) considerably expand evidence-based clinical guidelines; and 3) create evidence-based performance measures tied to existing clinical practice guidelines. In this paper, we explored factors that would influence the adoption of value-based purchasing programs in dentistry. Although none of these factors were essential deterrents to the implementation of pay for performance programs in medicine, in aggregate they seem to indicate that significant changes are needed before this type of program could be considered a realistic option in dentistry.

  18. Characterization of bovine cartilage by fiber Bragg grating-based stress relaxation measurements

    NASA Astrophysics Data System (ADS)

    Baier, V.; Marchi, G.; Foehr, P.; Burgkart, R.; Roths, J.

    2017-04-01

    A fiber-based device for testing the mechanical properties of cartilage is presented in this study. The measurement principle is based on stepwise indentation into the tissue and observation of the corresponding stress relaxation. The indenter tip consists of a cleaved optical fiber that includes a fiber Bragg grating, which is used as the force sensor. Stress relaxation measurements at 25 different positions on a healthy bovine cartilage sample were performed to assess the behavior of healthy cartilage. For each indentation step, good agreement was found with a viscoelastic model that included two time constants. The model parameters showed low variability and a clear dependence on indentation depth. The parameters can be used as reference values for discriminating between healthy and degenerated cartilage.
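
    A viscoelastic model with two time constants, as fitted to each indentation step, is commonly written as a double-exponential decay toward an equilibrium force; the parameter names below are assumptions, not the authors' notation:

```python
import math

def relaxation_force(t, f_inf, a1, tau1, a2, tau2):
    """Stress relaxation F(t) = F_inf + A1*exp(-t/tau1) + A2*exp(-t/tau2):
    the force decays from F_inf + A1 + A2 at t = 0 toward the equilibrium
    value F_inf, with a fast (tau1) and a slow (tau2) component."""
    return f_inf + a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# Illustrative parameters (arbitrary units):
f0 = relaxation_force(0.0, 1.0, 2.0, 1.0, 3.0, 5.0)       # initial force
f_late = relaxation_force(1000.0, 1.0, 2.0, 1.0, 3.0, 5.0)  # near equilibrium
```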

  19. Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)

    2016-01-01

    The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
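
    One way to realize the inverse-proportional mapping (a sketch only; the patent does not prescribe this exact functional form, and the 1/(1 + d) shape is chosen here simply to avoid division by zero at d = 0):

```python
def physiological_difference(target, measured):
    """Absolute difference between the target physiological signal and
    the trainee's measured signal."""
    return abs(target - measured)

def success_probability(target, measured, scale=1.0):
    """Map the difference to a task-success probability that decreases
    as the difference grows: 1 at a perfect match, falling toward 0."""
    return 1.0 / (1.0 + scale * physiological_difference(target, measured))

p_close = success_probability(10.0, 10.5)  # trainee near the target state
p_far = success_probability(10.0, 14.0)    # trainee far from the target
```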

  20. Breast elastography: Identification of benign and malignant cancer based on absolute elastic modulus measurement using vibro-elastography

    NASA Astrophysics Data System (ADS)

    Arroyo, Junior; Saavedra, Ana Cecilia; Guerrero, Jorge; Montenegro, Pilar; Aguilar, Jorge; Pinto, Joseph A.; Lobo, Julio; Salcudean, Tim; Lavarello, Roberto; Castañeda, Benjamín

    2018-03-01

    Breast cancer is a public health problem with 1.7 million new cases per year worldwide and with several limitations in the state-of-the-art screening techniques. Ultrasound elastography comprises a set of techniques intended to facilitate the noninvasive diagnosis of cancer. Among these, vibro-elastography is an ultrasound-based technique that employs external mechanical excitation to infer the elastic properties of soft tissue. In this paper, we evaluate the performance of vibro-elastography in differentiating benign from malignant breast lesions. For this study, a group of 18 women with clinically confirmed tumors or suspected malignant breast lesions were invited to participate. For each volunteer, an elastogram was obtained, and the mean elasticity of the lesion and the adjacent healthy tissue was calculated. After the acquisition, the volunteers underwent core-needle biopsy. The histopathological results, which ranged from benign to malignant lesions, were used to validate the vibro-elastography diagnosis. The results indicate that the mean elasticity values of the benign lesions, malignant lesions and healthy breast tissue were 39.4 ± 12 kPa, 55.4 ± 7.02 kPa and 23.91 ± 4.57 kPa, respectively. The classification between benign and malignant lesions was performed using a Support Vector Machine based on the measured lesion stiffness. A ROC curve was used to quantify the accuracy of the differentiation and to define a suitable stiffness cutoff value, obtaining an AUC of 0.90 and a cutoff value of 44.75 kPa. The results obtained suggest that vibro-elastography allows differentiation between benign and malignant lesions. Furthermore, the elasticity values obtained for benign, malignant and healthy tissue are consistent with previous reports.
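
    The AUC of the ROC analysis above can be computed directly from lesion stiffness values as the probability that a malignant lesion outranks a benign one (the kPa values here are invented for illustration, not the study's measurements):

```python
def auc_from_scores(pos_scores, neg_scores):
    """Empirical ROC AUC via pairwise comparison: the probability that a
    randomly chosen positive (malignant) score exceeds a randomly chosen
    negative (benign) one, counting ties as half."""
    pairs = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                pairs += 1.0
            elif p == n:
                pairs += 0.5
    return pairs / (len(pos_scores) * len(neg_scores))

# Hypothetical mean lesion stiffness values, kPa:
malignant_kpa = [55.0, 60.2, 48.5, 57.1]
benign_kpa = [39.0, 44.0, 52.0, 35.5]
auc = auc_from_scores(malignant_kpa, benign_kpa)

# Classify every lesion with the abstract's 44.75 kPa cutoff:
predicted_malignant = [s > 44.75 for s in malignant_kpa + benign_kpa]
```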

  1. Measurement uncertainty analysis techniques applied to PV performance measurements

    NASA Astrophysics Data System (ADS)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and into tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Here, valid data are defined as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
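
    For independent error sources, the combined standard uncertainty behind such an interval estimate is the root-sum-of-squares of the component uncertainties, expanded by a coverage factor for a confidence interval (the component values below are hypothetical, not from the presentation):

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard
    uncertainty components."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty with coverage factor k (k = 2 corresponds to
    roughly 95% coverage for a normal distribution)."""
    return k * combined_standard_uncertainty(components)

# e.g. sensor calibration and spectral-mismatch uncertainties for a PV
# performance measurement, in % of reading:
u_c = combined_standard_uncertainty([0.3, 0.4])
U95 = expanded_uncertainty([0.3, 0.4])
```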

  2. Dynamic surface tension measurement for the screening of biosurfactants produced by Lactobacillus plantarum subsp. plantarum PTCC 1896.

    PubMed

    Bakhshi, Nafiseh; Soleimanian-Zad, Sabihe; Sheikh-Zeinoddin, Mahmoud

    2017-06-01

    Currently, the screening of microbial biosurfactants (BSs) is based on their equilibrium surface tension values obtained using static surface tension measurement. However, a good surfactant should not only have a low equilibrium surface tension; its dynamic surface tension (DST) should also decrease rapidly with time. In this study, screening of BSs produced by Lactobacillus plantarum subsp. plantarum PTCC 1896 (probiotic) was performed based on their DST values measured by Wilhelmy plate tensiometry. The relationship between DST and the structural and functional properties (anti-adhesive activity) of the BSs was investigated. The results showed that the changes in the yield, productivity and structure of the BSs were growth medium and incubation time dependent (p<0.05). Structurally different BSs exhibited identical equilibrium surface tension values; however, differences among the structure/yield of the BSs were observed through the measurement of their DST. A considerable dependence of DST on the concentration and composition of the BS proteins was observed (p<0.05). Moreover, the anti-adhesive activity of the BSs was found to be positively correlated with their DST. The results suggest that DST measurement could serve as an efficient method for screening BS producers and production conditions and, consequently, for investigating the probiotic features of bacteria, since anti-adhesive activity is an important criterion for probiotics. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Measuring cardiac waste: the premier cardiac waste measures.

    PubMed

    Lowe, Timothy J; Partovian, Chohreh; Kroch, Eugene; Martin, John; Bankowitz, Richard

    2014-01-01

    The authors developed 8 measures of waste associated with cardiac procedures to assist hospitals in comparing their performance with peer facilities. Measure selection was based on review of the research literature, clinical guidelines, and consultation with key stakeholders. Development and validation used the data from 261 hospitals in a split-sample design. Measures were risk adjusted using Premier's CareScience methodologies or mean peer value based on Medicare Severity Diagnosis-Related Group assignment. High variability was found in resource utilization across facilities. Validation of the measures using item-to-total correlations (range = 0.27-0.78), Cronbach α (.88), and Spearman rank correlation (0.92) showed high reliability and discriminatory power. Because of the level of variability observed among hospitals, this study suggests that there is opportunity for facilities to design successful waste reduction programs targeting cardiac-device procedures.
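
    The reliability statistics named above (item-to-total correlation, Cronbach α) can be computed from a hospital-by-measure score matrix. A minimal sketch of Cronbach's α using invented scores, not the study's data:

    ```python
    import statistics

    def cronbach_alpha(items):
        """Cronbach's alpha for a list of item-score columns (one list per item)."""
        k = len(items)
        item_vars = [statistics.variance(col) for col in items]
        totals = [sum(scores) for scores in zip(*items)]  # total score per hospital
        total_var = statistics.variance(totals)
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Hypothetical waste-measure scores for five hospitals on three measures.
    measures = [
        [2.0, 3.1, 4.2, 5.0, 6.1],
        [1.9, 3.0, 4.0, 5.2, 5.8],
        [2.2, 2.9, 4.1, 4.9, 6.0],
    ]
    print(round(cronbach_alpha(measures), 2))
    ```

    Highly correlated items, as here, drive α toward 1; the study's reported α of .88 indicates strong internal consistency across its eight measures.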

  4. Performance indicators and indices of sludge management in urban wastewater treatment plants.

    PubMed

    Silva, C; Saldanha Matos, J; Rosa, M J

    2016-12-15

    Sludge (or biosolids) management is highly complex and has a significant cost associated with the biosolids disposal, as well as with the energy and flocculant consumption in the sludge processing units. The sludge management performance indicators (PIs) and indices (PXs) are thus core measures of the performance assessment system developed for urban wastewater treatment plants (WWTPs). The key PIs proposed cover the sludge unit production and dry solids concentration (DS), disposal/beneficial use, quality compliance for agricultural use and costs, whereas the complementary PIs assess the plant reliability and the chemical reagents' use. A key PI was also developed for assessing the phosphorus reclamation, namely through the beneficial use of the biosolids and the reclaimed water in agriculture. The results of a field study with 17 Portuguese urban WWTPs in a 5-year period were used to derive the PI reference values, which are neither inherent to the PI formulation nor literature-based. Clusters by sludge type (primary, activated, trickling filter and mixed sludge) and by digestion and dewatering processes were analysed, and the reference values for sludge production and dry solids were proposed for two clusters: activated sludge or biofilter WWTPs with primary sedimentation, sludge anaerobic digestion and centrifuge dewatering; and activated sludge WWTPs without primary sedimentation and anaerobic digestion and with centrifuge dewatering. The key PXs are computed for the DS after each processing unit, and the complementary PXs for the energy consumption and the operating conditions determining DS. The PX reference values are treatment specific and literature based. The PI and PX system was applied to a WWTP, and the results demonstrate that it diagnoses the situation and indicates opportunities and measures for improving the WWTP performance in sludge management. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Comparison of time-dependent changes in the surface hardness of different composite resins

    PubMed Central

    Ozcan, Suat; Yikilgan, Ihsan; Uctasli, Mine Betul; Bala, Oya; Kurklu, Zeliha Gonca Bek

    2013-01-01

    Objective: The aim of this study was to evaluate the change over time in the surface hardness of a silorane-based composite resin (Filtek Silorane) and to compare the results with the surface hardness of two methacrylate-based resins (Filtek Supreme and Majesty Posterior). Materials and Methods: From each composite material, 18 wheel-shaped samples (5-mm diameter and 2-mm depth) were prepared. The top and bottom surface hardness of these samples was measured using a Vickers hardness tester. The samples were then stored at 37°C and 100% humidity. After 24 h and 7, 30 and 90 days, the top and bottom surface hardness of the samples was measured. In each measurement, the ratio between the hardness of the top and bottom surfaces was recorded as the hardness ratio. Statistical analysis was performed by one-way analysis of variance, multiple comparisons by Tukey's test and binary comparisons by t-test with a significance level of P = 0.05. Results: The highest hardness values were obtained from both surfaces of Majesty Posterior and the lowest from Filtek Silorane. Both the top and bottom surface hardness of the methacrylate-based composite resins was high, and there was a statistically significant difference between the top and bottom hardness values only for the silorane-based composite, Filtek Silorane (P < 0.05). The hardness values of all test groups increased after 24 h (P < 0.05). Conclusion: Although the silorane-based composite resin Filtek Silorane showed an adequate hardness ratio, the use of an incremental technique during application is more important for it than for methacrylate-based composites. PMID:24966724

  6. The Relationship Between Hospital Value-Based Purchasing Program Scores and Hospital Bond Ratings.

    PubMed

    Rangnekar, Anooja; Johnson, Tricia; Garman, Andrew; O'Neil, Patricia

    2015-01-01

    Tax-exempt hospitals and health systems often borrow long-term debt to fund capital investments. Lenders use bond ratings as a standard metric to assess whether to lend funds to a hospital. Credit rating agencies have historically relied on financial performance measures and a hospital's ability to service debt obligations to determine bond ratings. With the growth in pay-for-performance-based reimbursement models, rating agencies are expanding their hospital bond rating criteria to include hospital utilization and value-based purchasing (VBP) measures. In this study, we evaluated the relationship between the Hospital VBP domains--Clinical Process of Care, Patient Experience of Care, Outcome, and Medicare Spending per Beneficiary (MSPB)--and hospital bond ratings. Given the historical focus on financial performance, we hypothesized that hospital bond ratings are not associated with any of the Hospital VBP domains. This was a retrospective, cross-sectional study of all hospitals that were rated by Moody's for fiscal year 2012 and participated in the Centers for Medicare & Medicaid Services' VBP program as of January 2014 (N = 285). Of the 285 hospitals in the study, 15% had been assigned a bond rating of Aa, and 46% had been assigned an A rating. Using a binary logistic regression model, we found an association between bond ratings and MSPB only, after controlling for other VBP and financial performance scores; however, MSPB did not improve the overall predictive accuracy of the model. Inclusion of VBP scores in the methodology used to determine hospital bond ratings is likely to affect hospital bond ratings in the near term.

  7. Commissioning and quality assurance for the treatment delivery components of the AccuBoost system

    PubMed Central

    Talmadge, Mike; Ladd, Ron; Halvorsen, Per

    2015-01-01

    The objective for this work was to develop a commissioning methodology for the treatment delivery components of the AccuBoost system, as well as to establish a routine quality assurance program and appropriate guidance for clinical use based on the commissioning results. Various tests were developed: 1) assessment of the accuracy of the displayed separation value; 2) validation of the dwell positions within each applicator; 3) assessment of the accuracy and precision of the applicator localization system; 4) assessment of the combined dose profile of two opposed applicators to confirm that they are coaxial; 5) measurement of the absolute dose delivered with each applicator to confirm acceptable agreement with dose based on Monte Carlo modeling; 6) measurements of the skin‐to‐center dose ratio using optically stimulated luminescence dosimeters; and 7) assessment of the mammopad cushion's effect on the center dose. We found that the difference between the measured and the actual paddle separation is <0.1 cm for the separation range of 3 cm to 7.5 cm. Radiochromic film measurements demonstrated that the number of dwell positions inside the applicators agree with the values from the vendor, for each applicator type and size. The shift needed for a good applicator‐grid alignment was within 0.2 cm. The dry‐run test using film demonstrated that the shift of the dosimetric center is within 0.15 cm. Dose measurements in water converted to polystyrene agreed within 5.0% with the Monte Carlo data in polystyrene for the same applicator type, size, and depth. A solid water‐to‐water (phantom) factor was obtained for each applicator, and all future annual quality assurance tests will be performed in solid water using an average value of 1.07 for the solid water‐to‐water factor. The skin‐to‐center dose ratio measurements support the Monte Carlo‐based values within 5.0% agreement. 
For the treatment separation range of 4 cm to 8 cm, the change in center dose would be <1.0% for all applicators when using a compressed pad of 0.2 cm to 0.3 cm. The tests performed ensured that all treatment components of the AccuBoost system are functional and that a treatment plan can be delivered with acceptable accuracy. Based on the commissioning results, a quality assurance manual and guidance documents for clinical use were developed. PACS numbers: 87.55.Qr, 87.56.Da, 87.90.+y PMID:26103184

  8. Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung

    2017-07-01

    In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and the motion caused by the interaction between user and robot. However, direct measurement of both signals through force and motion sensors not only increases the complexity of the system but also reduces its affordability. As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and on signals measured by means of common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was first implemented by directly measuring the interaction force using accurate extra sensors, and the robot performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators has performance similar to that using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate implementation of interactive controllers for the UHP in robot-mediated rehabilitation training.

  9. Evaluation of analytical performance based on partial order methodology.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here, partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparisons of objects based on their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency tests are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods, or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the reference laboratory and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
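
    A minimal sketch of the ordinal comparison the abstract describes: each laboratory is reduced to an indicator profile (here, assumed to be |bias|, standard deviation and |skewness|, all "smaller is better"), and one lab precedes another in the partial order only if it is at least as good on every indicator. Names and profiles are invented:

    ```python
    def dominates(a, b):
        """Profile a is at least as good as b on every indicator (smaller is better)."""
        return all(x <= y for x, y in zip(a, b)) and a != b

    # Hypothetical lab profiles: (|mean - reference value|, standard deviation, |skewness|).
    labs = {
        "ref":  (0.0, 0.1, 0.0),
        "lab1": (0.2, 0.3, 0.1),
        "lab2": (0.1, 0.5, 0.4),
        "lab3": (0.4, 0.6, 0.5),
    }

    # The partial order is the set of comparable (better, worse) pairs; labs whose
    # profiles trade off against each other (lab1 vs lab2) remain incomparable.
    order = {(a, b) for a in labs for b in labs if dominates(labs[a], labs[b])}
    print(("ref", "lab1") in order, ("lab1", "lab2") in order, ("lab1", "lab3") in order)
    ```

    Incomparable labs are exactly the ones a single linear score would force into an arbitrary ranking, which is the point of the partial order approach.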

  10. Reasoning in the Capacity to Make Medical Decisions: The Consideration of Values

    PubMed Central

    Karel, Michele J.; Gurrera, Ronald J.; Hicken, Bret; Moye, Jennifer

    2010-01-01

    Purpose To examine the contribution of “values-based reasoning” in evaluating older adults’ capacity to make medical decisions. Design and Methods Older men with schizophrenia (n=20) or dementia (n=20), and a primary care comparison group (n=19), completed cognitive and psychiatric screening and an interview to determine their capacity to make medical decisions, which included a component on values. All of the participants were receiving treatment at Veterans Administration (VA) outpatient clinics. Results Participants varied widely in the activities and relationships they most valued, the extent to which religious beliefs would influence healthcare decisions, and in ratings of the importance of preserving quality versus length of life. Most participants preferred shared decision making with doctor, family, or both. Individuals with schizophrenia or dementia performed worse than a primary care comparison group in reasoning measured by the ability to list risks and benefits and compare choices. Individuals with dementia performed comparably to the primary care group in reasoning measured by the ability to justify choices in terms of valued abilities or activities, whereas individuals with schizophrenia performed relatively worse compared to the other two groups. Compared to primary care patients, participants with schizophrenia and with dementia were impaired on the ability to explain treatment choices in terms of valued relationships. Conclusion Medical decision making may be influenced by strongly held values and beliefs, emotions, and long life experience. To date, these issues have not been explicitly included in structured evaluations of medical decision-making capacity. This study demonstrated that it is possible to inquire of and elicit a range of healthcare related values and preferences from older adults with dementia or schizophrenia, and individuals with mild to moderate dementia may be able to discuss healthcare options in relation to their values. 
However, how best to incorporate a values assessment into a structured capacity evaluation deserves further research attention. PMID:20465077

  11. Analysis of genetic association using hierarchical clustering and cluster validation indices.

    PubMed

    Pagnuco, Inti A; Pastore, Juan I; Abras, Guillermo; Brun, Marcel; Ballarin, Virginia L

    2017-10-01

    It is usually assumed that co-expressed genes suggest co-regulation in the underlying regulatory network. Determining sets of co-expressed genes, based on some criterion of similarity, is an important task. This task is usually performed by clustering algorithms, where the genes are clustered into meaningful groups based on their expression values in a set of experiments. In this work, we propose a method to find sets of co-expressed genes based on cluster validation indices as a measure of similarity for individual gene groups, and a combination of variants of hierarchical clustering to generate the candidate groups. We evaluated its ability to retrieve significant sets on simulated correlated data and on real genomics data, where performance is measured by its ability to detect co-regulated sets relative to a full search. Additionally, we analyzed the quality of the best-ranked groups using an online bioinformatics tool that provides network information for the selected genes. Copyright © 2017 Elsevier Inc. All rights reserved.
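
    One way to score a candidate co-expressed group, roughly in the spirit of the cluster validation indices mentioned above, is a silhouette-style index over correlation distance. The genes, expression values and the specific index below are illustrative assumptions, not the paper's exact method:

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation between two expression profiles."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def dist(x, y):
        """Correlation distance: near 0 for perfectly co-expressed genes."""
        return 1 - pearson(x, y)

    def silhouette(group, rest, expr):
        """Mean silhouette width of one candidate group against the other genes."""
        scores = []
        for g in group:
            a = sum(dist(expr[g], expr[h]) for h in group if h != g) / (len(group) - 1)
            b = min(dist(expr[g], expr[h]) for h in rest)
            scores.append((b - a) / max(a, b))
        return sum(scores) / len(scores)

    # Hypothetical expression values over four experiments; g1/g2 co-expressed, g3 not.
    expr = {
        "g1": [1.0, 2.0, 3.0, 4.0],
        "g2": [1.1, 2.1, 2.9, 4.2],
        "g3": [4.0, 1.0, 3.5, 0.5],
    }
    print(silhouette(["g1", "g2"], ["g3"], expr) > 0.5)
    ```

    Candidate groups produced by the hierarchical clustering variants would then be ranked by such an index, and the top-ranked groups retained.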

  12. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of soil’s resistance to electrical flow. For a particular site, usually only limited N-value data are available; in contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity value. Yet no existing method is able to interpret resistivity data to estimate N value. Thus, the aim is to develop a method for estimating N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination (R2) and the mean absolute error (MAE). Analysis of the results found that this method can estimate N value (best R2 = 0.85 and best MAE = 0.54) given that the constraint, Δ {\bar{l}}ref, is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
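
    The two performance metrics used above can be sketched directly; the borehole values below are invented for illustration, not taken from the study:

    ```python
    def r_squared(y_true, y_pred):
        """Coefficient of determination of the predictions."""
        mean = sum(y_true) / len(y_true)
        ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
        ss_tot = sum((t - mean) ** 2 for t in y_true)
        return 1 - ss_res / ss_tot

    def mae(y_true, y_pred):
        """Mean absolute error of the predictions."""
        return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

    # Hypothetical SPT N values measured at a borehole vs. values predicted
    # from resistivity by a fitted model.
    measured = [5, 10, 15, 22, 30]
    predicted = [6, 9, 16, 21, 31]
    print(round(r_squared(measured, predicted), 3), round(mae(measured, predicted), 2))
    ```

    R2 near 1 and a small MAE together indicate that the predicted N values track the measured ones closely.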

  13. Quantifying N2O reduction to N2 based on N2O isotopocules - validation with independent methods (helium incubation and 15N gas flux method)

    NASA Astrophysics Data System (ADS)

    Lewicka-Szczebak, Dominika; Augustin, Jürgen; Giesemann, Anette; Well, Reinhard

    2017-02-01

    Stable isotopic analyses of soil-emitted N2O (δ15Nbulk, δ18O and δ15Nsp = 15N site preference within the linear N2O molecule) may help to quantify N2O reduction to N2, an important but rarely quantified process in the soil nitrogen cycle. The N2O residual fraction (remaining unreduced N2O, rN2O) can be theoretically calculated from the measured isotopic enrichment of the residual N2O. However, various N2O-producing pathways may also influence the N2O isotopic signatures, and hence complicate the application of this isotopic fractionation approach. Here this approach was tested based on laboratory soil incubations with two different soil types, applying two reference methods for quantification of rN2O: helium incubation with direct measurement of N2 flux and the 15N gas flux method. This allowed a comparison of the measured rN2O values with the ones calculated based on isotopic enrichment of residual N2O. The results indicate that the performance of the N2O isotopic fractionation approach is related to the accompanying N2O and N2 source processes and the most critical is the determination of the initial isotopic signature of N2O before reduction (δ0). We show that δ0 can be well determined experimentally if stable in time and then successfully applied for determination of rN2O based on δ15Nsp values. Much more problematic to deal with are temporal changes of δ0 values leading to failure of the approach based on δ15Nsp values only. For this case, we propose here a dual N2O isotopocule mapping approach, where calculations are based on the relation between δ18O and δ15Nsp values. This allows for the simultaneous estimation of the N2O-producing pathways' contribution and the rN2O value.
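
    The calculation of the residual fraction from isotopic enrichment commonly assumes closed-system Rayleigh fractionation; a hedged sketch of that relation (the paper may use a variant), with δr the measured signature of the residual N2O, δ0 the initial signature before reduction, and ηred the net isotope effect of N2O reduction:

    ```latex
    % Closed-system Rayleigh approximation (an assumption, not necessarily the
    % paper's exact formulation): the residual N2O is progressively enriched.
    \delta_{\mathrm{r}} \approx \delta_{0} + \eta_{\mathrm{red}} \,\ln r_{\mathrm{N_2O}}
    \quad\Longrightarrow\quad
    r_{\mathrm{N_2O}} \approx \exp\!\left(\frac{\delta_{\mathrm{r}} - \delta_{0}}{\eta_{\mathrm{red}}}\right)
    ```

    This makes explicit why a stable, well-determined δ0 is critical: any temporal drift in δ0 propagates directly into the estimated residual fraction.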

  14. Spectroscopic investigation, HOMO-LUMO and NLO studies on L-histidinium maleate based on DFT approach

    NASA Astrophysics Data System (ADS)

    Dhanavel, S.; Stephen, A.; Asirvatham, P. Samuel

    2017-05-01

    The molecular structure of the title compound L-Histidinium Maleate (LHM) was constructed and optimized based on the density functional theory method (DFT-B3LYP) with the 6-31G(d,p) basis set. The fundamental vibrational spectral assignment was analyzed with the aid of the optimized structure of LHM. The study of electronic properties, such as HOMO-LUMO energies and absorption wavelength, was performed using the time-dependent DFT (TD-DFT) approach, which reveals that energy transfer occurs within the molecule. 13C NMR chemical shift values were calculated using the gauge-independent atomic orbital (GIAO) method, and the obtained values are in good agreement with the reported experimental values. Hardness, ionization potential and electrophilicity index were also calculated. The electric dipole moment (μtot) and hyperpolarizability (βtot) values of the investigated molecule were computed. The calculated value (β) was 3.7 times higher than that of urea, which confirms that the LHM molecule is a potential candidate for NLO applications.

  15. Combining Time-Driven Activity-Based Costing with Clinical Outcome in Cost-Effectiveness Analysis to Measure Value in Treatment of Depression

    PubMed Central

    Lindefors, Nils

    2016-01-01

    Background A major challenge of mental health care is to provide safe and effective treatment with limited resources. The main purpose of this study was to examine a value-based approach in clinical psychiatry when evaluating a process improvement initiative. This was accomplished by using the relatively new time-driven activity-based costing (TDABC) method within the more widely adopted cost-effectiveness analysis framework for economic evaluation of healthcare technologies. The objective was to evaluate the cost-effectiveness of allowing psychologists to perform post-treatment assessment previously performed by psychiatrists at an outpatient clinic treating depression using internet-based cognitive-behavioral therapy (ICBT). Methods Data were collected from 568 adult patients treated with ICBT for depression during 2013–2014. The TDABC methodology was used to estimate total healthcare costs, including development of process maps for the complete cycle of care and estimation of resource use and minute costs of staff, hospital space and materials based on their relative proportions used. Clinical outcomes were measured using the Patient Health Questionnaire depression scale (PHQ-9) before and after treatment and at 6-month follow-up. Cost-effectiveness analysis (CEA) was performed and the results presented as incremental net benefits (INB), cost-effectiveness acceptability curves (CEACs) and confidence ellipses to demonstrate uncertainty around the value of the organizational intervention. Outcomes Taking into account the complete healthcare process (from referral to follow-up assessment), treatment costs decreased from $709 (SD = $130) per patient in 2013 to $659 (SD = $134) in 2014 while treatment effectiveness was maintained; 27% had achieved full remission from depression after treatment (PHQ-9 < 5) during both 2013 and 2014, and an additional 35% and 33% had achieved partial remission in 2013 and 2014, respectively. 
At follow-up, 42% were in full remission after treatment during both 2013 and 2014; an additional 35% and 33% were in partial remission during 2013 and 2014, respectively. Confidence ellipses occupied the south-east (SE) and south-west (SW) quadrants of the incremental cost-effectiveness plane at both post-treatment and at follow-up, indicating that the ICBT treatment was less costly and equally effective after staff reallocation. Conclusion Treating patients to the target of full remission using psychologists instead of medical specialists for post-treatment assessment is cost-saving and consequently a more valuable use of limited resources. TDABC may be a useful tool for measuring resource costs, identifying quality improvement opportunities and evaluating the consequences of such initiatives. Combining TDABC with clinical outcome measures in CEA is potentially a useful approach in mental healthcare to estimate the value of process improvement initiatives. PMID:27798655

  16. Combining Time-Driven Activity-Based Costing with Clinical Outcome in Cost-Effectiveness Analysis to Measure Value in Treatment of Depression.

    PubMed

    El Alaoui, Samir; Lindefors, Nils

    2016-01-01

    A major challenge of mental health care is to provide safe and effective treatment with limited resources. The main purpose of this study was to examine a value-based approach in clinical psychiatry when evaluating a process improvement initiative. This was accomplished by using the relatively new time-driven activity-based costing (TDABC) method within the more widely adopted cost-effectiveness analysis framework for economic evaluation of healthcare technologies. The objective was to evaluate the cost-effectiveness of allowing psychologists to perform post-treatment assessment previously performed by psychiatrists at an outpatient clinic treating depression using internet-based cognitive-behavioral therapy (ICBT). Data were collected from 568 adult patients treated with ICBT for depression during 2013-2014. The TDABC methodology was used to estimate total healthcare costs, including development of process maps for the complete cycle of care and estimation of resource use and minute costs of staff, hospital space and materials based on their relative proportions used. Clinical outcomes were measured using the Patient Health Questionnaire depression scale (PHQ-9) before and after treatment and at 6-month follow-up. Cost-effectiveness analysis (CEA) was performed and the results presented as incremental net benefits (INB), cost-effectiveness acceptability curves (CEACs) and confidence ellipses to demonstrate uncertainty around the value of the organizational intervention. Taking into account the complete healthcare process (from referral to follow-up assessment), treatment costs decreased from $709 (SD = $130) per patient in 2013 to $659 (SD = $134) in 2014 while treatment effectiveness was maintained; 27% had achieved full remission from depression after treatment (PHQ-9 < 5) during both 2013 and 2014, and an additional 35% and 33% had achieved partial remission in 2013 and 2014, respectively. 
At follow-up, 42% were in full remission after treatment during both 2013 and 2014; an additional 35% and 33% were in partial remission during 2013 and 2014, respectively. Confidence ellipses occupied the south-east (SE) and south-west (SW) quadrants of the incremental cost-effectiveness plane at both post-treatment and at follow-up, indicating that the ICBT treatment was less costly and equally effective after staff reallocation. Treating patients to the target of full remission using psychologists instead of medical specialists for post-treatment assessment is cost-saving and consequently a more valuable use of limited resources. TDABC may be a useful tool for measuring resource costs, identifying quality improvement opportunities and evaluating the consequences of such initiatives. Combining TDABC with clinical outcome measures in CEA is potentially a useful approach in mental healthcare to estimate the value of process improvement initiatives.

  17. Statistical analysis of electromagnetic radiation measurements in the vicinity of GSM/UMTS base station installed on buildings in Serbia.

    PubMed

    Koprivica, Mladen; Slavkovic, Vladimir; Neskovic, Natasa; Neskovic, Aleksandar

    2016-03-01

    As a result of the dense deployment of public mobile base stations, additional electromagnetic (EM) radiation occurs in the modern human environment. At the same time, public concern about exposure to EM radiation emitted by such sources has increased. In order to determine the level of radio-frequency radiation generated by base stations, extensive EM field strength measurements were carried out at 664 base station locations, of which 276 refer to base stations with antenna systems installed on buildings. Given the large percentage (42 %) of locations with installations on buildings, as well as the inevitable presence of people in their vicinity, a detailed analysis of this location category was performed. Measurement results showed that the maximum recorded value of total electric field strength exceeded the International Commission on Non-Ionizing Radiation Protection general public exposure reference levels at 2.5 % of locations and the Serbian national reference levels at 15.6 % of locations. It should be emphasised that values exceeding the reference levels were observed only outdoors, while indoors the total electric field strength never exceeded the defined reference levels. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Design, construction and mechanical testing of digital 3D anatomical data-based PCL-HA bone tissue engineering scaffold.

    PubMed

    Yao, Qingqiang; Wei, Bo; Guo, Yang; Jin, Chengzhe; Du, Xiaotao; Yan, Chao; Yan, Junwei; Hu, Wenhao; Xu, Yan; Zhou, Zhi; Wang, Yijin; Wang, Liming

    2015-01-01

    The study aims to investigate techniques for the design and construction of a CT 3D reconstruction data-based polycaprolactone (PCL)-hydroxyapatite (HA) scaffold. Femoral and lumbar spinal specimens from eight male New Zealand white rabbits underwent CT and laser scanning, and the data were used for 3D printing of scaffolds from PCL-HA powder. Eight scaffolds were produced for each group. The CAD-based 3D-printed porous cylindrical stents comprised 16 pieces × 3 groups, including an orthogonal scaffold, a Pozi-hole scaffold and a triangular-hole scaffold. The gross forms, fiber scaffold diameters and porosities of the scaffolds were measured, and mechanical testing was performed on eight pieces of each of the three kinds of cylindrical scaffolds. The loading force, deformation, maximum affordable pressure and deformation value were recorded. The pore-connection rate of each scaffold was 100 % within each group, and there was no significant difference between the gross or micro-structural parameters of each scaffold and the design values (P > 0.05). There was no significant difference in the loading force, deformation or deformation value under the maximum affordable pressure of the three different cylinder scaffolds when the load was above 320 N. The combination of CT and CAD reverse technology could accomplish the design and manufacturing of complex bone tissue engineering scaffolds, with no significant difference in the impact of the microstructures on the physical properties of the different porous scaffolds under large load.

  19. Teaching hospital performance: towards a community of shared values?

    PubMed

    Mauro, Marianna; Cardamone, Emma; Cavallaro, Giusy; Minvielle, Etienne; Rania, Francesco; Sicotte, Claude; Trotta, Annarita

    2014-01-01

    This paper explores the performance dimensions of Italian teaching hospitals (THs) by considering the multiple constituent model approach, using measures that are subjective and based on individual ideals and preferences. Our research replicates a study of a French TH and deepens it by adjusting it to the context of an Italian TH. The purposes of this research were as follows: to identify emerging views on the performance of teaching hospitals and to analyze how these views vary among hospital stakeholders. We conducted an in-depth case study of a TH using a quantitative survey method. The survey uses a questionnaire based on Parsons' social system action theory, which embraces the major models of organizational performance and covers three groups of internal stakeholders: physicians, caregivers and administrative staff. The questionnaires were distributed between April and September 2011. The results confirm that hospital performance is multifaceted and includes the dimensions of efficiency, effectiveness and quality of care, as well as organizational and human features. There is a high degree of consensus among all observed stakeholder groups about these values, and a shared view of performance is emerging. Our research provides useful information for defining management priorities to improve the performance of THs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Novel monorail infusion catheter for volumetric coronary blood flow measurement in humans: in vitro validation.

    PubMed

    van 't Veer, Marcel; Adjedj, Julien; Wijnbergen, Inge; Tóth, Gabor G; Rutten, Marcel C M; Barbato, Emanuele; van Nunen, Lokien X; Pijls, Nico H J; De Bruyne, Bernard

    2016-08-20

    The aim of this study is to validate a novel monorail infusion catheter for thermodilution-based quantitative coronary flow measurements. Based on the principles of thermodilution, volumetric coronary flow can be determined from the flow rate of a continuous saline infusion, the temperature of saline when it enters the coronary artery, and the temperature of the blood mixed with the saline in the distal part of the coronary artery. In an in vitro set-up of the systemic and coronary circulation at body temperature, coronary flow values were varied from 50-300 ml/min in steps of 50 ml/min. At each coronary flow value, thermodilution-based measurements were performed at infusion rates of 15, 20, and 30 ml/min. Temperatures and pressures were simultaneously measured with a pressure/temperature sensor-tipped guidewire. Agreement of the calculated flow and the measured flow as well as repeatability were assessed. A total of five catheters were tested, with a total of 180 measurements. A strong correlation (ρ=0.976, p<0.0001) and a difference of -6.5±15.5 ml/min were found between measured and calculated flow. The difference between two repeated measures was 0.2%±8.0%. This novel infusion catheter used in combination with a pressure/temperature sensor-tipped guidewire allows accurate and repeatable absolute coronary flow measurements. This opens a window to a better understanding of the coronary microcirculation.
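    The thermodilution principle described above can be sketched numerically. One published form of the continuous-thermodilution equation estimates absolute flow as Q ≈ 1.08 · (Ti/T) · Qi, with both temperatures measured relative to blood temperature; the function name and the example numbers below are illustrative, not the paper's data.

```python
def coronary_flow(q_infusion, t_infusate, t_distal, k=1.08):
    """Absolute coronary blood flow (ml/min) by continuous thermodilution.

    q_infusion: saline infusion rate (ml/min).
    t_infusate, t_distal: infusate and distal mixed temperatures, both
    measured *relative to blood temperature* (negative, degrees C).
    k ~ 1.08 corrects for the different heat capacities of saline and blood.
    """
    return k * (t_infusate / t_distal) * q_infusion

# Illustrative case: 20 ml/min infusion, infusate 5.0 C and distal mixture
# 0.5 C below blood temperature -> 216 ml/min.
flow = coronary_flow(20.0, -5.0, -0.5)
```
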

  1. Added value in health care with six sigma.

    PubMed

    Lenaz, Maria P

    2004-06-01

    Six sigma is the structured application of the tools and techniques of quality management applied on a project basis that can enable organizations to achieve superior performance and strategic business results. The Greek character sigma has been used as a statistical term that measures how much a process varies from perfection, based on the number of defects per million units. Health care organizations using this model proceed from the lower levels of quality performance to the highest level, in which the process is nearly error free.
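    The "defects per million" framing maps to a sigma level through the standard normal quantile; a minimal sketch, assuming the conventional 1.5-sigma long-term shift used in six sigma practice:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, shift=1.5):
    """Convert a defect rate to a process sigma level, using the
    conventional 1.5-sigma long-term shift (so 3.4 defects per million
    corresponds to 'six sigma')."""
    p_good = 1.0 - defects / opportunities
    return NormalDist().inv_cdf(p_good) + shift
```

For example, 3.4 defects per million opportunities yields a sigma level of about 6.0, while roughly 66,800 defects per million corresponds to about 3.0.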

  2. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    PubMed

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
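    The root-mean-square systematic error used above is a simple aggregate over the inserts; a sketch of the computation, with illustrative HU values rather than the study's data:

```python
import math

def rms_error(measured_hu, true_hu):
    """Root-mean-square systematic error between measured and true CT
    numbers (Hounsfield units)."""
    n = len(measured_hu)
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured_hu, true_hu)) / n)

# Hypothetical virtual monochromatic CT numbers for two inserts.
rmse = rms_error([63.0, 4.0], [60.0, 0.0])
```
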

  3. Resting state signatures of domain and demand-specific working memory performance.

    PubMed

    van Dam, Wessel O; Decker, Scott L; Durbin, Jeffery S; Vendemia, Jennifer M C; Desai, Rutvik H

    2015-09-01

    Working memory (WM) is one of the key constructs in understanding higher-level cognition. We examined whether patterns of activity in the resting state of individual subjects are correlated with their off-line working and short-term memory capabilities. Participants completed a resting-state fMRI scan and off-line working and short-term memory (STM) tests with both verbal and visual materials. We calculated fractional amplitude of low frequency fluctuations (fALFF) from the resting state data, and also computed connectivity between seeds placed in frontal and parietal lobes. Correlating fALFF values with behavioral measures showed that the fALFF values in a widespread fronto-parietal network during rest were positively correlated with a combined memory measure. In addition, STM showed a significant correlation with fALFF within the right angular gyrus and left middle occipital gyrus, whereas WM was correlated with fALFF values within the right IPS and left dorsomedial cerebellar cortex. Furthermore, verbal and visuospatial memory capacities were associated with dissociable patterns of low-frequency fluctuations. Seed-based connectivity showed correlations with the verbal WM measure in the left hemisphere, and with the visual WM measure in the right hemisphere. These findings contribute to our understanding of how differences in spontaneous low-frequency fluctuations at rest are correlated with differences in cognitive performance. Copyright © 2015 Elsevier Inc. All rights reserved.
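    fALFF, as used above, is the fraction of spectral amplitude falling in the low-frequency band (commonly 0.01-0.08 Hz). A dependency-free sketch using a naive DFT; the band edges and the TR in the test signal are assumptions, and a real pipeline would use an FFT on preprocessed voxel time series:

```python
import cmath
import math

def falff(signal, tr, band=(0.01, 0.08)):
    """Fractional ALFF: spectral amplitude inside the low-frequency band
    divided by the amplitude summed over all positive frequencies.

    signal: list of samples; tr: repetition time in seconds.  A naive DFT
    keeps the sketch dependency-free (adequate only for short series)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]           # remove the DC component
    amps = []
    for k in range(1, n // 2 + 1):           # positive frequencies only
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        amps.append((k / (n * tr), abs(coef)))
    total = sum(a for _, a in amps)
    low = sum(a for f, a in amps if band[0] <= f <= band[1])
    return low / total
```

A pure 0.05 Hz oscillation sampled at TR = 2 s should place essentially all of its amplitude inside the band, giving fALFF near 1.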

  4. Experimental investigation and modelling of surface roughness and resultant cutting force in hard turning of AISI H13 Steel

    NASA Astrophysics Data System (ADS)

    Boy, M.; Yaşar, N.; Çiftçi, İ.

    2016-11-01

    In recent years, turning of hardened steels has replaced grinding for finishing operations. Compared to grinding, hard turning offers higher material removal rates, greater process flexibility, lower equipment costs, and shorter setup times. CBN or ceramic cutting tools are widely used in hard part machining. For the successful application of hard turning, selection of suitable cutting parameters for a given cutting tool is an important step. For this purpose, an experimental investigation was conducted to determine the effects of cutting tool edge geometry, feed rate, and cutting speed on surface roughness and resultant cutting force in hard turning of AISI H13 steel with ceramic cutting tools. Machining experiments were conducted on a CNC lathe based on a Taguchi (L16) experimental design with different levels of the cutting parameters. In the experiments, a Kistler 9257B piezoelectric dynamometer was used to measure the three cutting force components (Fc, Ff and Fr). Surface roughness measurements were performed using a Mahrsurf PS1 device. For statistical analysis, analysis of variance was performed, and mathematical models were developed for surface roughness and resultant cutting force. The analysis of variance showed that cutting edge geometry, cutting speed, and feed rate were the most significant factors for resultant cutting force, while cutting edge geometry and feed rate were the most significant factors for surface roughness. Regression analysis was applied to predict the outcomes of the experiment; the predicted and measured values were very close to each other. Afterwards, confirmation tests were performed to compare the predicted and measured results. According to the confirmation test results, the measured values fell within the 95% confidence interval.

  5. Towards metering tap water by Lorentz force velocimetry

    NASA Astrophysics Data System (ADS)

    Vasilyan, Suren; Ebert, Reschad; Weidner, Markus; Rivero, Michel; Halbedel, Bernd; Resagk, Christian; Fröhlich, Thomas

    2015-11-01

    In this paper, we present enhanced flow rate measurement by applying the contactless Lorentz Force Velocimetry (LFV) technique. Particularly, we show that the LFV is a feasible technique for metering the flow rate of salt water in a rectangular channel. The measurements of the Lorentz forces as a function of the flow rate are presented for different electrical conductivities of the salt water. The smallest value of conductivity is achieved at 0.06 S·m-1, which corresponds to the typical value of tap water. In comparison with previous results, the performance of LFV is improved by approximately 2 orders of magnitude by means of a high-precision differential force measurement setup. Furthermore, the sensitivity curve and the calibration factor of the flowmeter are provided based on extensive measurements for the flow velocities ranging from 0.2 to 2.5 m·s-1 and conductivities ranging from 0.06 to 10 S·m-1.
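    For a fixed magnet system, the Lorentz force scales to first order with the product of conductivity and velocity, F ≈ k·σ·v, so the calibration factor reported above can be obtained as a through-origin least-squares slope. The numbers in the example are illustrative, not the paper's measurements:

```python
def calibration_factor(sigma_v, forces):
    """Through-origin least-squares slope k for the model F = k * (sigma*v).

    sigma_v: products of fluid conductivity (S/m) and velocity (m/s);
    forces:  measured Lorentz forces (N)."""
    num = sum(x * f for x, f in zip(sigma_v, forces))
    den = sum(x * x for x in sigma_v)
    return num / den

# Hypothetical points spanning the paper's conductivity/velocity ranges.
xs = [0.06 * 0.2, 0.06 * 2.5, 10.0 * 1.0]
k = calibration_factor(xs, [2e-6 * x for x in xs])
```
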

  6. 1.9 K Heat Inleak and Resistive Heating Measurements on Lhc Cryomagnets

    NASA Astrophysics Data System (ADS)

    Ferlin, G.; Claudet, S.; Tavian, L.; Wagner, U.

    2010-04-01

    The superconducting magnets of the Large Hadron Collider (LHC), distributed over eight 3.3-km-long sectors, are cooled at 1.9 K in pressurized superfluid helium. During the commissioning campaign of the sectors in 2008, cold standby periods at nominal operating temperature allowed measurement of the overall static heat inleaks reaching the magnet cold masses at 1.9 K by enthalpy balance in steady-state operation. In addition, during electrical powering of the different magnet circuits, helium II calorimetry based on precision thermometry was implemented to assess, with an accuracy of 100 mW/m, the additional heat loads due to resistive heating and to detect possible abnormal heat dissipation during powering. This paper describes the method applied to perform these measurements, compares the results with the expected specified values, and discusses the impact of the measured values on cryo-plant tuning and operational margins.
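    The steady-state enthalpy balance behind the static heat-inleak measurement reduces to Q = ṁ·(h_out − h_in), here normalized per metre of sector; a minimal sketch with illustrative numbers, not LHC values:

```python
def static_heat_inleak(m_dot, h_in, h_out, length):
    """Distributed static heat load (W/m) from a steady-state enthalpy
    balance: Q = m_dot * (h_out - h_in), divided by the sector length.

    m_dot in kg/s, enthalpies in J/kg, length in m."""
    return m_dot * (h_out - h_in) / length

# Illustrative: 10 g/s of helium gaining 1 kJ/kg over a 100 m string.
q = static_heat_inleak(0.01, 0.0, 1000.0, 100.0)
```
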

  7. A comparison study of different facial soft tissue analysis methods.

    PubMed

    Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun

    2014-07-01

    The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: direct anthropometry, digitizer, 3D CT, 3D scanner, and the DI3D system. With these methods, 10 measurement values representing facial width, height, and depth were determined twice, with a one-week interval, by one examiner. These data were analyzed with the SPSS program. The positions generated by multidimensional scaling showed that direct anthropometry, 3D CT, the digitizer, and the 3D scanner produced relatively similar values, while the DI3D system showed slightly different values. All 5 methods demonstrated good accuracy, with a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained using the DI3D system was statistically significantly different from that obtained using the digital caliper, digitizer, and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of the measurement methods, two measurement values (Ls-Li, G-Pg) obtained using direct anthropometry, one measurement value (N'-Prn) obtained using the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained using the DI3D system were statistically significantly different. However, the mean measurement error in every method was low (<0.7 mm). None of the measurement values obtained using the 3D CT and 3D scanner showed a statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  8. Determination of the Spatial Distribution in Hydraulic Conductivity Using Genetic Algorithm Optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Lee, J. H.; Kitanidis, P. K.

    2016-12-01

    Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface, as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet there is a need to improve prediction and evaluation methods in order to extract information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation design, was used to determine the spatial distribution of K. A hypothetical 2 km × 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform-based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first, observed K values were evaluated using the random field generator alone as the forward model. In the second case, measured head values were incorporated into the evaluation in addition to the ERT-derived K values, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE were used in combination to predict the spatial distribution of K.
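    A toy version of the optimization loop: a genetic algorithm evolving a candidate K vector to minimize the total squared error against observations, standing in for the PGAPack/BIO2D-KE setup. Population size, mutation scale, and the target vector are all illustrative assumptions:

```python
import random

def genetic_fit(measured, pop_size=40, gens=100, seed=1):
    """Toy genetic algorithm: evolve a vector of K values to minimise the
    total squared error against 'measured' (a stand-in for the observed
    field values in the inversion described above)."""
    rng = random.Random(seed)
    n = len(measured)

    def err(ind):
        return sum((a - b) ** 2 for a, b in zip(ind, measured))

    # Random initial population of candidate K vectors.
    pop = [[rng.uniform(0.0, 10.0) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=err)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[rng.randrange(n)] += rng.gauss(0.0, 0.3)  # mutate one gene
            children.append(child)
        pop = parents + children
    best = min(pop, key=err)
    return best, err(best)
```

In the real study the fitness evaluation runs the forward models (flow, transport, random field generation) rather than comparing vectors directly.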

  9. Galileo: The Added Value for Integrity in Harsh Environments.

    PubMed

    Borio, Daniele; Gioia, Ciro

    2016-01-16

    Global navigation satellite system (GNSS)-based navigation is a challenging task in signal-degraded environments, where GNSS signals are distorted by multipath and attenuated by fading effects: the navigation solution may be inaccurate or unavailable. A possible approach to improving accuracy and availability is the joint use of measurements from different GNSSs and quality check algorithms; this approach is investigated here using live GPS and Galileo signals. A modified receiver autonomous integrity monitoring (RAIM) algorithm, including geometry and separability checks, is proposed to detect and exclude erroneous measurements: the multi-constellation approach provides redundant measurements, and RAIM exploits them to exclude distorted observations. The synergy between combined GPS/Galileo navigation and RAIM is analyzed using live data; the performance is compared to the accuracy and availability of a GPS-only solution. The tests performed demonstrate that the methods developed are effective techniques for GNSS-based navigation in signal-degraded environments. The joint use of the multi-constellation approach and of modified RAIM algorithms improves the performance of the navigation system in terms of both accuracy and availability.
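    The fault-detection-and-exclusion step can be sketched as a global chi-square test on the least-squares residuals, followed by flagging the largest normalized residual for exclusion. This is a generic sketch, not the paper's modified RAIM; the threshold is supplied by the caller and would come from a chi-square quantile for the chosen false-alarm probability:

```python
def raim_check(residuals, sigma, threshold):
    """Global consistency test on least-squares measurement residuals.

    Returns (test statistic, index to exclude or None).  'threshold' is a
    chi-square quantile chosen for the desired false-alarm probability
    (the value used in the example below is illustrative)."""
    stat = sum((r / sigma) ** 2 for r in residuals)
    if stat <= threshold:
        return stat, None                # measurement set looks consistent
    # Test failed: flag the measurement with the largest normalized residual.
    worst = max(range(len(residuals)), key=lambda i: abs(residuals[i]))
    return stat, worst
```

In a full implementation the residuals come from a weighted least-squares navigation solution, and the test is re-run after each exclusion until the remaining set passes.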

  10. Galileo: The Added Value for Integrity in Harsh Environments

    PubMed Central

    Borio, Daniele; Gioia, Ciro

    2016-01-01

    Global navigation satellite system (GNSS)-based navigation is a challenging task in signal-degraded environments, where GNSS signals are distorted by multipath and attenuated by fading effects: the navigation solution may be inaccurate or unavailable. A possible approach to improving accuracy and availability is the joint use of measurements from different GNSSs and quality check algorithms; this approach is investigated here using live GPS and Galileo signals. A modified receiver autonomous integrity monitoring (RAIM) algorithm, including geometry and separability checks, is proposed to detect and exclude erroneous measurements: the multi-constellation approach provides redundant measurements, and RAIM exploits them to exclude distorted observations. The synergy between combined GPS/Galileo navigation and RAIM is analyzed using live data; the performance is compared to the accuracy and availability of a GPS-only solution. The tests performed demonstrate that the methods developed are effective techniques for GNSS-based navigation in signal-degraded environments. The joint use of the multi-constellation approach and of modified RAIM algorithms improves the performance of the navigation system in terms of both accuracy and availability. PMID:26784205

  11. Accuracy of Cirrus HD-OCT and Topcon SP-3000P for measuring central corneal thickness.

    PubMed

    Calvo-Sanz, Jorge A; Ruiz-Alcocer, Javier; Sánchez-Tena, Miguel A

    2017-02-18

    To compare and analyze the interchangeability of three measuring systems, each based on a different technique, for central corneal thickness (CCT) analysis. CCT was measured using optical coherence tomography (OCT), non-contact specular microscopy (NCSM), and ultrasonic pachymetry (USP) in 60 eyes of 60 healthy patients with a mean age of 66.5±15.0 years and a mean spherical equivalent of 0.43±1.14 D. Analysis of variations in measurement concordance and correlation among the three different methods was performed. Comparison of CCT measurements was done using Bland-Altman plots (with bias and 95% confidence intervals), the intraclass correlation coefficient (ICC), and paired Student's t-test analysis. Mean CCT values were: 549.20±26.91μm for USP (range 503-618μm), 514.20±27.49μm for NCSM (range 456-586μm) and 542.80±25.56μm for OCT (range 486-605μm). CCT values obtained with NCSM were significantly lower than those obtained with the OCT and USP methods. The NCSM CCT value was 36.08±10.72μm lower than the USP value (p<0.05), and the NCSM CCT value was 7.88±8.86μm lower than the OCT value (p<0.05). The ICC for the USP-NCSM pair was 0.488 and 0.909 for the USP-OCT pair. OCT and USP offered highly comparable results, whereas NCSM offered lower mean CCT values compared to the other two methods. Therefore, NCSM should not be considered a reliable method for measuring CCT and should rather be considered for assessing longitudinal changes in the same patient. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
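    The Bland-Altman comparison used above reduces to the mean difference (bias) and bias ± 1.96·SD limits of agreement; a minimal sketch with illustrative pachymetry values, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement
    methods (differences taken as a - b)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)                    # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical CCT readings (micrometres) from two devices on four eyes.
bias, (lo, hi) = bland_altman([550.0, 540.0, 560.0, 530.0],
                              [545.0, 538.0, 555.0, 528.0])
```
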

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tommasi, J.; Ruggieri, J. M.; Lebrat, J. F.

    The latest release (2.1) of the ERANOS code system, using JEF-2.2, JEFF-3.1 and ENDF/B-VI r8 multigroup cross-section libraries is currently being validated on fast reactor critical experiments at CEA-Cadarache (France). This paper briefly presents the library effect studies and the detailed best-estimate validation studies performed up to now as part of the validation process. The library effect studies are performed over a wide range of experimental configurations, using simple model and method options. They yield global trends about the shift from JEF-2.2 to JEFF-3.1 cross-section libraries, that can be related to individual sensitivities and cross-section changes. The more detailed, best-estimate calculations have been performed up to now over three experimental configurations carried out in the MASURCA critical facility at CEA-Cadarache: two cores with a softened spectrum due to large amounts of graphite (MAS1A' and MAS1B), and a core representative of sodium-cooled fast reactors (CIRANO ZONA2A). Calculated values have been compared to measurements, and discrepancies analyzed in detail using perturbation theory. Values calculated with JEFF-3.1 were found to be within 3 standard deviations of the measured values, and at least of the same quality as the JEF-2.2 based results. (authors)

  13. Nurse Value-Added and Patient Outcomes in Acute Care

    PubMed Central

    Yakusheva, Olga; Lindrooth, Richard; Weiss, Marianne

    2014-01-01

    Objective The aims of the study were to (1) estimate the relative nurse effectiveness, or individual nurse value-added (NVA), to patients’ clinical condition change during hospitalization; (2) examine nurse characteristics contributing to NVA; and (3) estimate the contribution of value-added nursing care to patient outcomes. Data Sources/Study Setting Electronic data on 1,203 staff nurses matched with 7,318 adult medical–surgical patients discharged between July 1, 2011 and December 31, 2011 from an urban Magnet-designated, 854-bed teaching hospital. Study Design Retrospective observational longitudinal analysis using a covariate-adjustment value-added model with nurse fixed effects. Data Collection/Extraction Methods Data were extracted from the study hospital's electronic patient records and human resources databases. Principal Findings Nurse effects were jointly significant and explained 7.9 percent of variance in patient clinical condition change during hospitalization. NVA was positively associated with having a baccalaureate degree or higher (0.55, p = .04) and expertise level (0.66, p = .03). NVA contributed to patient outcomes of shorter length of stay and lower costs. Conclusions Nurses differ in their value-added to patient outcomes. The ability to measure individual nurse relative value-added opens the possibility for development of performance metrics, performance-based rankings, and merit-based salary schemes to improve patient outcomes and reduce costs. PMID:25256089

  14. Memory capacity, selective control, and value-directed remembering in children with and without attention-deficit/hyperactivity disorder (ADHD).

    PubMed

    Castel, Alan D; Lee, Steve S; Humphreys, Kathryn L; Moore, Amy N

    2011-01-01

    The ability to select what is important to remember, to attend to this information, and to recall high-value items leads to the efficient use of memory. The present study examined how children with and without attention-deficit/hyperactivity disorder (ADHD) performed on an incentive-based selectivity task in which to-be-remembered items were worth different point values. Participants were 6-9 year old children with ADHD (n = 57) and without ADHD (n = 59). Using a selectivity task, participants studied words paired with point values and were asked to maximize their score, which was the overall value of the items they recalled. This task allows for measures of memory capacity and the ability to selectively remember high-value items. Although there were no significant between-groups differences in the number of words recalled (memory capacity), children with ADHD were less selective than children in the control group in terms of the value of the items they recalled (control of memory). All children recalled more high-value items than low-value items and showed some learning with task experience, but children with ADHD Combined type did not efficiently maximize memory performance (as measured by a selectivity index) relative to children with ADHD Inattentive type and healthy controls, who did not differ significantly from one another. Children with ADHD Combined type exhibit impairments in the strategic and efficient encoding and recall of high-value items. The findings have implications for theories of memory dysfunction in childhood ADHD and the key role of metacognition, cognitive control, and value-directed remembering when considering the strategic use of memory. (c) 2010 APA, all rights reserved
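    The selectivity index referenced above is commonly computed as (actual − chance) / (ideal − chance), where chance is the expected score if the same number of items were recalled at random; a sketch under that assumption, with illustrative point values:

```python
def selectivity_index(recalled_values, all_values):
    """Selectivity index: (actual - chance) / (ideal - chance).

    'chance' is the expected score if the same number of items were
    recalled at random; 'ideal' is the score from recalling only the
    highest-valued items.  Ranges from -1 to 1 (1 = perfectly selective)."""
    n = len(recalled_values)
    actual = sum(recalled_values)
    ideal = sum(sorted(all_values, reverse=True)[:n])
    chance = sum(all_values) * n / len(all_values)
    return (actual - chance) / (ideal - chance)

# Twelve studied items valued 1..12; recalling the four highest scores 1.0.
values = list(range(1, 13))
si = selectivity_index([12, 11, 10, 9], values)
```
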

  15. Financial Incentives and Physician Practice Participation in Medicare's Value-Based Reforms.

    PubMed

    Markovitz, Adam A; Ramsay, Patricia P; Shortell, Stephen M; Ryan, Andrew M

    2017-07-26

    To evaluate whether greater experience and success with performance incentives among physician practices are related to increased participation in Medicare's voluntary value-based payment reforms. Publicly available data from Medicare's Physician Compare (n = 1,278; January 2012 to November 2013) and nationally representative physician practice data from the National Survey of Physician Organizations 3 (NSPO3; n = 907,538; 2013). We used regression analysis to examine practice-level relationships between prior exposure to performance incentives and participation in key Medicare value-based payment reforms: accountable care organization (ACO) programs, the Physician Quality Reporting System ("Physician Compare"), and the Meaningful Use of Health Information Technology program ("Meaningful Use"). Prior experience and success with financial incentives were measured as (1) the percentage of practices' revenue from financial incentives for quality or efficiency; and (2) practices' exposure to public reporting of quality measures. We linked physician participation data from Medicare's Physician Compare to the NSPO3 survey. There was wide variation in practices' exposure to performance incentives, with 64 percent exposed to financial incentives, 45 percent exposed to public reporting, and 2.2 percent of practice revenue coming from financial incentives. For each percentage-point increase in financial incentives, there was a 0.9 percentage-point increase in the probability of participating in ACOs (standard error [SE], 0.1, p < .001) and a 0.8 percentage-point increase in the probability of participating in Meaningful Use (SE, 0.1, p < .001), controlling for practice characteristics. Financial incentives were not associated with participation in Physician Compare. 
Among ACO participants, a 1 percentage-point increase in incentives was associated with a 0.7 percentage-point increase in the probability of being "very well" prepared to utilize cost and quality data (SE, 0.1, p < .001). Physician organizations' prior experience and success with performance incentives were related to participation in Medicare ACO arrangements and the Meaningful Use criteria, but not to participation in Physician Compare. We conclude that Medicare must complement financial incentives with additional efforts to address the needs of practices with less experience with such incentives to promote value-based payment on a broader scale. © Health Research and Educational Trust.

  16. Reliability, reference values and predictor variables of the ulnar sensory nerve in disease free adults.

    PubMed

    Ruediger, T M; Allison, S C; Moore, J M; Wainner, R S

    2014-09-01

    The purposes of this descriptive and exploratory study were to examine electrophysiological measures of ulnar sensory nerve function in disease free adults to determine reliability, determine reference values computed with appropriate statistical methods, and examine predictive ability of anthropometric variables. Antidromic sensory nerve conduction studies of the ulnar nerve using surface electrodes were performed on 100 volunteers. Reference values were computed from optimally transformed data. Reliability was computed from 30 subjects. Multiple linear regression models were constructed from four predictor variables. Reliability was greater than 0.85 for all paired measures. Responses were elicited in all subjects; reference values for sensory nerve action potential (SNAP) amplitude from above elbow stimulation are 3.3 μV and decrement across-elbow less than 46%. No single predictor variable accounted for more than 15% of the variance in the response. Electrophysiologic measures of the ulnar sensory nerve are reliable. Absent SNAP responses are inconsistent with disease free individuals. Reference values recommended in this report are based on appropriate transformations of non-normally distributed data. No strong statistical model of prediction could be derived from the limited set of predictor variables. Reliability analyses combined with relatively low level of measurement error suggest that ulnar sensory reference values may be used with confidence. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  17. Automatic determination of trunk diameter, crown base and height of scots pine (Pinus Sylvestris L.) Based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish Title: Automatyczne okreslanie srednicy pnia, podstawy korony oraz wysokosci sosny zwyczajnej (Pinus Silvestris L.) Na podstawie analiz chmur punktow 3D pochodzacych z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    Rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, stem form) and timber volume, is slowly becoming practice. In addition to measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds to extract selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed GNOM algorithms, tree trunks were located on the circular research plot and measured: the DBH (at 1.3 m), further trunk diameters at different heights, the base of the tree crown, the volume of the tree trunk (by the sectional measurement method), and the tree crown. Research was performed in the Niepolomice Forest in a pure pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m containing 16 pine trees (14 of which were later cut down). The stand was two-storied and even-aged (147 years old) and devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically with the GNOM algorithm, with an accuracy of +2.1% compared to the reference measurement made with a DBH measurement device.
The mean absolute measurement error in the point cloud, using the semi-automatic "PIXEL" (point-to-point) and "PIPE" (cylinder-fitting) methods in FARO Scene 5.x, was 3.5% and 5.0%, respectively. The reference height was taken as the tape measurement performed on the felled trees. The average error of automatic tree height determination by the GNOM algorithm based on the TLS point clouds amounted to 6.3%, slightly higher than with the manual method of measurement on profiles in TerraScan (Terrasolid; error of 5.6%). The relatively high error value may be related mainly to the small number of TLS points in the upper parts of the crowns. The crown base height measurement showed an error of +9.5%; the reference in this case was the tape measurement performed on the trunks of the felled pines. Processing the point clouds with the GNOM algorithms for the 16 analyzed trees took no longer than 10 min (37 sec./tree). The paper mainly demonstrated the innovation of TLS measurement and its high precision in acquiring biometric data in forestry, as well as the continuing need to increase the degree of automation in processing 3D point clouds from terrestrial laser scanning.
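    A core step in automatic DBH measurement is fitting a circle to the trunk points in a horizontal slice at 1.3 m. The sketch below uses the algebraic (Kåsa) circle fit, a generic approach and not necessarily what GNOM implements; the points are assumed to be already height-filtered to the slice:

```python
def dbh_from_slice(points):
    """Estimate trunk diameter from a horizontal slice of TLS points by an
    algebraic (Kasa) circle fit: solve the normal equations of
    x^2 + y^2 + D*x + E*y + F = 0 for D, E, F."""
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points); n = len(points)
    sxz = sum(p[0] * (p[0] ** 2 + p[1] ** 2) for p in points)
    syz = sum(p[1] * (p[0] ** 2 + p[1] ** 2) for p in points)
    # Normal equations A [D, E, F]^T = b of the algebraic least squares.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [-sxz, -syz, -(sxx + syy)]
    # Tiny Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]; b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    D, E, F = x
    cx, cy = -D / 2.0, -E / 2.0             # fitted circle centre
    r = (cx * cx + cy * cy - F) ** 0.5      # fitted radius
    return 2.0 * r
```

Three exact points on a circle of radius 5 centred at the origin recover a diameter of 10; on real trunk slices a robust variant (e.g. with outlier rejection) would be preferred.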

  18. Haemoglobin variants may cause significant differences in haemoglobin A1c as measured by high-performance liquid chromatography and enzymatic methods in diabetic patients: a cross-sectional study.

    PubMed

    Otabe, Shuichi; Nakayama, Hitomi; Ohki, Tsuyoshi; Soejima, Eri; Tajiri, Yuji; Yamada, Kentaro

    2017-07-01

    Background We aimed to determine whether the discrepancy between haemoglobin A1c values determined by high-performance liquid chromatography and by enzymatic haemoglobin A1c measurement in diabetic patients was clinically relevant. Methods We randomly recruited 1421 outpatients undergoing diabetic treatment and follow-up who underwent at least three haemoglobin A1c measurements between April 2014 and March 2015 at our clinic. In 6369 samples, haemoglobin A1c was simultaneously measured by HA-8160 (high-performance liquid chromatography) and MetaboLead (enzymatic assay), and the values were compared. Results Haemoglobin A1c measurements by high-performance liquid chromatography and enzymatic assay were strongly correlated (correlation coefficient: 0.9828; linear approximation y = 0.9986x - 0.2507). Mean haemoglobin A1c (6.8 ± 1.0%) measured by high-performance liquid chromatography was significantly higher than that measured by enzymatic assay (6.5 ± 1.0%, P < 0.0001). During sample processing, four (0.3%) subjects presented consistently lower haemoglobin A1c values (<0.7%) by high-performance liquid chromatography than those from the enzymatic assay. Of these, three had Hb Toranomon [β112 (G14) Cys→Trp]. The fourth had Hb Ube-2 [α68 (E17) Asn→Asp]. One other subject presented consistently higher haemoglobin A1c values (>1%) by high-performance liquid chromatography than those from the enzymatic assay and was diagnosed with a -77 (T > C) mutation in the δ-globin gene. These unrelated asymptomatic subjects had normal erythrocyte profiles, without anaemia. Conclusions We showed that haemoglobin A1c values measured by high-performance liquid chromatography were significantly higher than those measured by enzymatic assay in diabetic subjects. When a sizeable deviation (>0.7%) between glycaemic control status and haemoglobin A1c is apparent, clinicians should check the methods used to measure haemoglobin A1c and consider the possible presence of a haemoglobin variant.

  19. Preparation and Characterization of Biomass-Derived Advanced Carbon Materials for Lithium-Ion Battery Applications

    NASA Astrophysics Data System (ADS)

    Hardiansyah, Andri; Chaldun, Elsy Rahimi; Nuryadin, Bebeh Wahid; Fikriyyah, Anti Khoerul; Subhan, Achmad; Ghozali, Muhammad; Purwasasmita, Bambang Sunendar

    2018-04-01

    In this study, carbon-based advanced materials for lithium-ion battery applications were prepared from soybean waste-based biomass through a straightforward heat treatment followed by chemical modification. Various types of carbon-based advanced materials were developed, and the physicochemical characteristics and electrochemical performance of the resultant materials were characterized systematically. Scanning electron microscopy revealed that the activated carbon and graphene exhibit wrinkled structures and a porous morphology. Electrochemical impedance spectroscopy (EIS) showed that both the activated carbon and the graphene-based material exhibited good conductivity; for instance, the graphene-based material had an equivalent series resistance of 25.9 Ω as measured by EIS. The graphene-based material also exhibited good reversibility and cyclic performance. It is anticipated that the utilization of soybean waste-based biomass, which conforms to the principles of green materials, could revolutionize the development of advanced materials for high-performance energy storage applications, especially lithium-ion batteries.

  1. To t-Test or Not to t-Test? A p-Values-Based Point of View in the Receiver Operating Characteristic Curve Framework.

    PubMed

    Vexler, Albert; Yu, Jihnhee

    2018-04-13

    A common statistical doctrine, supported by many introductory courses and textbooks, is that t-test type procedures based on normally distributed data points provide a standard in decision-making. To motivate scholars to examine this convention, we introduce a simple approach based on graphical tools of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. In this context, we propose employing a p-value-based method that takes the stochastic nature of p-values into account. Following the modern statistical literature, we address the expected p-value (EPV) as a measure of the performance of decision-making rules. We extend the EPV concept to the ROC curve technique, which provides expressive evaluations and visualizations of a wide spectrum of properties of testing mechanisms. We show that the conventional power characterization of tests is a partial aspect of the presented EPV/ROC technique. We hope this exposition convinces researchers of the usefulness of the EPV/ROC approach for depicting different characteristics of decision-making procedures, in light of the growing interest in correct p-value-based applications.
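    The EPV is simply the mean p-value of a test under a given alternative: the smaller it is, the better the test separates the hypotheses. A minimal Monte Carlo sketch follows; it uses a one-sided z-test against a shifted-normal alternative as a stdlib-only stand-in for the article's t-test setting.

```python
import random
from statistics import NormalDist, mean

def expected_p_value(effect, n, trials=2000, seed=1):
    """Monte Carlo estimate of the expected p-value (EPV) of a one-sided
    z-test of H0: mu = 0 against a N(effect, 1) alternative, based on
    `trials` simulated samples of size n."""
    rng = random.Random(seed)
    phi = NormalDist()
    pvals = []
    for _ in range(trials):
        xbar = mean(rng.gauss(effect, 1.0) for _ in range(n))
        z = xbar * n ** 0.5               # test statistic under unit variance
        pvals.append(1.0 - phi.cdf(z))    # one-sided p-value
    return mean(pvals)

# Under H0 the p-value is uniform on [0, 1], so EPV is about 0.5;
# under the alternative the EPV shrinks towards 0.
print(expected_p_value(0.0, 30), expected_p_value(0.5, 30))
```

    Plotting EPV against the effect size (or comparing EPVs of competing tests at a fixed alternative) gives the kind of performance picture the ROC-based treatment formalizes.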

  2. Macular pigment optical density measured by heterochromatic modulation photometry.

    PubMed

    Huchzermeyer, Cord; Schlomberg, Juliane; Welge-Lüssen, Ulrich; Berendschot, Tos T J M; Pokorny, Joel; Kremers, Jan

    2014-01-01

    To psychophysically determine macular pigment optical density (MPOD) employing the heterochromatic modulation photometry (HMP) paradigm by estimating 460 nm absorption at central and peripheral retinal locations. For the HMP measurements, two lights (B: 460 nm and R: 660 nm) were presented in a test field and were modulated in counterphase at medium or high frequencies. The contrasts of the two lights were varied in tandem to determine flicker detection thresholds. Detection thresholds were measured for different R:B modulation ratios. The modulation ratio with minimal sensitivity (maximal threshold) is the point of equiluminance. Measurements were performed in 25 normal subjects (11 male, 14 female; age: 30 ± 11 years, mean ± sd) using an eight-channel LED stimulator with Maxwellian view optics. The results were compared with those from two published techniques - one based on heterochromatic flicker photometry (Macular Densitometer) and the other on fundus reflectometry (MPR). We were able to estimate MPOD with HMP using a modified theoretical model that was fitted to the HMP data. The resultant MPOD_HMP values correlated significantly with the MPOD_MPR values and with the MPOD_HFP values obtained at 0.25° and 0.5° retinal eccentricity. HMP is a flicker-based method with measurements taken at a constant mean chromaticity and luminance. The data can be well fit by a model that allows all data points to contribute to the photometric equality estimate. Therefore, we think that HMP may be a useful method for MPOD measurements, in basic and clinical vision experiments.

  3. Multi-GNSS signal-in-space range error assessment - Methodology and results

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
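    The per-epoch SIS range error is conventionally formed from broadcast-minus-precise orbit and clock differences, with a radial weight and an along/cross-track weight that are constellation-specific. The sketch below uses illustrative GPS-like MEO weight values, not the exact factors derived in the study; epoch values are then aggregated into an RMS statistic.

```python
from math import sqrt

def sisre(dr, da, dc, dclk, w_r=0.98, w_ac=0.14):
    """Global-average signal-in-space range error (m) for one satellite and
    epoch. dr, da, dc are the radial, along-track, and cross-track orbit
    errors and dclk the clock error, all in metres (broadcast minus
    precise). w_r and w_ac are constellation-specific weight factors;
    the defaults are illustrative values for a GPS-like MEO orbit."""
    return sqrt((w_r * dr - dclk) ** 2 + (w_ac ** 2) * (da ** 2 + dc ** 2))

def rms(values):
    """Root-mean-square of a sequence of per-epoch SISRE values."""
    return sqrt(sum(v * v for v in values) / len(values))

# Epoch-wise errors (m) aggregated into an RMS SISRE statistic.
samples = [sisre(0.3, 1.2, 0.8, 0.15), sisre(-0.2, 0.9, -0.6, 0.05)]
print(round(rms(samples), 3))
```

    The radial term and the clock term enter with opposite signs because a radial orbit error partially cancels against a clock error of the same sign along the line of sight.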

  4. Error-compensation model for simultaneous measurement of five degrees of freedom motion errors of a rotary axis

    NASA Astrophysics Data System (ADS)

    Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin

    2018-07-01

    This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error ε_z and the tilt motion error around the Y axis, ε_y, are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δ_y and δ_z, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis, ε_x, is 3.8″.
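    In homogeneous-coordinate error modelling, each small angular and translational error motion becomes a 4x4 transform, and the combined effect is their matrix product applied to a nominal point. The following is a generic first-order sketch of that machinery with illustrative numbers, not the paper's specific compensation model.

```python
def small_motion(rx, ry, rz, tx, ty, tz):
    """4x4 homogeneous transform for small angular errors rx, ry, rz (rad)
    and translational errors tx, ty, tz, using the first-order small-angle
    approximation customary in multi-axis error modelling."""
    return [[1.0, -rz,  ry, tx],
            [ rz, 1.0, -rx, ty],
            [-ry,  rx, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    """Product of two 4x4 matrices (error motions compose by multiplication)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    """Apply a homogeneous transform to a 3D point."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(t[i][k] * v[k] for k in range(4)) for i in range(3))

# Compose an angular positioning error about Z with a radial error motion,
# then see where a nominal point on the table ends up (units arbitrary).
err = matmul(small_motion(0, 0, 1e-4, 0, 0, 0),    # ~20.6 arcsec about Z
             small_motion(0, 0, 0, 1.3e-3, 0, 0))  # radial offset
print(apply(err, (100.0, 0.0, 0.0)))
```

    Inverting this composed transform is what a compensation table effectively does: it maps measured error parameters back into corrections at the tool point.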

  5. Predicting drug penetration across the blood-brain barrier: comparison of micellar liquid chromatography and immobilized artificial membrane liquid chromatography.

    PubMed

    De Vrieze, Mike; Lynen, Frédéric; Chen, Kai; Szucs, Roman; Sandra, Pat

    2013-07-01

    Several in vitro methods have been tested for their ability to predict drug penetration across the blood-brain barrier (BBB) into the central nervous system (CNS). In this article, the performance of a variety of micellar liquid chromatographic (MLC) methods and immobilized artificial membrane (IAM) liquid chromatographic approaches was compared for a set of 45 solutes. MLC measurements were performed on a C18 column with sodium dodecyl sulfate (SDS), polyoxyethylene (23) lauryl ether (Brij35), or sodium deoxycholate (SDC) as surfactant in the micellar mobile phase. IAM liquid chromatography measurements were performed with Dulbecco's phosphate-buffered saline (DPBS) and methanol as organic modifier in the mobile phase. The corresponding retention and computed descriptor data for each solute were used to construct models predicting transport across the blood-brain barrier (log BB). All data were correlated with experimental log BB values and the relative performance of the models was studied. SDS-based models proved most suitable for the prediction of log BB values, followed closely by a simplified IAM method, for which extrapolation of retention data to 0% modifier in the mobile phase proved unnecessary.

  6. On piecewise interpolation techniques for estimating solar radiation missing values in Kedah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu

    2014-12-04

    This paper discusses the use of a piecewise interpolation method, based on cubic Ball and Bézier curve representations, to estimate missing solar radiation values in Kedah. An hourly solar radiation dataset was collected at the Alor Setar Meteorology Station and obtained from the Malaysian Meteorological Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the start and end of each interval. We compare the performance of the proposed method with existing methods using the Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD) on simulated missing-value datasets. The results show that our method outperforms the previous methods.
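    A cubic segment that interpolates two endpoint values with prescribed first derivatives can be written directly in Bézier form via the standard Hermite-to-Bézier conversion of the control points. The sketch below shows the Bézier variant with illustrative numbers; the paper's cubic Ball representation uses a different basis but the same endpoint/derivative data.

```python
def cubic_bezier_segment(t, y0, y1, d0, d1, h=1.0):
    """Value at parameter t in [0, 1] of the cubic Bezier curve on an
    interval of length h that interpolates y0 and y1 with prescribed end
    derivatives d0 and d1. Control points follow the standard
    Hermite-to-Bezier conversion: p1 = y0 + d0*h/3, p2 = y1 - d1*h/3."""
    p0, p3 = y0, y1
    p1 = y0 + d0 * h / 3.0
    p2 = y1 - d1 * h / 3.0
    s = 1.0 - t
    return s**3 * p0 + 3*s*s*t * p1 + 3*s*t*t * p2 + t**3 * p3

# Estimate a missing half-hour value between two hourly solar-radiation
# readings (illustrative numbers, W/m^2), with slopes taken from neighbours.
print(cubic_bezier_segment(0.5, 420.0, 510.0, 80.0, 60.0))
```

    Prescribing the derivatives at interval ends is what makes the piecewise curve C1-continuous across the hourly measurement grid.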

  7. Absolute measurement of LDR brachytherapy source emitted power: Instrument design and initial measurements.

    PubMed

    Malin, Martha J; Palmer, Benjamin R; DeWerd, Larry A

    2016-02-01

    Energy-based source strength metrics may find use with model-based dose calculation algorithms, but no instruments exist that can measure the energy emitted from low-dose rate (LDR) sources. This work developed a calorimetric technique for measuring the power emitted from encapsulated low-dose rate, photon-emitting brachytherapy sources. This quantity is called emitted power (EP). The measurement methodology, instrument design and performance, and EP measurements made with the calorimeter are presented in this work. A calorimeter operating with a liquid helium thermal sink was developed to measure EP from LDR brachytherapy sources. The calorimeter employed an electrical substitution technique to determine the power emitted from the source. The calorimeter's performance and thermal system were characterized. EP measurements were made using four (125)I sources with air-kerma strengths ranging from 2.3 to 5.6 U and corresponding EPs of 0.39-0.79 μW, respectively. Three Best Medical 2301 sources and one Oncura 6711 source were measured. EP was also computed by converting measured air-kerma strengths to EPs through Monte Carlo-derived conversion factors. The measured EP and derived EPs were compared to determine the accuracy of the calorimeter measurement technique. The calorimeter had a noise floor of 1-3 nW and a repeatability of 30-60 nW. The calorimeter was stable to within 5 nW over a 12 h measurement window. All measured values agreed with derived EPs to within 10%, with three of the four sources agreeing to within 4%. Calorimeter measurements had uncertainties ranging from 2.6% to 4.5% at the k = 1 level. The values of the derived EPs had uncertainties ranging from 2.9% to 3.6% at the k = 1 level. A calorimeter capable of measuring the EP from LDR sources has been developed and validated for (125)I sources with EPs between 0.43 and 0.79 μW.

  8. Autonomic dysfunction assessed by EZSCAN and subclinical atherosclerosis.

    PubMed

    Sun, Jichao; Zhang, Yinfei; Xu, Baihui; Lv, Xiaofei; Ding, Lin; Chen, Ying; Sun, Wanwan; Lu, Jieli; Xu, Min; Bi, Yufang; Ning, Guang

    2014-09-01

    The present study aimed to explore the association between autonomic dysfunction and measurements of atherosclerosis in a middle-aged and elderly Chinese population. A population-based cross-sectional study was performed in Shanghai, China, from March to August 2010, with 5076 participants included in the analysis. Autonomic function was assessed by a novel EZSCAN test based on sudomotor function analysis. Carotid intima-media thickness (CIMT) was measured using B-mode ultrasonography, and brachial-ankle pulse wave velocity (ba-PWV) was measured using an automatic device. Participants were divided into three groups based on EZSCAN values: Group 1, EZSCAN 0-24; Group 2, EZSCAN 25-49; and Group 3, EZSCAN 50-100, denoting no, moderate, and high risk of autonomic dysfunction, respectively. The prevalence of elevated CIMT and ba-PWV increased markedly with increasing EZSCAN values (elevated CIMT: 7.4%, 17.5%, and 29.7%; elevated ba-PWV: 3.2%, 19.7%, and 36.5% in Groups 1, 2, and 3, respectively; both P for trend < 0.0001). Logistic regressions revealed that EZSCAN values ≥50 were associated with a non-significantly higher risk of elevated CIMT (odds ratio [OR] = 1.43; 95% confidence interval [CI] 0.98-2.07) and a significantly higher risk of elevated ba-PWV (OR = 2.16; 95% CI 1.25-3.71) compared with EZSCAN values <25, after controlling for conventional risk factors. A higher EZSCAN value (≥50), an index of high autonomic dysfunction risk, was associated with an increased risk of elevated ba-PWV and CIMT. These associations were partially explained by traditional atherosclerotic risk factors. © 2014 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.

  9. Improved prediction of residue flexibility by embedding optimized amino acid grouping into RSA-based linear models.

    PubMed

    Zhang, Hua; Kurgan, Lukasz

    2014-12-01

    Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, which is expressed by B-factors, from protein chains; it uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. The method has an easy-to-comprehend, explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet that simplifies the input space and improves the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset improve from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
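    The two-stage structure described above can be sketched as two chained linear maps: stage 1 turns a sliding window of RSA values into a preliminary flexibility score, and stage 2 refines that score together with reduced-alphabet AA pair features. All weights and feature values below are placeholders for illustration, not the published model.

```python
def dot(w, x):
    """Inner product of a weight vector and a feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def predict_bfactor(rsa_window, aa_pair_feats, stage1, stage2):
    """Two-stage linear B-factor predictor. Stage 1: linear model over a
    local window of RSA values. Stage 2: linear model over the stage-1
    output concatenated with reduced AA pair-based features. Each stage
    is a (weights, bias) pair."""
    (w1, b1), (w2, b2) = stage1, stage2
    prelim = dot(w1, rsa_window) + b1
    return dot(w2, [prelim] + aa_pair_feats) + b2

stage1 = ([0.1, 0.3, 0.1], 0.05)   # hypothetical 3-residue RSA window weights
stage2 = ([0.9, 0.2], -0.01)       # hypothetical weights: prelim + 1 AA feature
print(predict_bfactor([0.2, 0.6, 0.1], [1.0], stage1, stage2))
```

    Because both stages are explicit linear forms, every weight can be read off directly, which is the interpretability advantage the abstract emphasizes.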

  10. Assessing value-based health care delivery for haemodialysis.

    PubMed

    Parra, Eduardo; Arenas, María Dolores; Alonso, Manuel; Martínez, María Fernanda; Gamen, Ángel; Aguarón, Juan; Escobar, María Teresa; Moreno-Jiménez, José María; Alvarez-Ude, Fernando

    2017-06-01

    Disparities in haemodialysis outcomes among centres have been well documented. Moreover, previous attempts to assess haemodialysis results have been based on non-comprehensive methodologies. This study aimed to develop a comprehensive methodology for assessing haemodialysis centres, based on the value of health care. The value of health care is defined as the patient benefit from a specific medical intervention per monetary unit invested (Value = Patient Benefit/Cost). This study assessed the value of health care and ranked different haemodialysis centres. A nephrology quality management group identified the criteria for the assessment. An expert group composed of stakeholders (patients, clinicians and managers) agreed on the weighting of each variable, considering values and preferences. Multi-criteria methodology was used to analyse the data. Four criteria and their weights were identified: evidence-based clinical performance measures = 43 points; yearly mortality = 27 points; patient satisfaction = 13 points; and health-related quality of life = 17 points (100-point scale). Evidence-based clinical performance measures included five sub-criteria, with respective weights: dialysis adequacy; haemoglobin concentration; mineral and bone disorders; type of vascular access; and hospitalization rate. The patient benefit was determined from co-morbidity-adjusted results and the corresponding weights. The cost of each centre was calculated as the average amount expended per patient per year. The study was conducted in five centres (1-5). After adjusting for co-morbidity, the value of health care was calculated and the centres were ranked. A multi-way sensitivity analysis that considered different weights (10-60% changes) and costs (changes of 10% in direct and 30% in allocated costs) showed that the methodology was robust. The rankings 4-5-3-2-1 and 4-3-5-2-1 were observed in 62.21% and 21.55% of simulations, respectively, when weights were varied by 60%.
Value assessments may integrate divergent stakeholder perceptions, create a context for improvement and aid in policy-making decisions. © 2015 John Wiley & Sons, Ltd.
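    The core calculation above (a weighted benefit score divided by cost, then a ranking of centres) can be sketched directly with the study's published criterion weights. The per-criterion scores, costs, and normalisation below are invented for illustration; the study's comorbidity adjustment and multi-criteria machinery are simplified away.

```python
WEIGHTS = {                      # expert-panel weights (100-point scale)
    "clinical_performance": 43,  # evidence-based clinical performance measures
    "mortality": 27,             # yearly mortality
    "satisfaction": 13,          # patient satisfaction
    "hrqol": 17,                 # health-related quality of life
}

def value_score(results, cost):
    """Value = patient benefit per monetary unit. `results` maps each
    criterion to a score normalised to [0, 1] (higher is better, assumed
    already comorbidity-adjusted); `cost` is the average spend per
    patient-year."""
    benefit = sum(WEIGHTS[k] * results[k] for k in WEIGHTS)
    return benefit / cost

# Two hypothetical centres: (criterion scores, cost per patient-year).
centres = {
    "A": ({"clinical_performance": 0.8, "mortality": 0.7,
           "satisfaction": 0.9, "hrqol": 0.6}, 46000.0),
    "B": ({"clinical_performance": 0.6, "mortality": 0.8,
           "satisfaction": 0.7, "hrqol": 0.7}, 52000.0),
}
ranking = sorted(centres, key=lambda c: value_score(*centres[c]), reverse=True)
print(ranking)
```

    Varying the weights over a range, as the study's sensitivity analysis does, then checking how often the ranking changes is a natural extension of this loop.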

  11. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type neural network and the multiple imputation strategy adopted by Markov Chain Monte Carlo based on expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account when evaluating imputation performance. Based on detailed graphical and quantitative analyses, the computationally intensive methods, particularly the EM-MCMC method, appear favourable for the imputation of meteorological time series across the different missingness periods, considering both measures and both series studied, despite their computational cost. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.

  12. Hypoglycemia alarm enhancement using data fusion.

    PubMed

    Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W

    2010-01-01

    The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model developed used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. 
Positive predictive values on the test set improved from 61 to 66% with negative predictive values improving from 96 to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.
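    The probabilistic CGMS model described above has three ingredients: a sensor response delay, per-sample measurement noise, and a per-trace calibration (vertical) offset. A minimal sketch using the article's stated parameter values follows; the delay is handled in whole samples here for simplicity, and the BG profile is invented.

```python
import random

def simulate_cgms(bg, delay=1, noise_sd=4.5, cal_sd=19.8, seed=0):
    """Simulate a CGMS trace (mg/dl) from a reference BG profile using a
    fixed sensor delay (in samples), Gaussian per-sample measurement noise
    (SD 4.5 mg/dl), and one Gaussian calibration offset per trace
    (SD 19.8 mg/dl), per the model parameters quoted in the article."""
    rng = random.Random(seed)
    offset = rng.gauss(0.0, cal_sd)             # vertical shift for this trace
    out = []
    for i in range(len(bg)):
        source = bg[max(0, i - delay)]          # delayed true BG value
        out.append(source + offset + rng.gauss(0.0, noise_sd))
    return out

# Hypothetical overnight profile drifting towards hypoglycaemia (mg/dl).
profile = [110, 104, 96, 88, 79, 72, 66, 61]
trace = simulate_cgms(profile)
print([round(v, 1) for v in trace])
```

    Running many such simulated traces through a threshold alarm, with and without a second fused signal, is the kind of in silico comparison the article performs.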

  13. Monitoring task loading with multivariate EEG measures during complex forms of human-computer interaction

    NASA Technical Reports Server (NTRS)

    Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.

    2001-01-01

    Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.

  14. An investigation of the efficacy of acceptance-based behavioral therapy for academic procrastination.

    PubMed

    Glick, Debra M; Orsillo, Susan M

    2015-04-01

    Procrastination among college students is both prevalent and troublesome, harming both academic performance and physical health. Unfortunately, no "gold standard" intervention exists. Research suggests that psychological inflexibility may drive procrastination. Accordingly, interventions using acceptance and mindfulness methods to increase psychological flexibility may decrease procrastination. This study compared time management and acceptance-based behavioral interventions. College students' predictions of how much assigned reading they should complete were compared to what they did complete. Procrastination, anxiety, psychological flexibility, and academic values were also measured. Although a trend suggested that time management intervention participants completed more reading, no group differences in procrastination were revealed. The acceptance-based behavioral intervention was most effective for participants who highly valued academics. Clinical implications and future research are discussed. (c) 2015 APA, all rights reserved.

  15. Value-Based Reimbursement: Impact of Curtailing Physician Autonomy in Medical Decision Making.

    PubMed

    Gupta, Dipti; Karst, Ingolf; Mendelson, Ellen B

    2016-02-01

    In this article, we define value in the context of reimbursement and explore the effect of shifting reimbursement paradigms on the decision-making autonomy of a women's imaging radiologist. The current metrics used for value-based reimbursement such as report turnaround time are surrogate measures that do not measure value directly. The true measure of a physician's value in medicine is accomplishment of better health outcomes, which, in breast imaging, are best achieved with a physician-patient relationship. Complying with evidence-based medicine, which includes data-driven best clinical practices, a physician's clinical expertise, and the patient's values, will improve our science and preserve the art of medicine.

  16. Evaluation of calibration efficacy under different levels of uncertainty

    DOE PAGES

    Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...

    2014-06-10

    This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration for enhancing the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.

  17. Prediction of cold and heat patterns using anthropometric measures based on machine learning.

    PubMed

    Lee, Bum Ju; Lee, Jae Chul; Nam, Jiho; Kim, Jong Yeol

    2018-01-01

    To examine the association of body shape with cold and heat patterns, to determine which anthropometric measure is the best indicator for discriminating between the two patterns, and to investigate whether using a combination of measures can improve the predictive power to diagnose these patterns. Based on a total of 4,859 subjects (3,000 women and 1,859 men), statistical analyses using binary logistic regression were performed to assess the significance of the difference and the predictive power of each anthropometric measure, and binary logistic regression and Naive Bayes with the variable selection technique were used to assess the improvement in the predictive power of the patterns using the combined measures. In women, the strongest indicators for determining the cold and heat patterns among anthropometric measures were body mass index (BMI) and rib circumference; in men, the best indicator was BMI. In experiments using a combination of measures, the values of the area under the receiver operating characteristic curve in women were 0.776 by Naive Bayes and 0.772 by logistic regression, and the values in men were 0.788 by Naive Bayes and 0.779 by logistic regression. Individuals with a higher BMI have a tendency toward a heat pattern in both women and men. The use of a combination of anthropometric measures can slightly improve the diagnostic accuracy. Our findings can provide fundamental information for the diagnosis of cold and heat patterns based on body shape for personalized medicine.
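
    As a rough sketch of how a Naive Bayes classifier combines two anthropometric measures, consider the following Gaussian Naive Bayes toy. The training values, labels, and feature pairing below are invented for illustration and are not the study's data:

```python
import math
from collections import defaultdict

# (BMI, rib circumference in cm) -> pattern label; values invented.
train = [
    ((18.5, 70.0), "cold"), ((19.2, 72.0), "cold"), ((20.1, 74.0), "cold"),
    ((26.0, 88.0), "heat"), ((27.5, 90.0), "heat"), ((25.2, 86.0), "heat"),
]

def fit(samples):
    # Estimate per-class priors and per-feature Gaussian parameters.
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append(x)
    stats = {}
    for label, rows in by_label.items():
        cols = list(zip(*rows))
        means = [sum(c) / len(c) for c in cols]
        stds = [max(1e-6, math.sqrt(sum((v - m) ** 2 for v in c) / len(c)))
                for c, m in zip(cols, means)]
        stats[label] = (len(rows) / len(samples), means, stds)
    return stats

def predict(stats, x):
    # Pick the class with the highest log-posterior under feature independence.
    def log_post(label):
        prior, means, stds = stats[label]
        return math.log(prior) + sum(
            -math.log(s) - (v - m) ** 2 / (2 * s * s)
            for v, m, s in zip(x, means, stds))
    return max(stats, key=log_post)

model = fit(train)
print(predict(model, (27.0, 89.0)))   # high BMI, high circumference -> heat
print(predict(model, (19.0, 71.0)))   # low BMI, low circumference -> cold
```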

  18. Normative values of cognitive and physical function in older adults: findings from the Irish Longitudinal Study on Ageing.

    PubMed

    Kenny, Rose Anne; Coen, Robert F; Frewen, John; Donoghue, Orna A; Cronin, Hilary; Savva, George M

    2013-05-01

    To provide normative values of tests of cognitive and physical function based on a large sample representative of the population of Ireland aged 50 and older. Data were used from the first wave of The Irish Longitudinal Study on Ageing (TILDA), a prospective cohort study that includes a comprehensive health assessment. Health assessment was undertaken at one of two dedicated health assessment centers or in the study participant's home if travel was not practicable. Five thousand eight hundred ninety-seven members of a nationally representative sample of the community-living population of Ireland aged 50 and older. Those with severe cognitive impairment, dementia, or Parkinson's disease were excluded. Measurements included height and weight, normal walking speed, Timed Up-and-Go, handgrip strength, Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), Color Trails Test, and bone mineral density. Normative values were estimated using generalized additive models for location shape and scale (GAMLSS) and are presented as percentiles, means, and standard deviations. Generalized additive models for location shape and scale fit the observed data well for each measure, leading to reliable estimates of normative values. Performance on all tasks decreased with age. Educational attainment was a strong determinant of performance on all cognitive tests. Tests of walking speed were dependent on height. Distribution of body mass index did not change with age, owing to simultaneous declines in weight and height. Normative values were found for tests of many aspects of cognitive and physical function based on a representative sample of the general older Irish population. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.

  19. Student Hits in an Internet-Supported Course: How Can Instructors Use Them and What Do They Mean?

    ERIC Educational Resources Information Center

    Baugher, Dan; Varanelli, Andrew; Weisbord, Ellen

    2003-01-01

    The world of education is changing as Web-based technology and courseware are increasingly used for delivery of course material. In this environment, instructors may need new measures for determining student involvement, and ultimately student performance. This study examines whether hits to a Web site have any value for predicting student…

  20. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratios of the reflected longitudinal wave to the incident longitudinal wave and that of the refracted shear wave to the total transmitted wave into bone in calculating the absorbed power field and then to reconstruct the temperature distribution in muscle and bone regions based on a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors are investigated, as is the amount of noise superimposed on the temperature measurements, and the effects of the optimal sensor location on the performance of the inverse algorithm. Results show that noisy input data degrades the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates by employing an optimal value of the regularization parameter. Based on the analysis of singular-value decomposition, the optimal sensor position in a case utilizing only one temperature sensor can be determined to make the inverse algorithm converge to the true solution.
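
    Order-zero Tikhonov regularization replaces the unstable least-squares problem with a penalized one: minimize ||Ax - b||^2 + lambda*||x||^2, whose closed-form solution is x = (A^T A + lambda*I)^(-1) A^T b. The sketch below uses an invented, nearly collinear 3x2 system as a stand-in for the sensitivity matrix relating the unknown intensity ratios to noisy temperature readings:

```python
def tikhonov(A, b, lam):
    # Solve (A^T A + lam*I) x = A^T b in closed form for two unknowns.
    m = len(A)
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

# Nearly collinear rows make the unregularized problem noise-sensitive.
A = [[1.0, 0.9], [1.0, 1.1], [0.9, 1.0]]
x_true = [0.3, 0.7]              # "true" intensity ratios (invented)
noise = [0.01, -0.01, 0.005]     # measurement noise (invented)
b = [sum(a * x for a, x in zip(row, x_true)) + e for row, e in zip(A, noise)]

print(tikhonov(A, b, 0.0))    # unregularized least-squares estimate
print(tikhonov(A, b, 0.05))   # regularized estimate, smaller norm
```

    Increasing the regularization parameter damps noise amplification at the cost of some bias, which is why an optimal intermediate value improves the accuracy of the estimates.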

  1. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    PubMed

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS, values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, confirming the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. These results show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.

  2. Direct measurement of mammographic X-ray spectra with a digital CdTe detection system.

    PubMed

    Abbene, Leonardo; Gerardi, Gaetano; Principato, Fabio; Del Sordo, Stefano; Raso, Giuseppe

    2012-01-01

    In this work we present a detection system, based on a CdTe detector and an innovative digital pulse processing (DPP) system, for high-rate X-ray spectroscopy in mammography (1-30 keV). The DPP system performs a height and shape analysis of the detector pulses, sampled and digitized by a 14-bit, 100 MHz ADC. We show the results of the characterization of the detection system both at low and high photon counting rates by using monoenergetic X-ray sources and a nonclinical X-ray tube. The detection system exhibits excellent performance up to 830 kcps with an energy resolution of 4.5% FWHM at 22.1 keV. Direct measurements of clinical molybdenum X-ray spectra were carried out by using a pinhole collimator and a custom alignment device. A comparison with the attenuation curves and the half value layer values, obtained from the measured and simulated spectra, from an ionization chamber and from a solid state dosimeter, also shows the accuracy of the measurements. These results make the proposed detection system a very attractive tool for both laboratory research, calibration of dosimeters and advanced quality controls in mammography.

  3. Proficiency testing of Hb A1c: a 4-year experience in Taiwan and the Asian Pacific region.

    PubMed

    Shiesh, Shu-Chu; Wiedmeyer, Hsiao-Mei; Kao, Jau-Tsuen; Vasikaran, Samuel D; Lopez, Joseph B

    2009-10-01

The correlation between hemoglobin A(1c) (Hb A(1c)) and risk for complications in diabetic patients heightens the need to measure Hb A(1c) with accuracy. We evaluated the current performance for measuring Hb A(1c) in the Asian and Pacific region by examining data submitted by laboratories participating in the Taiwan proficiency-testing program. Five fresh-pooled blood samples were sent to participating laboratories twice each year. The results were evaluated against target values assigned by the National Glycohemoglobin Standardization Program network laboratories; a passing criterion of +/-7% of the target value was used. Measurement uncertainty at Hb A(1c) concentrations of 7.0% and 8.0% was determined. A total of 276 laboratories from 11 countries took part in the Hb A(1c) survey. At the Hb A(1c) concentrations tested, method-specific interlaboratory imprecision (CVs) was 1.1%-13.9% in 2005, 1.3%-10.1% in 2006, 1.2%-8.2% in 2007, and 1.1%-6.1% in 2008. Differences between target values and median values from the commonly used methods ranged from -0.24% to 0.22% Hb A(1c) in 2008. In 2005, 83% of laboratories passed the survey, and in 2008, 93% passed. At 7.0% Hb A(1c), measurement uncertainty was on average 0.49% Hb A(1c). The use of accuracy-based proficiency testing with stringent quality criteria has improved the performance of Hb A(1c) testing in Asian and Pacific laboratories during the 4 years of assessment.

  4. Selecting Statistical Procedures for Quality Control Planning Based on Risk Management.

    PubMed

    Yago, Martín; Alcover, Silvia

    2016-07-01

    According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of its probability of rejecting an analytical run that contains critical size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)], has been proposed as an alternative QC performance measure because it is more related to the current introduction of risk management concepts for QC planning in the clinical laboratory. We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) with the capability of the analytical process that allow for QC planning based on the risk of harm to a patient due to the report of erroneous results. A QC procedure shows nearly the same Max E(NUF) value when used for controlling analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected by their high PEDC value are also characterized by a low value for Max E(NUF). The PEDC value can be used for estimating the probability of patient harm, allowing for the selection of appropriate QC procedures in QC planning based on risk management. © 2016 American Association for Clinical Chemistry.
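
    The error-detection probability underlying measures like PEDC can be sketched for a simple single-rule QC procedure. This is a generic power-function calculation, not the paper's statistical model: the 1_3s rule, the number of controls, and the error sizes below are assumptions for illustration.

```python
import math

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_reject(shift, n_controls=2, limit=3.0):
    # Probability that at least one of n control results falls outside
    # +/- limit SD when a systematic error of `shift` SD is present.
    p_single = phi(-limit - shift) + (1.0 - phi(limit - shift))
    return 1.0 - (1.0 - p_single) ** n_controls

print(round(p_reject(0.0), 4))   # false-rejection probability, no error -> 0.0054
print(round(p_reject(4.0), 4))   # detection probability for a 4 SD error -> 0.9748
```

    Charts relating such detection probabilities to process capability are what allow a QC procedure to be chosen for a target level of patient risk.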

  5. Prediction of wastewater treatment plants performance based on artificial fish school neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Ruicheng; Li, Chong

    2011-10-01

A reliable model of a wastewater treatment plant is essential as a tool for predicting its performance and as a basis for controlling the operation of the process, minimizing operating costs and helping to maintain environmental balance. To handle the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operating data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The model calculations show that the predicted values match the measured values well, demonstrating the model's effectiveness for simulation and prediction and its ability to help optimize operating status. The prediction model provides a simple and practical approach to the operation and management of wastewater treatment plants, and has good research and engineering value.

  6. High Performance Photodiode Based on p-Si/Copper Phthalocyanine Heterojunction.

    PubMed

    Zhong, Junkang; Peng, Yingquan; Zheng, Tingcai; Lv, Wenli; Ren, Qiang; Fobao, Huang; Ying, Wang; Chen, Zhen; Tang, Ying

    2016-06-01

Hybrid organic-inorganic (HOI) photodiodes combine the advantages of organic and inorganic materials, including compatibility with traditional Si-based semiconductor technology, low cost, high photosensitivity, and high reliability, showing considerable application value. Red-light-sensitive HOI photodiodes based on the p-Si/copper phthalocyanine (CuPc) heterojunction were fabricated and characterized. The effects of CuPc layer thickness on performance were investigated, and an optimal layer thickness of around 30 nm was determined. An analytical expression is derived to describe the measured thickness dependence of the saturation photocurrent. For the device with the optimal CuPc layer thickness, a photoresponsivity of 0.35 A/W and an external quantum efficiency of 70% were obtained at 9 V reverse bias under 655 nm illumination of 0.451 mW. Furthermore, optical-power-dependent performance was investigated.

  7. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

The paper describes a method of improving the performance of a filling line based on simulation. This study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by the maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated against line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations formed the basis of a financial analysis: NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate, and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
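
    The financial step can be sketched with a plain NPV/ROI calculation; the cash flows, discount rate, and investment below are invented, not the study's figures, and depreciation and tax effects are omitted for brevity:

```python
def npv(rate, cashflows):
    # Discount each year's net cash flow back to present value.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

investment = -100_000.0                      # year-0 outlay (invented)
yearly_net = [45_000.0, 45_000.0, 45_000.0]  # net gains, years 1-3 (invented)
flows = [investment] + yearly_net

value = npv(0.08, flows)                     # 8% discount rate assumed
roi = (sum(yearly_net) + investment) / -investment
print(round(value, 2), round(roi, 3))        # -> 15969.36 0.35
```

    A scenario is worth pursuing when its NPV is positive at the chosen discount rate; ROI gives the undiscounted return for comparison between scenarios.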

  8. Theoretical performance analysis of doped optical fibers based on pseudo parameters

    NASA Astrophysics Data System (ADS)

    Karimi, Maryam; Seraji, Faramarz E.

    2010-09-01

Characterization of doped optical fibers (DOFs) is an essential first stage in the design of DOF-based devices. This paper presents novel measurement techniques to determine DOF parameters using mono-beam propagation in a low-loss medium by generating pseudo parameters for the DOFs. The techniques can simultaneously characterize the absorption and emission cross-sections (ACS and ECS) and the dopant concentration of DOFs. In both proposed techniques, we assume pseudo parameters for the DOFs instead of their actual values and show that the choice of these pseudo parameter values for the design of DOF-based devices, such as erbium-doped fiber amplifiers (EDFAs), is appropriate, with negligible error compared with the actual parameter values. Utilization of pseudo ACS and ECS values in the design of EDFAs does not require measurement of the background loss coefficient (BLC) and simplifies the rate equation of the DOFs. It is shown that by using the pseudo parameter values obtained by the proposed techniques, the error in the gain of a designed EDFA with a BLC of about 1 dB/km is about 0.08 dB. The same holds for BLCs lower than 5 dB/m and higher than 12 dB/m. The proposed characterization techniques are procedurally simple and low-cost, which is advantageous in the manufacturing of DOFs.

  9. The Surprisingly Low Motivational Power of Future Rewards: Comparing Conventional Money-Based Measures of Discounting with Motivation-Based Measures

    ERIC Educational Resources Information Center

    Ebert, Jane E. J.

    2010-01-01

    Temporal discount rates are often poor predictors of behaviors that we expect will be motivated by the future. The current research suggests this may be because conventional discounting measures are poor measures of the motivational value of future rewards. In six studies, I develop motivation-based measures of the present value (PV) of future…

  10. Integrative missing value estimation for microarray data.

    PubMed

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking the reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
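
    The general idea of neighbor-based imputation, which LLS refines with a least-squares fit and iMISS extends with reference datasets, can be sketched as follows. The toy matrix and the simple averaging step are illustrative assumptions, not the iMISS algorithm itself:

```python
def impute(matrix, k=2):
    # Fill each missing entry (None) from the k most similar complete rows,
    # comparing rows only over the target row's observed columns.
    filled = [row[:] for row in matrix]
    complete = [row for row in matrix if None not in row]
    for row in filled:
        for j, v in enumerate(row):
            if v is None:
                obs = [i for i, x in enumerate(row) if x is not None]
                neighbours = sorted(
                    complete,
                    key=lambda c: sum((row[i] - c[i]) ** 2 for i in obs))[:k]
                row[j] = sum(c[j] for c in neighbours) / len(neighbours)
    return filled

data = [  # rows = genes, columns = samples; values invented
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 3.1, 4.1],
    [5.0, 4.0, 3.0, 2.0],
    [1.0, 2.0, None, 4.0],   # the entry to estimate
]
print(impute(data)[3])   # missing value filled from the two nearest genes
```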

  11. A measurement of the hadronic branching ratio of the W boson to charm and strange quarks using strange hadrons in the final state

    NASA Astrophysics Data System (ADS)

    Dallison, Stephen

A measurement has been made of the partial branching ratio, Rcs, of the W boson into a pair of jets originating from charmed (c) and strange (s) quarks. This was achieved by identifying final-state hadrons among the decay products. Events generated using Monte Carlo simulations were used to construct multiplicity distributions for events where the W decays to cs quarks and events where the W decays to non-cs quarks. This was done by counting individually the numbers of K+/-, Ks0 and Lambda candidates in each type of decay. These distributions were used as reference histograms and compared to multiplicity distributions for all hadronic events obtained using OPAL data taken from 1998 to 2000. The information derived from these distributions was used to extract a value of Rcs. Values of Rcs were measured separately for charged kaons (K+/-) and neutral hadrons (Ks0 + Lambda). The charged kaon analysis was performed twice, once using an artificial neural network and again using a standard cut-based method. The values from the charged kaon and neutral hadron analyses were combined and weighted according to their overall errors. The final value of Rcs was found to be 0.499 +/- 0.060, where the error represents a combination of the statistical and systematic uncertainties. The measured value of Rcs was used to determine a value for the CKM matrix element |Vcs|, which was found to be |Vcs| = 0.999 +/- 0.060.

  12. [Nursing-sensitive indicators: an opportunity for measuring the nursing contribution].

    PubMed

    Planas-Campmany, Carme; Icart-Isern, M Teresa

    2014-01-01

Measures directed at improving the management and funding of health services, which justify the measurement of performance and the purchase of services based on results, have a direct influence on nursing. In this context, concerns about the value and contribution of nursing have been expressed worldwide over recent decades, and efforts are being made to ensure that nurses contribute to the transformation of health systems. This requires identifying their contribution to the health system and, specifically, to health outcomes. In recent decades, there has been a growing demand for measures that allow nurses to demonstrate, and assume responsibility for, their contribution. The research, development, and application of nursing-sensitive indicators and outcomes provide an opportunity to measure professional contribution and performance in achieving set objectives, in order to improve population health. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  13. Influence of surface layer removal of shade guide tabs on the measured color by spectrophotometer and spectroradiometer.

    PubMed

    Kim, Jin-Cheol; Yu, Bin; Lee, Yong-Keun

    2008-12-01

To determine the changes in color parameters of Vitapan 3D-Master shade guide tabs measured by a spectrophotometer (SP) or a spectroradiometer (SR), and by the removal of the surface layer of the tabs, which was performed to create a flat measuring surface for the SP color measurement. The color of the shade tabs was measured before and after removing the surface layer of the tabs using the SP and SR. Correlations between the color parameters of the original (OR) and surface-layer-removed (RM) tabs, and between the SP and SR measurements, were determined (alpha=0.05). Based on the SP, the lightness, chroma, and CIE a* and b* values measured after surface layer removal were higher than those of the original tabs except in a few cases. Based on the SR, the chroma and CIE a* and b* values measured after surface layer removal were higher than those of the original tabs except in a few cases; however, in the case of lightness, the changes varied by shade designation. The type of instrument influenced the changes in color parameters based on a paired t-test (p<0.05). The color parameters of the OR and RM tabs showed correlations based on both SP and SR measurements (r=0.952-0.997, p<0.01); however, the color difference between the SP-RM and SR-OR tabs was in the range of 18.1-27.0 Delta E*ab units (mean: 23.3+/-2.2). When the color of tooth-shaped objects is measured with a spectrophotometer or a spectroradiometer, measurement protocols should be specified, because the color differences due to surface layer removal and instrument type were high.

  14. A genetic analysis of post-weaning feedlot performance and profitability in Bonsmara cattle.

    PubMed

    van der Westhuizen, R R; van der Westhuizen, J; Schoeman, S J

    2009-02-25

The aim of this study was to identify factors influencing profitability in a feedlot environment and to estimate genetic parameters for, and between, a feedlot profit function and productive traits measured in growth tests. The heritability estimate of 0.36 for feedlot profitability shows that this trait is genetically inherited and can be selected for. The genetic correlations between feedlot profitability and production and efficiency traits varied from negligible to high. The genetic correlation estimate of -0.92 between feed conversion ratio and feedlot profitability is largely due to the part-whole relationship between these two traits. Consequently, a multiple regression equation was developed to estimate a feed intake value for all performance-tested Bonsmara bulls that were group fed and whose individual feed intakes were unknown. These predicted feed intake values enabled the calculation of a post-weaning growth or feedlot profitability value for all tested bulls, even where individual feed intakes were unknown. Subsequently, a feedlot profitability value for each bull was calculated in a favorable, an average, and an unfavorable economic environment. The high Pearson and Spearman correlations between the estimated breeding values based on the average economic environment and those based on the other two environments suggest that the average economic environment could be used to calculate estimated breeding values for feedlot profitability. It is therefore not necessary to change the carcass, weaned calf, or feed price on a regular basis to allow for possible re-rankings based on estimated breeding values.

  15. Performance regression manager for large scale systems

    DOEpatents

    Faraj, Daniel A.

    2017-10-17

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
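
    The comparison step the claims describe can be sketched as follows; the "name value" file format, the tolerance, and the higher-is-worse convention are assumptions for illustration, not the patented format:

```python
def parse(text):
    # Predefined format assumed here: one "metric_name value" pair per line.
    return {name: float(value)
            for name, value in (line.split() for line in text.splitlines() if line)}

def compare(old_text, new_text, tolerance=0.05):
    # Flag metrics whose relative change exceeds the tolerance; for
    # simplicity this treats every metric as higher-is-worse (like latency).
    old, new = parse(old_text), parse(new_text)
    report = {}
    for metric in old.keys() & new.keys():
        change = (new[metric] - old[metric]) / old[metric]
        report[metric] = ("regressed" if change > tolerance else "ok", change)
    return report

run1 = "latency_ms 12.0\nthroughput_ops 900\n"   # first execution instance
run2 = "latency_ms 15.0\nthroughput_ops 910\n"   # second execution instance
print(compare(run1, run2))
```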

  16. A terrain-based site characterization map of California with implications for the contiguous United States

    USGS Publications Warehouse

    Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy

    2012-01-01

    We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.

  17. New hesitation-based distance and similarity measures on intuitionistic fuzzy sets and their applications

    NASA Astrophysics Data System (ADS)

    Kang, Yun; Wu, Shunxiang; Cao, Da; Weng, Wei

    2018-03-01

In this paper, we present new definitions of distance and similarity measures between intuitionistic fuzzy sets (IFSs) that incorporate the hesitation degree. First, we discuss the limitations of traditional distance and similarity measures, which are caused by neglecting the influence of the hesitation degree. Even though a vector-valued similarity measure was previously proposed, with two components indicating similarity and hesitation, it still does not perform well in practical applications because the hesitation component takes effect only when the similarity values are equal. To overcome these limitations, we propose new definitions of hesitation, distance, and similarity measures, and prove theorems showing that measures satisfying the proposed definitions exist. Meanwhile, we investigate the relationships among hesitation, distance, similarity, and entropy of IFSs to verify the consistency of our work with previous research. Finally, we analyse the advantages and disadvantages of the proposed similarity measure in detail, apply the proposed measures (dH and SH) to pattern recognition problems, and demonstrate that they outperform state-of-the-art distance and similarity measures.
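
    For orientation, a classical hesitation-aware distance on IFSs is the Szmidt-Kacprzyk normalized Hamming distance, which already includes the hesitation degree pi = 1 - mu - nu. The paper's dH and SH are new definitions, so the sketch below shows only the general shape of such a measure, on invented sets:

```python
def ifs_distance(a, b):
    # a, b: lists of (membership mu, non-membership nu) pairs, mu + nu <= 1.
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(a, b):
        pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2   # hesitation degrees
        total += abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2)
    return total / (2 * len(a))

A = [(0.5, 0.3), (0.7, 0.2)]   # invented IFS over a two-element universe
B = [(0.4, 0.4), (0.6, 0.2)]
print(ifs_distance(A, A))               # identical sets -> 0.0
print(round(ifs_distance(A, B), 3))     # -> 0.1
```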

  18. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08...

  19. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08...

  20. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance due to the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision-analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. These plots can then be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
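
    The Kolmogorov-Smirnov view mentioned above can be sketched directly: the KS distance is the maximum gap between the empirical CDFs of predicted risk among events and non-events. The risk scores below are invented for illustration:

```python
def ks_distance(events, nonevents):
    # Maximum absolute gap between the two empirical CDFs, checked at
    # every observed risk value (where the gap can change).
    thresholds = sorted(set(events) | set(nonevents))
    def ecdf(sample, t):
        return sum(1 for x in sample if x <= t) / len(sample)
    return max(abs(ecdf(events, t) - ecdf(nonevents, t)) for t in thresholds)

event_risks = [0.9, 0.7, 0.8, 0.6]      # model risks, subjects with events
nonevent_risks = [0.2, 0.4, 0.3, 0.6]   # model risks, subjects without events
print(ks_distance(event_risks, nonevent_risks))   # -> 0.75
```

    A larger KS distance means the model separates events from non-events more sharply, which is what the NRI at event rate summarizes.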
