Sample records for performance level estimation

  1. Smarter Balanced Preliminary Performance Levels: Estimated MAP Scores Corresponding to the Preliminary Performance Levels of the Smarter Balanced Assessment Consortium (Smarter Balanced)

    ERIC Educational Resources Information Center

    Northwest Evaluation Association, 2015

    2015-01-01

    Recently, the Smarter Balanced Assessment Consortium (Smarter Balanced) released a document that established initial performance levels and the associated threshold scale scores for the Smarter Balanced assessment. The report included estimated percentages of students expected to perform at each of the four performance levels, reported by grade…

  2. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical as well as clinical trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical theophylline data available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
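
    The accuracy metrics named in this record (rRMSE and REE between true and estimated parameter values) are straightforward to reproduce. Below is a minimal Python sketch of one plausible formulation; the parameter name, true value, and simulated estimates are illustrative placeholders, not values from the study.

        import numpy as np

        def relative_estimation_error(true, est):
            """REE (%) of each replicate estimate against the true value."""
            return 100.0 * (est - true) / true

        def relative_rmse(true, est):
            """rRMSE (%) across replicate estimates of one parameter."""
            return 100.0 * np.sqrt(np.mean(((est - true) / true) ** 2))

        # Illustrative values only (not from the study): a "true" clearance and
        # estimates from 100 simulated data sets with multiplicative error.
        rng = np.random.default_rng(0)
        true_cl = 2.8
        est_cl = true_cl * np.exp(rng.normal(0.0, 0.15, size=100))

        print("median REE (%):", np.median(relative_estimation_error(true_cl, est_cl)))
        print("rRMSE (%):", relative_rmse(true_cl, est_cl))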

  3. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Mothers' and Teachers' Estimations of First Graders' Literacy Level and Their Relation to the Children's Actual Performance in Different SES Groups

    ERIC Educational Resources Information Center

    Korat, Ofra

    2011-01-01

    The relationship between mothers' and teachers' estimations of 60 children's literacy level and the children's actual performance was investigated in two different socio-economic status (SES) groups: low (LSES) and high (HSES). The children's reading (fluency, accuracy and comprehension) and spelling levels were measured. The mothers evaluated their own…

  5. Study on the Computational Estimation Performance and Computational Estimation Attitude of Elementary School Fifth Graders in Taiwan

    ERIC Educational Resources Information Center

    Tsao, Yea-Ling; Pan, Ting-Rung

    2011-01-01

    The main purpose of this study is to investigate what level of computational estimation performance fifth graders possess and to explore fifth graders' attitudes towards computational estimation. Two hundred and thirty-five Grade-5 students from four elementary schools in Taipei City were selected for the "Computational Estimation Test" and…

  6. Annual Status Report (FY2017): Performance Assessment for the Disposal of Low-Level Waste in the 200 East Area Burial Grounds.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Will E.; Mehta, S.; Nell, R. M.

    This annual review provides the projected dose estimates of radionuclide inventories disposed in the active 200 East Area Low-Level Waste Burial Grounds (LLBGs) since September 26, 1988. The estimates are calculated using the original dose methodology developed in the performance assessment (PA) analysis (WHC-SD-WM-TI-730). The estimates are compared with performance objectives defined in U.S. Department of Energy (DOE) requirements (DOE O 435.1 Chg 1 and companion documents DOE M 435.1-1 Chg 1 and DOE G 435.1-1). All performance objectives are currently satisfied, and operational waste acceptance criteria (HNF-EP-0063) and waste acceptance practices continue to be sufficient to maintain compliance with performance objectives. Inventory estimates and associated dose estimates from future waste disposal actions are unchanged from previous years' evaluations, which indicate potential impacts well below performance objectives. Therefore, future compliance with DOE O 435.1 Chg 1 is expected.

  7. Annual Status Report (FY2017): Performance Assessment for the Disposal of Low-Level Waste in the 200 West Area Burial Grounds.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Will E; Nell, R. M.; Mehta, S.

    This annual review provides the projected dose estimates of radionuclide inventories disposed in the active 200 West Area Low-Level Waste Burial Grounds (LLBGs) since September 26, 1988. These estimates are calculated using the original dose methodology developed in the performance assessment (PA) analysis (WHC-EP-0645). These estimates are compared with performance objectives defined in U.S. Department of Energy (DOE) requirements (DOE O 435.1 Chg 1 and its companion documents DOE M 435.1-1 Chg 1 and DOE G 435.1-1). All performance objectives are currently satisfied, and operational waste acceptance criteria (HNF-EP-0063) and waste acceptance practices continue to be sufficient to maintain compliance with performance objectives. Inventory estimates and associated dose estimates from future waste disposal actions are unchanged from previous years' evaluations, which indicate potential impacts well below performance objectives. Therefore, future compliance with DOE O 435.1 Chg 1 is expected.

  8. Peer influence on students' estimates of performance: social comparison in clinical rotations.

    PubMed

    Raat, A N Janet; Kuks, Jan B M; van Hell, E Ally; Cohen-Schotanus, Janke

    2013-02-01

    During clinical rotations, students move from one clinical situation to another. Questions exist about students' strategies for coping with these transitions. These strategies may include a process of social comparison because in this context it offers the student an opportunity to estimate his or her abilities to master a novel rotation. These estimates are relevant for learning and performance because they are related to self-efficacy. We investigated whether student estimates of their own future performance are influenced by the performance level and gender of the peer with whom the student compares him- or herself. We designed an experimental study in which participating students (n = 321) were divided into groups assigned to 12 different conditions. Each condition entailed a written comparison situation in which a peer student had completed the rotation the participant was required to undertake next. Differences between conditions were determined by the performance level (worse, similar or better) and gender of the comparison peer. The overall grade achieved by the comparison peer remained the same in all conditions. We asked participants to estimate their own future performance in that novel rotation. Differences between their estimates were analysed using analysis of variance (ANOVA). Students' estimates of their future performance were highest when the comparison peer was presented as performing less well and lowest when the comparison peer was presented as performing better (p < 0.001). Estimates of male and female students in same-gender comparison conditions did not differ. In two of three opposite-gender conditions, male students' estimates were higher than those of females (p < 0.001 and p < 0.05, respectively). Social comparison influences students' estimates of their future performance in a novel rotation. The effect depends on the performance level and gender of the comparison peer. This indicates that comparisons against particular peers may strengthen or diminish a student's self-efficacy, which, in turn, may ease or hamper the student's learning during clinical rotations. The study is limited by its experimental design. Future research should focus on students' comparison behaviour in real transitions. © Blackwell Publishing Ltd 2013.

  9. Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.

    PubMed

    Ryan, Andrew M; Burgess, James F; Dimick, Justin B

    2015-08-01

    To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
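
    As a rough illustration of the kind of difference-in-differences regression whose specification choices this study varies, the sketch below fits a two-way fixed-effects DID with standard errors clustered at the unit level. The simulated panel, variable names, and statsmodels calls are assumptions for illustration, not the authors' code or the Hospital Compare data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated panel: 50 hospitals x 8 years, half treated from year 4 onward
        # (illustrative data with a known true effect of +2 quality points).
        rng = np.random.default_rng(1)
        hospitals = np.repeat(np.arange(50), 8)
        years = np.tile(np.arange(8), 50)
        treated = (hospitals < 25).astype(int)
        post = (years >= 4).astype(int)
        quality = 70 + 2.0 * treated * post + rng.normal(0, 5, size=hospitals.size)
        df = pd.DataFrame({"hospital": hospitals, "year": years,
                           "treated": treated, "post": post, "quality": quality})

        # Two-way fixed-effects DID; cluster-robust SEs at the hospital level.
        model = smf.ols("quality ~ treated:post + C(hospital) + C(year)", data=df)
        result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})
        print(result.params["treated:post"], result.bse["treated:post"])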

  10. Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.

    PubMed

    Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping

    2015-05-01

    This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
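
    A heavily simplified sketch of the first stage described here (reading noise level samples off the high-frequency DCT coefficients of low-variation patches); the nonlocal grouping and the sparse NLF recovery model are omitted, and the patch size, selection fraction, and test image are assumptions.

        import numpy as np
        from scipy.fft import dctn

        def noise_level_from_flat_patches(image, patch=8, keep_frac=0.1):
            """Very simplified noise sigma estimate: take the flattest patches and
            read the noise level off their high-frequency DCT coefficients.
            (Illustrative only; the paper additionally groups patches nonlocally
            and recovers a full signal-dependent noise level function.)"""
            h, w = image.shape
            patches = np.array([image[i:i+patch, j:j+patch]
                                for i in range(0, h - patch, patch)
                                for j in range(0, w - patch, patch)], dtype=float)
            # Keep the lowest-variance (flattest) patches.
            order = np.argsort(patches.reshape(len(patches), -1).var(axis=1))
            flat = patches[order[: max(1, int(keep_frac * len(patches)))]]
            sigmas = []
            for p in flat:
                c = dctn(p, norm="ortho")
                hf = c[patch // 2:, patch // 2:]   # high-frequency quadrant is mostly noise
                sigmas.append(hf.std())
            return float(np.median(sigmas))

        rng = np.random.default_rng(2)
        clean = np.tile(np.linspace(0, 255, 256), (256, 1))    # smooth ramp image
        noisy = clean + rng.normal(0, 5.0, clean.shape)
        print(noise_level_from_flat_patches(noisy))            # roughly 5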

  11. GPM Level 1 Science Requirements: Science and Performance Viewed from the Ground

    NASA Technical Reports Server (NTRS)

    Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.

    2016-01-01

    GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 field-of-view (FOV) snow detection is largely verified, but at an unknown SWE rate threshold (likely <0.5-1 mm/hr liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and ground validation (GV) remote sensing.

  12. Bio-Inspired Distributed Decision Algorithms for Anomaly Detection

    DTIC Science & Technology

    2017-03-01

    Keywords: DIAMoND, Local Anomaly Detector, Total Impact Estimation, Threat Level Estimator. Only report-documentation-page and table-of-contents fragments are recoverable for this record, including the section headings "Performance of the DIAMoND Algorithm as a DNS-Server Level Attack Detection and Mitigation ... with 6 Nodes" and "Hierarchical 2-Level Topology".

  13. Estimating Driving Performance Based on EEG Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Wu, Ruei-Cheng; Jung, Tzyy-Ping; Liang, Sheng-Fu; Huang, Teng-Yi

    2005-12-01

    The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by driver drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents caused by drowsiness is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrated that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
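
    The processing chain described here (log band power features, correlation analysis, PCA, linear regression) can be sketched with scikit-learn. The feature matrix, correlation threshold, and component count below are illustrative stand-ins, not the authors' configuration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        # Stand-in data: log band power features per EEG epoch and lane-deviation targets.
        rng = np.random.default_rng(3)
        n_epochs, n_features = 400, 60
        log_band_power = rng.normal(size=(n_epochs, n_features))
        lane_deviation = log_band_power[:, :5].sum(axis=1) + rng.normal(0, 0.5, n_epochs)

        # Crude stand-in for the correlation analysis: keep features correlated with the
        # target on the training epochs only, then PCA + linear regression.
        train = slice(0, 300)
        corr = np.array([abs(np.corrcoef(log_band_power[train, j], lane_deviation[train])[0, 1])
                         for j in range(n_features)])
        selected = log_band_power[:, corr > 0.1]
        model = make_pipeline(PCA(n_components=5), LinearRegression())
        model.fit(selected[train], lane_deviation[train])
        print(model.score(selected[300:], lane_deviation[300:]))   # R^2 on held-out epochs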

  14. Insomnia and the performance of US workers: results from the America insomnia survey.

    PubMed

    Kessler, Ronald C; Berglund, Patricia A; Coulouvrat, Catherine; Hajak, Goeran; Roth, Thomas; Shahly, Victoria; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K

    2011-09-01

    To estimate the prevalence and associations of broadly defined (i.e., meeting full ICD-10, DSM-IV, or RDC/ICSD-2 inclusion criteria) insomnia with work performance net of comorbid conditions in the America Insomnia Survey (AIS). Cross-sectional telephone survey. National sample of 7,428 employed health plan subscribers (ages 18+). None. Broadly defined insomnia was assessed with the Brief Insomnia Questionnaire (BIQ). Work absenteeism and presenteeism (low on-the-job work performance defined in the metric of lost workday equivalents) were assessed with the WHO Health and Work Performance Questionnaire (HPQ). Regression analysis examined associations between insomnia and HPQ scores controlling for 26 comorbid conditions based on self-report and medical/pharmacy claims records. The estimated prevalence of insomnia was 23.2%. Insomnia was significantly associated with lost work performance due to presenteeism (χ²(1) = 39.5, P < 0.001) but not absenteeism (χ²(1) = 3.2, P = 0.07), with an annualized individual-level association of insomnia with presenteeism equivalent to 11.3 days of lost work performance. This estimate decreased to 7.8 days when controls were introduced for comorbid conditions. The individual-level human capital value of this net estimate was $2,280. If we provisionally assume these estimates generalize to the total US workforce, they are equivalent to annualized population-level estimates of 252.7 million days and $63.2 billion. Insomnia is associated with substantial workplace costs. Although experimental studies suggest some of these costs could be recovered with insomnia disease management programs, effectiveness trials are needed to obtain precise estimates of return-on-investment of such interventions from the employer perspective.

  15. Quantification of histone modification ChIP-seq enrichment for data mining and machine learning applications

    PubMed Central

    2011-01-01

    Background The advent of ChIP-seq technology has made the investigation of epigenetic regulatory networks a computationally tractable problem. Several groups have applied statistical computing methods to ChIP-seq datasets to gain insight into the epigenetic regulation of transcription. However, methods for estimating enrichment levels in ChIP-seq data for these computational studies are understudied and variable. Since the conclusions drawn from these data mining and machine learning applications strongly depend on the enrichment level inputs, a comparison of estimation methods with respect to the performance of statistical models should be made. Results Various methods were used to estimate the gene-wise ChIP-seq enrichment levels for 20 histone methylations and the histone variant H2A.Z. The Multivariate Adaptive Regression Splines (MARS) algorithm was applied for each estimation method using the estimation of enrichment levels as predictors and gene expression levels as responses. The methods used to estimate enrichment levels included tag counting and model-based methods that were applied to whole genes and specific gene regions. These methods were also applied to various sizes of estimation windows. The MARS model performance was assessed with the Generalized Cross-Validation Score (GCV). We determined that model-based methods of enrichment estimation that spatially weight enrichment based on average patterns provided an improvement over tag counting methods. Also, methods that included information across the entire gene body provided improvement over methods that focus on a specific sub-region of the gene (e.g., the 5' or 3' region). Conclusion The performance of data mining and machine learning methods when applied to histone modification ChIP-seq data can be improved by using data across the entire gene body, and incorporating the spatial distribution of enrichment. Refinement of enrichment estimation ultimately improved accuracy of model predictions. PMID:21834981
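
    The simplest of the enrichment estimators compared here is window-based tag counting over a gene body; a hedged sketch follows (illustrative only; the spatially weighted, model-based estimators the study favours are more involved). The window size, flank length, and data are assumptions.

        import numpy as np

        def tag_count_enrichment(read_starts, gene_start, gene_end, flank=2000):
            """Count ChIP-seq tags falling in a window around a gene body and
            normalise by window length (tags per kb). Illustrative only."""
            lo, hi = gene_start - flank, gene_end + flank
            reads = np.asarray(read_starts)
            count = np.count_nonzero((reads >= lo) & (reads < hi))
            return 1000.0 * count / (hi - lo)

        # Toy example: simulated read start positions on one chromosome.
        rng = np.random.default_rng(4)
        reads = rng.integers(0, 1_000_000, size=50_000)
        print(tag_count_enrichment(reads, gene_start=200_000, gene_end=210_000))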

  16. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
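
    One classic pitfall of the kind alluded to in this record is performing feature selection on the full data set before cross-validation. The sketch below contrasts the flawed and the correct arrangement on pure-noise data, where an honest estimate should sit near chance; the classifier, selector, and fold count are illustrative choices, not the authors' setup.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Pure-noise data: any apparent "signal" is an artefact of the evaluation.
        rng = np.random.default_rng(5)
        X = rng.normal(size=(60, 2000))
        y = rng.integers(0, 2, size=60)

        # Pitfall: select features on ALL data, then cross-validate -> optimistic.
        selector = SelectKBest(f_classif, k=20).fit(X, y)
        leaky = cross_val_score(LogisticRegression(max_iter=1000),
                                selector.transform(X), y, cv=5).mean()

        # Correct: selection happens inside each training fold via a pipeline.
        honest = cross_val_score(
            make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000)),
            X, y, cv=5).mean()

        print(f"leaky CV accuracy:  {leaky:.2f}")    # typically well above 0.5
        print(f"honest CV accuracy: {honest:.2f}")   # close to 0.5 (chance)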

  17. What do parents know about their children's comprehension of emotions? accuracy of parental estimates in a community sample of pre-schoolers.

    PubMed

    Kårstad, S B; Kvello, O; Wichstrøm, L; Berg-Nielsen, T S

    2014-05-01

    Parents' ability to correctly perceive their child's skills has implications for how the child develops. In some studies, parents have been shown to overestimate their child's abilities in areas such as IQ, memory and language. Emotion Comprehension (EC) is a skill central to children's emotion regulation, initially learned from their parents. In this cross-sectional study we first tested children's EC and then asked parents to estimate the child's performance. Thus, a measure of accuracy between child performance and parents' estimates was obtained. Subsequently, we obtained information on child and parent factors that might predict parents' accuracy in estimating their child's EC. Child EC and parental accuracy of estimation were assessed in a community sample of 882 4-year-olds who completed the Test of Emotion Comprehension (TEC). The parents were instructed to guess their children's responses on the TEC. Predictors of parental accuracy of estimation were the child's actual performance on the TEC, child language comprehension, observed parent-child interaction, the education level of the parent, and child mental health. Ninety-one per cent of the parents overestimated their children's EC. On average, parents estimated that their 4-year-old children would display the level of EC corresponding to a 7-year-old. Accuracy of parental estimation was predicted by high child performance on the TEC, advanced child language comprehension, and more optimal parent-child interaction. Parents' ability to estimate the level of their child's EC was characterized by a substantial overestimation. The more competent the child, and the more sensitive and structuring the parent was when interacting with the child, the more accurate the parent was in the estimation of their child's EC. © 2013 John Wiley & Sons Ltd.

  18. Parametric study of helicopter aircraft systems costs and weights

    NASA Technical Reports Server (NTRS)

    Beltramo, M. N.

    1980-01-01

    Weight estimating relationships (WERs) and recurring production cost estimating relationships (CERs) were developed for helicopters at the system level. The WERs estimate system level weight based on performance or design characteristics which are available during concept formulation or the preliminary design phase. The CER (or CERs in some cases) for each system utilize weight (either actual or estimated using the appropriate WER) and production quantity as the key parameters.

  19. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimation of the noise level. Next, the final estimation was directly obtained with a nonlinear mapping (rectification) function that was trained on some representative noisy images corrupted with different known noise levels. Compared with the state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
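
    The first step of the algorithm (smallest eigenvalue of the covariance matrix of randomly sampled raw patches as a preliminary noise variance) can be sketched directly; the trained nonlinear rectification stage is omitted because it depends on the authors' learned mapping. The patch size, sample count, and test image are assumptions.

        import numpy as np

        def pca_noise_level(image, patch=7, n_patches=5000, seed=0):
            """Preliminary noise sigma: square root of the smallest eigenvalue of
            the covariance matrix of randomly sampled patches (first step only;
            the paper then rectifies this value with a trained nonlinear map)."""
            rng = np.random.default_rng(seed)
            h, w = image.shape
            rows = rng.integers(0, h - patch, n_patches)
            cols = rng.integers(0, w - patch, n_patches)
            patches = np.stack([image[r:r+patch, c:c+patch].ravel()
                                for r, c in zip(rows, cols)])
            cov = np.cov(patches, rowvar=False)
            return float(np.sqrt(np.linalg.eigvalsh(cov)[0]))   # smallest eigenvalue

        rng = np.random.default_rng(6)
        clean = np.outer(np.linspace(0, 255, 300), np.ones(300))
        print(pca_noise_level(clean + rng.normal(0, 10, clean.shape)))   # roughly 10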

  20. County Variation in Children's and Adolescents' Health Status and School District Performance in California

    PubMed Central

    Jung, Sunyoung

    2008-01-01

    Objectives. We examined the association between county-level estimates of children's health status and school district performance in California. Methods. We used 3 data sources: the California Health Interview Survey, district archives from the California Department of Education, and census-based estimates of county demographic characteristics. We used logistic regression to estimate whether a school district's failure to meet adequate yearly progress goals in 2004 to 2005 was a function of child and adolescent health status. Models included district- and county-level fixed effects and were adjusted for the clustering of districts within counties. Results. County-level changes in children's and adolescents' health status decreased the likelihood that a school district would fail to meet adequate yearly progress goals during the investigation period. Health status did not moderate the relatively poor performance of predominantly minority districts. Conclusions. We found empirical support that area variation in children's and adolescents' health status exerts a contextual effect on school district performance. Future research should explore the specific mechanisms through which area-level child health influences school and district achievement. PMID:18309137

  1. Insomnia and the Performance of US Workers: Results from the America Insomnia Survey

    PubMed Central

    Kessler, Ronald C.; Berglund, Patricia A.; Coulouvrat, Catherine; Hajak, Goeran; Roth, Thomas; Shahly, Victoria; Shillington, Alicia C.; Stephenson, Judith J.; Walsh, James K.

    2011-01-01

    Study Objectives: To estimate the prevalence and associations of broadly defined (i.e., meeting full ICD-10, DSM-IV, or RDC/ICSD-2 inclusion criteria) insomnia with work performance net of comorbid conditions in the America Insomnia Survey (AIS). Design/Setting: Cross-sectional telephone survey. Participants: National sample of 7,428 employed health plan subscribers (ages 18+). Interventions: None. Measurements and Results: Broadly defined insomnia was assessed with the Brief Insomnia Questionnaire (BIQ). Work absenteeism and presenteeism (low on-the-job work performance defined in the metric of lost workday equivalents) were assessed with the WHO Health and Work Performance Questionnaire (HPQ). Regression analysis examined associations between insomnia and HPQ scores controlling for 26 comorbid conditions based on self-report and medical/pharmacy claims records. The estimated prevalence of insomnia was 23.2%. Insomnia was significantly associated with lost work performance due to presenteeism (χ²(1) = 39.5, P < 0.001) but not absenteeism (χ²(1) = 3.2, P = 0.07), with an annualized individual-level association of insomnia with presenteeism equivalent to 11.3 days of lost work performance. This estimate decreased to 7.8 days when controls were introduced for comorbid conditions. The individual-level human capital value of this net estimate was $2,280. If we provisionally assume these estimates generalize to the total US workforce, they are equivalent to annualized population-level estimates of 252.7 million days and $63.2 billion. Conclusions: Insomnia is associated with substantial workplace costs. Although experimental studies suggest some of these costs could be recovered with insomnia disease management programs, effectiveness trials are needed to obtain precise estimates of return-on-investment of such interventions from the employer perspective. Citation: Kessler RC; Berglund PA; Coulouvrat C; Hajak G; Roth T; Shahly V; Shillington AC; Stephenson JJ; Walsh JK. Insomnia and the performance of US workers: results from the America Insomnia Survey. SLEEP 2011;34(9):1161-1171. PMID:21886353

  2. Comparison of LiDAR- and photointerpretation-based estimates of canopy cover

    Treesearch

    Demetrios Gatziolis

    2012-01-01

    An evaluation of the agreement between photointerpretation- and LiDAR-based estimates of canopy cover was performed using 397 90 x 90 m reference areas in Oregon. It was determined that at low canopy cover levels LiDAR estimates tend to exceed those from photointerpretation and that this tendency reverses at high canopy cover levels. Characteristics of the airborne...

  3. Procedures for estimating the frequency of commercial airline flights encountering high cabin ozone levels

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1979-01-01

    Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: namely, estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provides a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example case calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.
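
    For the limited-information case mentioned here (only ambient ozone mean and standard deviation available), the flavour of the calculation can be sketched as follows: assume cabin ozone equals ambient ozone scaled by a retention ratio and is normally distributed, then compute the probability of exceeding a cabin limit. The distributional form and all numbers are illustrative assumptions, not values from the report.

        from scipy.stats import norm

        # Illustrative inputs (not from the report): ambient ozone statistics at
        # cruise altitude, cabin retention ratio, and a cabin ozone limit.
        ambient_mean_ppb = 250.0
        ambient_std_ppb = 120.0
        retention_ratio = 0.7          # assumed cabin / ambient ratio
        cabin_limit_ppb = 250.0

        # Assume cabin ozone ~ Normal(r * mu, r * sigma); probability of exceedance.
        cabin_mean = retention_ratio * ambient_mean_ppb
        cabin_std = retention_ratio * ambient_std_ppb
        p_exceed = norm.sf(cabin_limit_ppb, loc=cabin_mean, scale=cabin_std)
        print(f"P(cabin ozone > {cabin_limit_ppb:.0f} ppb) = {p_exceed:.3f}")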

  4. Comparisons of Student Achievement Levels by District Performance and Poverty. ACT Research Report Series 2016-11

    ERIC Educational Resources Information Center

    Dougherty, Chrys; Shaw, Teresa

    2016-01-01

    This report looks at student achievement levels in Arkansas school districts disaggregated by district poverty and by the district's performance relative to other districts. We estimated district performance statistics by subject and grade level (4, 8, and 11-12) for longitudinal student cohorts, using statistical models that adjusted for district…

  5. C-5 Reliability Enhancement and Re-engining Program (C-5 RERP)

    DTIC Science & Technology

    2015-12-01

    Selected Acquisition Report fragment (December 2015): only table and list fragments are recoverable for this record, covering production estimates against the current Acquisition Program Baseline objective/threshold and demonstrated performance (e.g., Time To Climb/Initial Level Off at 837,000 lbs), an abbreviation list (RCR - Runway Condition Reading; SDD - System Design and Development; SL - Sea Level), total cost estimate figures, and the confidence level of the cost estimate.

  6. Revisiting global mean sea level budget closure: Preliminary results from an integrative study within ESA's Climate Change Initiative Sea Level Budget Closure (SLBC-CCI)

    NASA Astrophysics Data System (ADS)

    Palanisamy, H.; Cazenave, A. A.

    2017-12-01

    The global mean sea level budget is revisited over two time periods: the entire altimetry era, 1993-2015, and the Argo/GRACE era, 2003-2015, using the version '0' of sea level components estimated by the SLBC-CCI teams. The SLBC-CCI is a European Space Agency project on sea level budget closure using CCI products. Over the entire altimetry era, the sea level budget was performed as the sum of steric and mass components that include contributions from total land water storage, glaciers, ice sheets (Greenland and Antarctica) and total water vapor content. Over the Argo/GRACE era, it was performed as the sum of steric and GRACE-based ocean mass. Preliminary budget analysis performed over the altimetry era (1993-2015) results in a trend value of 2.83 mm/yr. On comparison with the observed altimetry-based global mean sea level trend over the same period (3.03 ± 0.5 mm/yr), we obtain a residual of 0.2 mm/yr. In spite of a residual of 0.2 mm/yr, the sea level budget result obtained over the altimetry era is very promising as this has been performed using the version '0' of the sea level components. Furthermore, uncertainties are not yet included in this study as uncertainty estimation for each sea level component is currently underway. Over the Argo/GRACE era (2003-2015), the trend estimated from the sum of steric and GRACE ocean mass amounts to 2.63 mm/yr while that observed by satellite altimetry is 3.37 mm/yr, thereby leaving a residual of 0.7 mm/yr. Here an ensemble GRACE ocean mass data set (the mean of various available GRACE ocean mass data) was used for the estimation. Using individual GRACE data results in a residual range of 0.5-1.1 mm/yr. Investigations are under way to determine the cause of the vast difference between the observed sea level and the sea level obtained from steric and GRACE ocean mass. One main suspect is the impact of GRACE data gaps on sea level budget analysis due to lack of GRACE data over several months since 2011. The current action plan of the project is to work on an accurate closure of the sea level budget using both the above performed methodologies. We also intend to provide a standardized uncertainty estimation and to correctly identify the causes leading to sea level budget non-closure if that is the case.

  7. Variability and predictability of finals times of elite rowers.

    PubMed

    Smith, Tiaki Brett; Hopkins, Will G

    2011-11-01

    Little is known about the competitive performance characteristics of elite rowers. We report here analyses of performance times for finalists in world-class regattas from 1999 to 2009. The data were official race times for the 10 men's and 7 women's single and crewed boat classes, each with ∼ 200-300 different boats competing in 1-33 of the 46 regattas at 18 venues. A linear mixed model of race times for each boat class provided estimates of variability as coefficients of variation after adjustment for means of calendar year, level of competition (Olympics, world championship, World Cup), venue, and level of final (A, B, C, …). Mean performance was substantially slower between consecutive levels of competition (1.5%, 2.7%) and consecutive levels of finals (∼ 1%-2%). Differences in the effects of venue and of environmental conditions, estimated as variability in mean race time between venues and finals, were extremely large (∼ 3.0%). Within-boat race-to-race variability for A finalists was 1.1% for single sculls and 0.9% for crewed boats, with little difference between men and women and only a small increase in lower-level finalists. Predictability of performance, expressed as intraclass correlation coefficients, showed considerable differences between boat classes, but the mean was high (∼ 0.63), with little difference between crewed and single boats, between men and women, and between within and between years. The race-to-race variability of boat times of ∼ 1.0% is similar to that in comparable endurance sports performed against water or air resistance. Estimates of the smallest important performance enhancement (∼ 0.3%) and the effects of level of competition, level of final, venue, environment, and boat class will help inform investigations of factors affecting elite competitive rowing performance.

  8. Spectral imaging using consumer-level devices and kernel-based regression.

    PubMed

    Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko

    2016-06-01

    Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using Nikon D80 RGB camera and Dell Vostro 2520 laptop screen as a light source. Estimations were performed without spectral characteristics of the devices and by emphasizing simplicity for training sets and estimation model optimization. Spectral and color error images are shown for the estimations using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and link function for bounded estimation were evaluated. Results suggest modest requirements for a training set and show that all estimation models have markedly improved accuracy with respect to the DE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) from a weak training set (Digital ColorChecker SG) case when small representative training data were used in the estimation.
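
    A minimal sketch of the mapping being learned here (camera RGB responses to reflectance spectra via kernel regression), using scikit-learn's KernelRidge with a first-degree polynomial kernel as a stand-in for the paper's inhomogeneous polynomial and Matérn kernels. The synthetic spectra and camera sensitivities are placeholders for measured training charts.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        # Synthetic stand-in data: RGB responses -> 31-band reflectance spectra
        # (400-700 nm at 10 nm steps). Real training sets would come from measured
        # charts such as the ColorChecker samples mentioned in the abstract.
        rng = np.random.default_rng(7)
        n_train, n_bands = 140, 31
        spectra = np.clip(rng.beta(2, 2, size=(n_train, n_bands)), 0, 1)
        mixing = rng.uniform(size=(n_bands, 3))            # fake camera sensitivities
        rgb = spectra @ mixing + rng.normal(0, 0.01, (n_train, 3))

        # First-degree inhomogeneous polynomial kernel: k(x, y) = gamma * x.y + c.
        model = KernelRidge(alpha=1e-3, kernel="polynomial", degree=1, coef0=1.0)
        model.fit(rgb, spectra)

        test_rgb = spectra[:5] @ mixing
        estimated_spectra = model.predict(test_rgb)
        print(np.abs(estimated_spectra - spectra[:5]).mean())   # mean absolute error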

  9. Effect of patient selection method on provider group performance estimates.

    PubMed

    Thorpe, Carolyn T; Flood, Grace E; Kraft, Sally A; Everett, Christine M; Smith, Maureen A

    2011-08-01

    Performance measurement at the provider group level is increasingly advocated, but different methods for selecting patients when calculating provider group performance have received little evaluation. We compared 2 currently used methods according to characteristics of the patients selected and impact on performance estimates. We analyzed Medicare claims data for fee-for-service beneficiaries with diabetes ever seen at an academic multispeciality physician group in 2003 to 2004. We examined sample size, sociodemographics, clinical characteristics, and receipt of recommended diabetes monitoring in 2004 for the groups of patients selected using 2 methods implemented in large-scale performance initiatives: the Plurality Provider Algorithm and the Diabetes Care Home method. We examined differences among discordantly assigned patients to determine evidence for differential selection regarding these measures. Fewer patients were selected under the Diabetes Care Home method (n=3558) than the Plurality Provider Algorithm (n=4859). Compared with the Plurality Provider Algorithm, the Diabetes Care Home method preferentially selected patients who were female, not entitled because of disability, older, more likely to have hypertension, and less likely to have kidney disease and peripheral vascular disease, and had lower levels of predicted utilization. Diabetes performance was higher under the Diabetes Care Home method, with 67% versus 58% receiving ≥1 A1c test, 70% versus 65% receiving ≥1 low-density lipoprotein (LDL) test, and 38% versus 37% receiving an eye examination. The method used to select patients when calculating provider group performance may affect patient case mix and estimated performance levels, and warrants careful consideration when comparing performance estimates.
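
    One plausible reading of a plurality-of-visits attribution rule is sketched below; the Diabetes Care Home method adds further criteria not reproduced here, and the claims fields, tie-breaking, and data are illustrative assumptions.

        from collections import Counter

        import pandas as pd

        # Illustrative visit-level claims: one row per office visit.
        visits = pd.DataFrame({
            "patient": ["A", "A", "A", "B", "B", "C"],
            "provider_group": ["G1", "G1", "G2", "G2", "G2", "G1"],
        })

        def plurality_assignment(visits):
            """Assign each patient to the group accounting for the most visits
            (ties broken arbitrarily here; real rules specify tie-breakers)."""
            def top_group(groups):
                return Counter(groups).most_common(1)[0][0]
            return visits.groupby("patient")["provider_group"].agg(top_group)

        print(plurality_assignment(visits))
        # A -> G1, B -> G2, C -> G1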

  10. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Building Structures

    USGS Publications Warehouse

    Çelebi, Mehmet

    1998-01-01

    Several approaches are used to assess the performance of the built environment following an earthquake -- preliminary damage surveys conducted by professionals, detailed studies of individual structures, and statistical analyses of groups of structures. Reports of damage that are issued by many organizations immediately following an earthquake play a key role in directing subsequent detailed investigations. Detailed studies of individual structures and statistical analyses of groups of structures may be motivated by particularly good or bad performance during an earthquake. Beyond this, practicing engineers typically perform stress analyses to assess the performance of a particular structure to vibrational levels experienced during an earthquake. The levels may be determined from recorded or estimated ground motions; actual levels usually differ from design levels. If a structure has seismic instrumentation to record response data, the estimated and recorded response and behavior of the structure can be compared.

  11. Estimating groundwater levels using system identification models in Nzhelele and Luvuvhu areas, Limpopo Province, South Africa

    NASA Astrophysics Data System (ADS)

    Makungo, Rachel; Odiyo, John O.

    2017-08-01

    This study was focused on testing the ability of a coupled linear and non-linear system identification model in estimating groundwater levels. System identification provides an alternative approach for estimating groundwater levels in areas that lack the data required by physically-based models. It also overcomes the limitations of physically-based models due to approximations, assumptions and simplifications. Daily groundwater levels for 4 boreholes, rainfall and evaporation data covering the period 2005-2014 were used in the study. Seventy and thirty percent of the data were used to calibrate and validate the model, respectively. Correlation coefficient (R), coefficient of determination (R²), root mean square error (RMSE), percent bias (PBIAS), Nash-Sutcliffe coefficient of efficiency (NSE) and graphical fits were used to evaluate the model performance. Values for R, R², RMSE, PBIAS and NSE ranged from 0.8 to 0.99, 0.63 to 0.99, 0.01-2.06 m, -7.18 to 1.16 and 0.68 to 0.99, respectively. Comparisons of observed and simulated groundwater levels for calibration and validation runs showed close agreement. The model performance mostly ranged from satisfactory and good to very good and excellent. Thus, the model is able to estimate groundwater levels. The calibrated models can reasonably capture the relationship between input and output variables and can thus be used to estimate long-term groundwater levels.
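
    The goodness-of-fit statistics reported here (R, R², RMSE, PBIAS, NSE) are standard and easy to compute; the sketch below evaluates them for observed versus simulated groundwater levels, with toy numbers in place of the borehole records. Note that sign conventions for PBIAS vary between authors.

        import numpy as np

        def fit_statistics(obs, sim):
            """Common hydrological goodness-of-fit statistics (one common PBIAS
            sign convention; others flip the sign)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            resid = obs - sim
            r = np.corrcoef(obs, sim)[0, 1]
            return {
                "R": r,
                "R2": r ** 2,
                "RMSE": np.sqrt(np.mean(resid ** 2)),
                "PBIAS": 100.0 * resid.sum() / obs.sum(),
                "NSE": 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2),
            }

        # Toy daily groundwater levels (m), not the study data.
        observed = np.array([12.1, 12.0, 11.8, 11.9, 12.3, 12.6, 12.4])
        simulated = np.array([12.0, 12.1, 11.9, 12.0, 12.2, 12.5, 12.5])
        print(fit_statistics(observed, simulated))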

  12. Design of a two-level power system linear state estimator

    NASA Astrophysics Data System (ADS)

    Yang, Tao

    The availability of synchro-phasor data has raised the possibility of a linear state estimator if the inputs are only complex currents and voltages and if there are enough such measurements to meet observability and redundancy requirements. Moreover, the new digital substations can perform some of the computation at the substation itself, resulting in a more accurate two-level state estimator. The objective of this research is to develop a two-level linear state estimator processing synchro-phasor data and estimating the states at both the substation level and the control center level. Both the mathematical algorithms that are different from those in the present state estimation procedure and the layered architecture of databases, communications and application programs that are required to support this two-level linear state estimator are described in this dissertation. In addition, as the availability of phasor measurements at substations will increase gradually, this research also describes how the state estimator can be enhanced to handle both the traditional state estimator and the proposed linear state estimator simultaneously. This provides a way to immediately utilize the benefits in those parts of the system where such phasor measurements become available and provides a pathway to transition to the smart grid of the future. The design procedure of the two-level state estimator is applied to two study systems. The first study system is the IEEE-14 bus system. The second one is the 179 bus Western Electricity Coordinating Council (WECC) system. The static database for the substations is constructed from the power flow data of these systems and the real-time measurement database is produced by a power system dynamic simulating tool (TSAT). Time-skew problems that may be caused by communication delays are also considered and simulated. We used the Network Simulator (NS) tool to simulate a simple communication system and analyse its time delay performance. These time delays were too small to affect the results, especially since the measurement data is time-stamped and the state estimator for these small systems could be run with subsecond frequency. Keywords: State Estimation, Synchro-Phasor Measurement, Distributed System, Energy Control Center, Substation, Time-skew
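
    Because synchrophasor measurements are complex voltages and currents that are linear in the bus voltage states, the estimate reduces to a single weighted least-squares solve rather than the iterative Gauss-Newton scheme of conventional state estimation. A minimal sketch under an assumed measurement matrix and weights (not the dissertation's network models):

        import numpy as np

        def linear_state_estimate(H, z, weights):
            """Weighted least-squares linear state estimator:
            x_hat = (H^H W H)^{-1} H^H W z, for a linear phasor measurement
            model z = H x + e. Illustrative only."""
            W = np.diag(weights)
            A = H.conj().T @ W @ H
            b = H.conj().T @ W @ z
            return np.linalg.solve(A, b)

        # Tiny example: 2 bus voltage states observed through 3 phasor measurements.
        H = np.array([[1.0 + 0j, 0.0],
                      [0.0, 1.0 + 0j],
                      [1.0, -1.0]])          # e.g. a current-like difference measurement
        x_true = np.array([1.02 + 0.01j, 0.98 - 0.03j])
        rng = np.random.default_rng(8)
        z = H @ x_true + 0.001 * (rng.normal(size=3) + 1j * rng.normal(size=3))
        print(linear_state_estimate(H, z, weights=np.ones(3)))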

  13. PRELIMINARY ESTIMATES OF PERFORMANCE AND COST OF MERCURY EMISSION CONTROL TECHNOLOGY APPLICATIONS ON ELECTRIC UTILITY BOILERS: AN UPDATE

    EPA Science Inventory

    The paper presents estimates of performance levels and related costs associated with controlling mercury (Hg) emissions from coal-fired power plants using either powdered activated carbon (PAC) injection or multipollutant control in which Hg capture is enhanced in existing and ne...

  14. Conventional Rapid Latex Agglutination in Estimation of von Willebrand Factor: Method Revisited and Potential Clinical Applications

    PubMed Central

    Che Hussin, Che Maraina

    2014-01-01

    Measurement of von Willebrand factor antigen (VWF : Ag) levels is usually performed in a specialised laboratory, which limits its application in routine clinical practice. So far, no commercial rapid test kit is available for VWF : Ag estimation. This paper discusses the technical aspects of the latex agglutination method, which was established to suit the purpose of estimating von Willebrand factor (VWF) levels in the plasma sample. The latex agglutination test can be performed qualitatively and semiquantitatively. Reproducibility, stability, linearity, limit of detection, interference, and method comparison studies were conducted to evaluate the performance of this test. The semiquantitative latex agglutination test was strongly correlated with the reference immunoturbidimetric assay (Spearman's rho = 0.946, P < 0.001, n = 132). A substantial agreement (κ = 0.77) was found between the qualitative latex agglutination test and the reference assay. Using the scoring system for the rapid latex test, no agglutination corresponds to 0% VWF : Ag (control negative), a 1+ reaction is equivalent to <20% VWF : Ag, and a 4+ reaction indicates >150% VWF : Ag (when compared with the immunoturbidimetric assay). The findings from the evaluation studies suggest that the latex agglutination method is suitable for use as a rapid test kit for the estimation of VWF : Ag levels in various clinical conditions associated with high levels and low levels of VWF : Ag. PMID:25759835

  15. Information's role in the estimation of chaotic signals

    NASA Astrophysics Data System (ADS)

    Drake, Daniel Fred

    1998-11-01

    Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise. Finally, a slight modification of the linear description leads to a method of synthesizing on limited precision platforms "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.

  16. Statistical fusion of continuous labels: identification of cardiac landmarks

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.

    2011-03-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter-one of the key performance indices-is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.

  17. Statistical Fusion of Continuous Labels: Identification of Cardiac Landmarks.

    PubMed

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L; Landman, Bennett A

    2011-01-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter-one of the key performance indices-is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.
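
    The underlying measurement model for continuous label fusion (observed value = true value + rater bias + rater noise) can be sketched with a toy EM loop. The version below anchors the rater biases by forcing their mean to zero, a crude stand-in for the a priori probabilities the paper introduces to resolve the bias indeterminacy it describes; all data and the anchoring choice are illustrative assumptions, not the authors' formulation.

        import numpy as np

        def continuous_fusion(y, n_iter=50):
            """Toy EM for fusing continuous labels from multiple raters under the
            model y[i, j] = truth[i] + bias[j] + noise, noise ~ N(0, var[j]).
            Sketch only: the biases are identifiable only up to a common offset
            (the indeterminacy discussed in the abstract), so here they are
            anchored by forcing the mean bias to zero in place of a prior."""
            n_items, n_raters = y.shape
            bias = np.zeros(n_raters)
            var = np.ones(n_raters)
            for _ in range(n_iter):
                # E-step: precision-weighted estimate of the underlying true values.
                w = 1.0 / var
                truth = ((y - bias) * w).sum(axis=1) / w.sum()
                # M-step: per-rater bias and variance given the current truth.
                resid = y - truth[:, None]
                bias = resid.mean(axis=0)
                bias -= bias.mean()                     # anchor the common offset
                var = ((resid - bias) ** 2).mean(axis=0) + 1e-12
            return truth, bias, var

        rng = np.random.default_rng(9)
        true_vals = rng.normal(0, 1, 200)
        raters = (true_vals[:, None] + np.array([0.5, -0.2, -0.3])
                  + rng.normal(0, [0.1, 0.3, 0.6], (200, 3)))
        truth, bias, var = continuous_fusion(raters)
        print(bias.round(2), np.sqrt(var).round(2))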

  18. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution.

    PubMed

    Harrison, Xavier A

    2015-01-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.

  19. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained essentially unaffected when the DC error was less than 5%, and that a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
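
    For orientation, exertion time and duty cycle are simple functions of per-frame exertion flags once a video has been scored; HAL is then typically estimated from exertion frequency and duty cycle using published equations or lookup tables. The Python sketch below is a hedged illustration: the frame rate, the flags and any upstream tracker producing them are hypothetical.

```python
import numpy as np

def exertion_metrics(exerting, fps=30.0):
    """Compute exertion time and duty cycle from per-frame exertion flags.

    exerting : boolean array, True while the hand is judged to be exerting force
    fps      : video frame rate
    """
    exertion_time = exerting.sum() / fps              # seconds spent exerting
    total_time = len(exerting) / fps
    duty_cycle = 100.0 * exertion_time / total_time   # percent of cycle time exerting
    return exertion_time, duty_cycle

# Example: 10 s clip with exertion flags from some upstream tracker (hypothetical)
flags = np.zeros(300, dtype=bool)
flags[30:90] = True
flags[150:240] = True
print(exertion_metrics(flags))   # (5.0 s, 50.0 %)
```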

  20. A Novel Strategy of Ambiguity Correction for the Improved Faraday Rotation Estimator in Linearly Full-Polarimetric SAR Data.

    PubMed

    Li, Jinhui; Ji, Yifei; Zhang, Yongsheng; Zhang, Qilei; Huang, Haifeng; Dong, Zhen

    2018-04-10

    Spaceborne synthetic aperture radar (SAR) missions operating at low frequencies, such as L-band or P-band, are significantly influenced by the ionosphere. As one of the serious ionospheric effects, Faraday rotation (FR) is a major distortion source for polarimetric SAR (PolSAR) applications. Various published FR estimators, along with an improved one, have been introduced to address this issue, all of which are implemented by processing a set of PolSAR real data. The improved estimator exhibits optimal robustness in the performance analysis, especially in terms of system noise. However, all published estimators, including the improved estimator, suffer from a potential FR angle (FRA) ambiguity. A novel strategy for correcting this ambiguity in these FR estimators is proposed and presented as a flow process, divided into pixel-level and image-level correction. The former has not previously been recognized and is therefore considered in particular detail. Finally, validation experiments show the prominent performance of the proposed strategy.

  1. The development of response surface pathway design to reduce animal numbers in toxicity studies

    PubMed Central

    2014-01-01

    Background: This study describes the development of the Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and the Random Walk (RW) design. Methods: A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison with UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results: The in vivo experiment with the basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, the reduction in sample size achieved by increasing the number of mice on higher design levels, and the incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those from the basic RSP design. Furthermore, the optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information when compared with the 4-level designs. Simulated comparison of the RSP design with the UDPs and RW design demonstrated the superiority of RSP. Conclusion: The optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and the RW design. PMID:24661560

  2. Patriot Advanced Capability-3 Missile Segment Enhancement (PAC-3 MSE)

    DTIC Science & Technology

    2015-12-01

    Threshold Demonstrated Performance Current Estimate System Training Proficiency Level Soldiers (Operators, Maintainers, and Leaders) are able to...constructive training environments by using TADSS to conduct multi-level training for both operators and maintenance personnel. With the addition...0.0 Total 6037.0 6037.0 N/A 6276.9 6722.3 6722.3 6900.3 Current APB Cost Estimate Reference Army Cost Position dated February 28, 2014 Confidence Level

  3. Validity of the Nike+ device during walking and running.

    PubMed

    Kane, N A; Simmons, M C; John, D; Thompson, D L; Bassett, D R; Basset, D R

    2010-02-01

    We determined the validity of the Nike+ device for estimating speed, distance, and energy expenditure (EE) during walking and running. Twenty trained individuals performed a maximal oxygen uptake test and underwent anthropometric and body composition testing. Each participant was outfitted with a Nike+ sensor inserted into the shoe and an Apple iPod nano. They performed eight 6-min stages on the treadmill, including level walking at 55, 82, and 107 m·min(-1), inclined walking (82 m·min(-1)) at 5 and 10% grades, and level running at 134, 161, and 188 m·min(-1). Speed was measured using a tachometer and EE was measured by indirect calorimetry. Results showed that the Nike+ device overestimated the speed of level walking at 55 m·min(-1) by 20%, underestimated the speed of level walking at 107 m·min(-1) by 12%, but closely estimated the speed of level walking at 82 m·min(-1) and level running at all speeds (p<0.05). Similar results were found for distance. The Nike+ device overestimated the EE of level walking by 18-37%, but closely estimated the EE of level running (p<0.05). In conclusion, the Nike+ in-shoe device provided reasonable estimates of speed and distance during level running at the three speeds tested in this study. However, it overestimated EE during level walking and it did not detect the increased cost of inclined locomotion.

  4. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life-usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
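
    As a concrete picture of the under-determined problem and of the conventional subset-selection baseline that the innovation is contrasted with, the sketch below scores every candidate subset of health parameters by its theoretical mean-squared reconstruction error under a simple linear measurement model. The matrices, dimensions and noise levels are illustrative assumptions, and the technique described above constructs tuners as linear combinations rather than plain subsets.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
p, m = 6, 3                              # health parameters vs available sensors (m < p)
H = rng.normal(size=(m, p))              # sensitivity of sensor readings to health parameters
P = np.diag(rng.uniform(0.5, 2.0, p))    # prior covariance of health-parameter shifts
R = 0.05 * np.eye(m)                     # sensor noise covariance

def subset_mse(idx):
    """Theoretical MSE of reconstructing all p parameters when only idx are tuned."""
    Hs = H[:, idx]
    A = np.zeros((p, m))
    A[idx, :] = np.linalg.pinv(Hs)       # least-squares estimate of the tuned subset
    M = np.eye(p) - A @ H                # residual error from parameters that are not tuned
    return np.trace(M @ P @ M.T + A @ R @ A.T)

best = min(combinations(range(p), m), key=lambda s: subset_mse(list(s)))
print("best tuner subset:", best, "MSE:", subset_mse(list(best)))
```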

  5. Test Experience Effects in Longitudinal Comparisons of Adult Cognitive Functioning

    ERIC Educational Resources Information Center

    Salthouse, Timothy

    2015-01-01

    It is widely recognized that experience with cognitive tests can influence estimates of cognitive change. Prior research has estimated experience effects at the level of groups by comparing the performance of a group of participants tested for the second time with the performance of a different group of participants at the same age tested for the…

  6. The Relationship between Student Transfers and District Academic Performance: Accounting for Feedback Effects

    ERIC Educational Resources Information Center

    Welsch, David M.; Zimmer, David M.

    2015-01-01

    This paper draws attention to a subtle, but concerning, empirical challenge common in panel data models that seek to estimate the relationship between student transfers and district academic performance. Specifically, if such models have a dynamic element, and if the estimator controls for unobserved traits by including district-level effects,…

  7. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.
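
    For readers who want to experiment, a dense pixel-level flow field can be obtained with an off-the-shelf routine such as OpenCV's Farneback implementation; this is not the variational formulation studied in the paper, just a quick way to compute motion on two frames. The synthetic frames below are placeholders, and values on such a toy input are only approximate.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (synthetic here; in practice read from video)
prev_gray = np.zeros((120, 160), np.uint8)
next_gray = np.zeros((120, 160), np.uint8)
prev_gray[40:60, 40:60] = 255
next_gray[40:60, 45:65] = 255            # the bright square moved 5 px to the right

# Dense flow: one (dx, dy) vector per pixel
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print("median horizontal flow in the moved region (expect a positive value):",
      np.median(flow[40:60, 45:60, 0]))
```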

  8. Is objective and accurate cognitive assessment across the menstrual cycle possible? A feasibility study

    PubMed Central

    Neill, Jo; Scally, Andy; Tuffnell, Derek; Marshall, Kay

    2015-01-01

    Objectives: Variation in plasma hormone levels influences the neurobiology of brain regions involved in cognition and emotion processing. Fluctuations in hormone levels across the menstrual cycle could therefore alter cognitive performance and wellbeing; reports have provided conflicting results, however. The aim of this study was to assess whether objective assessment of cognitive performance and self-reported wellbeing during the follicular and luteal phases of the menstrual cycle is feasible and to investigate the possible reasons for variation in previously reported effects. Methods: The Cambridge Neuropsychological Test Automated Battery and Edinburgh Postnatal Depression Scale were used to assess the cognitive performance and wellbeing of 12 women. Data were analysed by self-reported and hormone-estimated phases of the menstrual cycle. Results: Recruitment to the study and assessment of cognition and wellbeing were without issue. Plasma hormone and peptide estimation showed substantial individual variation and suggested inaccuracy in self-reported menstrual phase estimation. Conclusion: Objective assessment of cognitive performance and self-assessed wellbeing across the menstrual cycle is feasible. Grouping data by hormonal profile rather than by self-reported phase estimation may influence phase-mediated results. Future studies should use plasma hormone and peptide profiles to estimate cycle phase and group data for analyses. PMID:26770760

  9. The impact of cognitive load on reward evaluation.

    PubMed

    Krigolson, Olave E; Hassall, Cameron D; Satel, Jason; Klein, Raymond M

    2015-11-19

    The neural systems that afford our ability to evaluate rewards and punishments are impacted by a variety of external factors. Here, we demonstrate that increased cognitive load reduces the functional efficacy of a reward processing system within the human medial-frontal cortex. In our paradigm, two groups of participants used performance feedback to estimate the exact duration of one second while electroencephalographic (EEG) data were recorded. Prior to performing the time estimation task, both groups were instructed to keep their eyes still and avoid blinking, in line with well-established EEG protocol. However, during performance of the time-estimation task, one of the two groups was provided with trial-to-trial feedback about their performance on the time-estimation task and their eye movements to induce a higher level of cognitive load relative to participants in the other group, who were solely provided with feedback about the accuracy of their temporal estimates. In line with previous work, we found that the higher level of cognitive load reduced the amplitude of the feedback-related negativity, a component of the human event-related brain potential associated with reward evaluation within the medial-frontal cortex. Importantly, our results provide further support for the finding that increased cognitive load reduces the functional efficacy of a neural system associated with reward processing. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Estimation of gingival crevicular blood glucose level for the screening of diabetes mellitus: A simple yet reliable method.

    PubMed

    Parihar, Sarita; Tripathi, Richik; Parihar, Ajit Vikram; Samadi, Fahad M; Chandra, Akhilesh; Bhavsar, Neeta

    2016-01-01

    This study was designed to assess the reliability of blood glucose level estimation in gingival crevicular blood (GCB) for screening diabetes mellitus. Seventy patients were included in the study, and a randomized, double-blind clinical trial was performed. Among these, 39 patients were diabetic (including 4 patients who were diagnosed during the study) and the remaining 31 patients were non-diabetic. GCB obtained during routine periodontal examination was analyzed with a glucometer to determine the blood glucose level. The same patients underwent finger-stick blood (FSB) glucose level estimation with a glucometer and venous blood (VB) glucose level estimation with a standardized laboratory method, as per American Diabetes Association guidelines. All three blood glucose levels were compared. Periodontal parameters were also recorded, including the gingival index (GI) and probing pocket depth (PPD). A strong positive correlation (r) was observed between GCB glucose levels and those of FSB and VB, with values of 0.986 and 0.972 in the diabetic group and 0.820 and 0.721 in the non-diabetic group. The mean values of GI and PPD were also higher in the diabetic group than in the non-diabetic group, with a statistically significant difference (p < 0.005). GCB can be reliably used to measure the blood glucose level, as its values were closest to the glucose levels estimated from VB. The technique is safe, easy to perform and non-invasive for the patient, and it can increase the frequency of diagnosing diabetes during routine periodontal therapy.

  11. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramér-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
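
    The Kasai estimator itself is compact: the mean Doppler frequency is the phase of the lag-one autocorrelation divided by 2*pi times the sampling interval. A small numpy sketch follows, with signal parameters invented for illustration.

```python
import numpy as np

def kasai_frequency(x, dt):
    """Kasai lag-one autocorrelation estimate of the mean Doppler frequency.

    x  : complex-valued signal sampled at interval dt (e.g., repeated A-scans at one depth)
    dt : sampling interval in seconds
    """
    r1 = np.sum(x[1:] * np.conj(x[:-1]))      # lag-one autocorrelation
    return np.angle(r1) / (2.0 * np.pi * dt)

# Example: noisy complex exponential at 2 kHz sampled at 50 kHz
rng = np.random.default_rng(0)
dt, f_true, n = 1 / 50e3, 2e3, 256
t = np.arange(n) * dt
x = np.exp(2j * np.pi * f_true * t) + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
print(kasai_frequency(x, dt))                 # close to 2000 Hz
```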

  12. Air Traffic Demand Estimates for 1995

    DOT National Transportation Integrated Search

    1975-04-01

    The forecasts provide a range of reasonable 1995 activity levels for analyzing and comparing cost and performance characteristics of future air traffic management system concept alternatives. High and low estimates of the various demand measures are ...

  13. Estimating the parasitaemia of Plasmodium falciparum: experience from a national EQA scheme

    PubMed Central

    2013-01-01

    Background: To examine performance of the identification and estimation of percentage parasitaemia of Plasmodium falciparum in stained blood films distributed in the UK National External Quality Assessment Scheme (UKNEQAS) Blood Parasitology Scheme. Methods: Analysis of performance for the diagnosis and estimation of the percentage parasitaemia of P. falciparum in Giemsa-stained thin blood films was made over a 15-year period to look for trends in performance. Results: An average of 25% of participants failed to estimate the percentage parasitaemia, 17% overestimated and 8% underestimated, whilst 5% misidentified the malaria species present. Conclusions: Although the results achieved by participants for other blood parasites have shown an overall improvement, the level of performance for estimation of the parasitaemia of P. falciparum remains unchanged over 15 years. Possible reasons include incorrect calculation, not examining the correct part of the film and not examining an adequate number of microscope fields. PMID:24261625
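
    For reference, the calculation being assessed is the standard thin-film one: percentage parasitaemia is the number of parasitised red cells divided by the total number of red cells counted across an adequate number of fields, multiplied by 100. A trivial example with made-up counts:

```python
# Thin-film percentage parasitaemia: parasitised red cells per red cells counted
parasitised_rbc = 47          # hypothetical count across the fields examined
total_rbc_counted = 2000      # total red cells counted across an adequate number of fields
parasitaemia_pct = 100.0 * parasitised_rbc / total_rbc_counted
print(f"{parasitaemia_pct:.2f}% parasitaemia")   # 2.35%
```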

  14. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.

  15. Estimation and correlation of salivary thiocyanate levels in periodontally healthy subjects, smokers, nonsmokers, and gutka-chewers with chronic periodontitis.

    PubMed

    Hegde, Shashikanth; Chatterjee, Elashri; Rajesh, K S; Kumar, M S Arun

    2016-01-01

    This study was conducted to estimate and correlate salivary thiocyanate (SCN) levels in periodontally healthy subjects and in smokers, nonsmokers, and gutka-chewers with chronic periodontitis. The study population consisted of 40 systemically healthy subjects aged 18-55 years, divided into four groups: controls, smokers, nonsmokers, and gutka-chewers with chronic periodontitis. The gingival index (GI) (Loe and Silness, 1963), probing depth (PD), and clinical attachment loss were assessed. Estimation of SCN was performed by ultraviolet spectrophotometry at a wavelength of 447 nm. Statistical analysis was performed using one-way ANOVA with Welch's test and Pearson's correlation test in SPSS version 17. Results showed a statistically significant increase in SCN levels in smokers compared with gutka-chewers with chronic periodontitis, controls, and nonsmokers with chronic periodontitis. Significantly higher PD and loss of attachment were seen in the smokers group compared with the other groups. A negative correlation was observed between GI and thiocyanate levels. The present study revealed a significant increase in SCN levels in smokers with periodontitis compared to nonsmokers.

  16. Improving RNA-Seq expression estimates by correcting for fragment bias

    PubMed Central

    2011-01-01

    The biochemistry of RNA-Seq library preparation results in cDNA fragments that are not uniformly distributed within the transcripts they represent. This non-uniformity must be accounted for when estimating expression levels, and we show how to perform the needed corrections using a likelihood based approach. We find improvements in expression estimates as measured by correlation with independently performed qRT-PCR and show that correction of bias leads to improved replicability of results across libraries and sequencing technologies. PMID:21410973

  17. Demosaicking of noisy Bayer-sampled color images with least-squares luma-chroma demultiplexing and noise level estimation.

    PubMed

    Jeon, Gwanggil; Dubois, Eric

    2013-01-01

    This paper adapts the least-squares luma-chroma demultiplexing (LSLCD) demosaicking method to noisy Bayer color filter array (CFA) images. A model is presented for the noise in white-balanced gamma-corrected CFA images. A method to estimate the noise level in each of the red, green, and blue color channels is then developed. Based on the estimated noise parameters, one of a finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data. The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE). Experimental results demonstrate state-of-the-art performance over a wide range of noise levels, with low computational complexity. Many results with several algorithms, noise levels, and images are presented on our companion web site along with software to allow reproduction of our results.

  18. Evaluating detection and estimation capabilities of magnetometer-based vehicle sensors

    NASA Astrophysics Data System (ADS)

    Slater, David M.; Jacyna, Garry M.

    2013-05-01

    In an effort to secure the northern and southern United States borders, MITRE has been tasked with developing Modeling and Simulation (M&S) tools that accurately capture the mapping between algorithm-level Measures of Performance (MOP) and system-level Measures of Effectiveness (MOE) for current/future surveillance systems deployed by the Customs and Border Protection Office of Technology Innovations and Acquisitions (OTIA). This analysis is part of a larger M&S undertaking. The focus is on two MOPs for magnetometer-based Unattended Ground Sensors (UGS). UGS are placed near roads to detect passing vehicles and estimate properties of the vehicle's trajectory such as bearing and speed. The first MOP considered is the probability of detection. We derive probabilities of detection for a network of sensors over an arbitrary number of observation periods and explore how the probability of detection changes when multiple sensors are employed. The performance of UGS is also evaluated based on the level of variance in the estimation of trajectory parameters. We derive the Cramér-Rao bounds for the variances of the estimated parameters in two cases: when no a priori information is known and when the parameters are assumed to be Gaussian with known variances. Sample results show that UGS perform significantly better in the latter case.
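
    The network-level detection probability alluded to above follows directly from the per-sensor probabilities if detections are assumed independent across sensors and observation periods; a small sketch with hypothetical values:

```python
import numpy as np

def network_detection_probability(p_sensors, n_periods=1):
    """Probability that at least one sensor detects the vehicle in at least one period.

    p_sensors : per-sensor single-period detection probabilities
    n_periods : number of independent observation periods
    Assumes detections are independent across sensors and periods.
    """
    p_miss_once = np.prod(1.0 - np.asarray(p_sensors))   # all sensors miss in one period
    return 1.0 - p_miss_once ** n_periods

print(network_detection_probability([0.6, 0.7], n_periods=1))   # 0.88
print(network_detection_probability([0.6, 0.7], n_periods=3))   # ~0.998
```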

  19. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information.

    PubMed

    Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units oftentimes hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require a precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
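
    As a sketch of what an area-level model does, the following Python snippet applies a Fay-Herriot-type EBLUP: direct estimates are shrunk toward a regression on auxiliary (e.g., LiDAR) covariates, with more shrinkage where the sampling variance is large. The between-unit variance is treated here as a plug-in value rather than estimated, and all inputs are invented; it is an illustration of the general approach, not the models fitted in the paper.

```python
import numpy as np

def area_level_eblup(y_direct, D, X, sigma2_v):
    """Area-level EBLUP: shrink direct estimates toward a regression on auxiliary data.

    y_direct : direct (field) estimates per management unit
    D        : sampling variances of the direct estimates
    X        : auxiliary covariates per unit, including an intercept column
    sigma2_v : between-unit model variance (plug-in here; normally estimated, e.g., by REML)
    """
    w = 1.0 / (sigma2_v + D)                              # precision weights
    beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y_direct)
    gamma = sigma2_v / (sigma2_v + D)                     # per-unit shrinkage factor
    return gamma * y_direct + (1.0 - gamma) * (X @ beta)

rng = np.random.default_rng(0)
n = 8
X = np.column_stack([np.ones(n), rng.uniform(10, 30, n)])   # intercept + one LiDAR metric
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 5, n)               # hypothetical direct estimates
D = rng.uniform(4, 25, n)                                   # large D means more shrinkage
print(area_level_eblup(y, D, X, sigma2_v=9.0))
```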

  20. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information

    PubMed Central

    Monleon, Vicente J.; Temesgen, Hailemariam; Ford, Kevin R.

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units oftentimes hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require a precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey’s height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates. PMID:29216290

  1. Trajectories of physical function prior to death and brain neuropathology in a community-based cohort: the ACT study.

    PubMed

    LaCroix, Andrea Z; Hubbard, Rebecca A; Gray, Shelly L; Anderson, Melissa L; Crane, Paul K; Sonnen, Joshua A; Zaslavsky, Oleg; Larson, Eric B

    2017-11-02

    Mechanisms linking cognitive and physical functioning in older adults are unclear. We sought to determine whether brain pathological changes relate to the level or rate of physical performance decline. This study analyzed data from 305 participants in the autopsy subcohort of the prospective Adult Changes in Thought (ACT) study. Participants were aged 65+ and free of dementia at enrollment. Physical performance was measured at baseline and every two years using the Short Physical Performance Battery (SPPB). Data from 3174 ACT participants with ≥2 SPPB measurements were used to estimate two physical function measures: 1) rate of SPPB decline defined by intercept and slope; and 2) estimated SPPB 5 years prior to death. Neuropathology findings at autopsy included neurofibrillary tangles (Braak stage), neuritic plaques (CERAD level), presence of amyloid angiopathy, microinfarcts, cystic infarcts, and Lewy bodies. Associations (adjusted for sex, age, body mass index and education) between dichotomized neuropathologic outcomes and SPPB measures were estimated using modified Poisson regression with inverse probability weights (IPW) estimated via Generalized Estimating Equations (GEE). Relative risks for the 20th, 40th, and 60th percentiles (lowest levels and highest rates of decline) relative to the 80th percentile (highest level and lowest rate of decline) were calculated. Decedents with the least vs. most SPPB decline (slope >75th vs. <25th percentiles) had higher SPPB scores, and were more likely to be male, older, have higher education, and exercise regularly at baseline. No significant associations were observed between neuropathology findings and rate of SPPB decline. Lower predicted SPPB scores 5 years prior to death were associated with higher risk of microinfarcts (RR = 3.08, 95% confidence interval (CI) 0.93-1.07 for the 20th vs. 80th percentiles of SPPB) and significantly higher risk of cystic infarcts (RR = 2.72, 95% CI 1.45-5.57 for the 20th vs. 80th percentiles of SPPB). Cystic infarcts and microinfarcts, but not neuropathology findings of Alzheimer's disease, were related to physical performance levels five years before death. No pathology findings were associated with rates of physical performance decline. Physical function levels in the years prior to death may be affected by vascular brain pathologies.

  2. Non-contact estimation of heart rate and oxygen saturation using ambient light.

    PubMed

    Bal, Ufuk

    2015-01-01

    We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of RGB color space are used. We used a dual-tree complex wavelet transform based denoising algorithm to reduce artifacts (e.g. artificial lighting, movement, etc.). Most of the previous work on skin-color-based HR estimation performed experiments with healthy volunteers and focused on solving motion artifacts. In addition to healthy volunteers, we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy for estimating the mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimations and the commercial oximeter readings.
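
    A very reduced illustration of the general idea (not the orthogonal-RGB, wavelet-denoised method of this paper) is to take the per-frame mean of a colour channel over the face region and read the heart rate off the dominant spectral peak in a plausible band; the trace below is synthetic and all values are for illustration only.

```python
import numpy as np

def estimate_hr_bpm(channel_means, fps):
    """Rough HR estimate from per-frame mean channel values of a face region.

    Detrend, then take the dominant spectral peak within a plausible HR band.
    """
    sig = channel_means - np.mean(channel_means)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)            # 42-240 beats per minute
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak

# Synthetic 30 s trace at 30 fps with a 1.2 Hz (72 bpm) pulse component plus noise
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 30)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.3, t.size)
print(estimate_hr_bpm(trace, fps=30))                 # close to 72
```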

  3. Non-contact estimation of heart rate and oxygen saturation using ambient light

    PubMed Central

    Bal, Ufuk

    2014-01-01

    We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of RGB color space are used. We used a dual-tree complex wavelet transform based denoising algorithm to reduce artifacts (e.g. artificial lighting, movement, etc.). Most of the previous work on skin-color-based HR estimation performed experiments with healthy volunteers and focused on solving motion artifacts. In addition to healthy volunteers, we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy for estimating the mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimations and the commercial oximeter readings. PMID:25657877

  4. Equity and geography: the case of child mortality in Papua New Guinea.

    PubMed

    Bauze, Anna E; Tran, Linda N; Nguyen, Kim-Huong; Firth, Sonja; Jimenez-Soto, Eliana; Dwyer-Lindgren, Laura; Hodge, Andrew; Lopez, Alan D

    2012-01-01

    Recent assessments show continued decline in child mortality in Papua New Guinea (PNG), yet complete subnational analyses remain rare. This study aims to estimate under-five mortality in PNG at national and subnational levels to examine the importance of geographical inequities in health outcomes and track progress towards Millennium Development Goal (MDG) 4. We performed retrospective data validation of the Demographic and Health Survey (DHS) 2006 using 2000 Census data, then applied advanced indirect methods to estimate under-five mortality rates between 1976 and 2000. The DHS 2006 was found to be unreliable. Hence we used the 2000 Census to estimate under-five mortality rates at national and subnational levels. During the period under study, PNG experienced a slow reduction in national under-five mortality from approximately 103 to 78 deaths per 1,000 live births. Subnational analyses revealed significant disparities between rural and urban populations as well as inter- and intra-regional variations. Some of the provinces that performed the best (worst) in terms of under-five mortality included the districts that performed worst (best), with district-level under-five mortality rates correlating strongly with poverty levels and access to services. The evidence from PNG demonstrates substantial within-province heterogeneity, suggesting that under-five mortality needs to be addressed at subnational levels. This is especially relevant in countries, like PNG, where responsibility for health services is devolved to provinces and districts. This study presents the first comprehensive estimates of under-five mortality at the district level for PNG. The results demonstrate that for countries that rely on few data sources even greater importance must be given to the quality of future population surveys and to the exploration of alternative options of birth and death surveillance.

  5. Interval estimation of the overall treatment effect in a meta-analysis of a few small studies with zero events.

    PubMed

    Pateras, Konstantinos; Nikolakopoulos, Stavros; Mavridis, Dimitris; Roes, Kit C B

    2018-03-01

    When a meta-analysis consists of a few small trials that report zero events, accounting for heterogeneity in the (interval) estimation of the overall effect is challenging. Typically, we predefine the meta-analytical methods to be employed. In practice, the data pose restrictions that lead to deviations from the pre-planned analysis, such as the presence of zero events in at least one study arm. We aim to explore the behaviour of heterogeneity estimators in estimating the overall effect across different levels of sparsity of events. We performed a simulation study consisting of two evaluations. We considered an overall comparison of estimators unconditional on the number of observed zero cells and an additional comparison conditional on the number of observed zero cells. Estimators that were modestly robust when (interval) estimating the overall treatment effect across a range of heterogeneity assumptions were the Sidik-Jonkman, Hartung-Makambi and improved Paule-Mandel estimators. The relative performance of estimators did not materially differ between making a predefined or data-driven choice. Our investigations confirmed that heterogeneity in such settings cannot be estimated reliably. Estimators whose performance depends strongly on the presence of heterogeneity should be avoided. The choice of estimator does not need to depend on whether or not zero cells are observed.

  6. Do Accountability and Voucher Threats Improve Low-Performing Schools? NBER Working Paper No. 11597

    ERIC Educational Resources Information Center

    Figlio, David N.; Rouse, Cecilia

    2005-01-01

    In this paper we study the effects of the threat of school vouchers and school stigma in Florida on the performance of "low-performing" schools using student-level data from a subset of districts. Estimates of the change in school-level high-stakes test scores from the first year of the reform are consistent with the early results used…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaleel, R.; Mehta, S.; Nichols, W. E.

    This annual review provides the projected dose estimates of radionuclide inventories disposed of in the active 200 West Area Low-Level Burial Grounds (LLBGs) since September 26, 1988. These estimates are calculated using the original dose methodology developed in the performance assessment (PA) analysis (WHC-EP-0645).

  8. Potential-scour assessments and estimates of scour depth using different techniques at selected bridge sites in Missouri

    USGS Publications Warehouse

    Huizinga, Richard J.; Rydlund, Jr., Paul H.

    2004-01-01

    The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths compared to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that also had received a rapid estimation assessment, providing a broad database to compare the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables that were common to both of the assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by the rapid estimation method (Level 1+) for abutment scour, but there was substantial variability in the estimates and several substantial underestimations.

  9. Is the formula of Traub still up to date in antemortem blood glucose level estimation?

    PubMed

    Palmiere, Cristian; Sporkert, Frank; Vaucher, Paul; Werner, Dominique; Bardy, Daniel; Rey, François; Lardi, Christelle; Brunel, Christophe; Augsburger, Marc; Mangin, Patrice

    2012-05-01

    According to the hypothesis of Traub, also known as the 'formula of Traub', postmortem values of glucose and lactate found in the cerebrospinal fluid or vitreous humor are considered indicators of antemortem blood glucose levels. However, because the lactate concentration increases in the vitreous and cerebrospinal fluid after death, some authors postulated that using the sum value to estimate antemortem blood glucose levels could lead to an overestimation of the cases of glucose metabolic disorders with fatal outcomes, such as diabetic ketoacidosis. The aim of our study, performed on 470 consecutive forensic cases, was to ascertain the advantages of the sum value to estimate antemortem blood glucose concentrations and, consequently, to rule out fatal diabetic ketoacidosis as the cause of death. Other biochemical parameters, such as blood 3-beta-hydroxybutyrate, acetoacetate, acetone, glycated haemoglobin and urine glucose levels, were also determined. In addition, postmortem native CT scan, autopsy, histology, neuropathology and toxicology were performed to confirm diabetic ketoacidosis as the cause of death. According to our results, the sum value does not add any further information for the estimation of antemortem blood glucose concentration. The vitreous glucose concentration appears to be the most reliable marker to estimate antemortem hyperglycaemia and, along with the determination of other biochemical markers (such as blood acetone and 3-beta-hydroxybutyrate, urine glucose and glycated haemoglobin), to confirm diabetic ketoacidosis as the cause of death.
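
    For orientation, the 'sum value' discussed here is usually quoted in the form below (vitreous or cerebrospinal fluid glucose plus half the lactate concentration); the exact constants should be checked against the original reference, so treat this as a commonly cited approximation rather than a definitive statement of the formula.

```latex
\[
\text{sum value} \;=\; [\text{glucose}] \;+\; \tfrac{1}{2}\,[\text{lactate}]
\qquad \text{(both in mg/dL, vitreous humor or CSF)}
\]
```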

  10. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
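
    A simplified, single-detection-limit sketch of the log-probability regression idea follows: log concentrations of uncensored observations are regressed on their normal scores, and censored observations are imputed from the fitted line before summary statistics are computed. Plotting-position conventions and the handling of multiple reporting limits vary between implementations, so the details below are assumptions, not the exact procedure evaluated in the paper.

```python
import numpy as np
from scipy.stats import norm

def ros_summary(values, detect_limit):
    """Estimate mean/sd of censored data via log-probability regression (simplified).

    values       : measured concentrations; entries below detect_limit are censored
    detect_limit : single reporting limit (assumes all censored values rank lowest)
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)      # Blom-type plotting positions
    z = norm.ppf(pp)                                      # normal scores
    uncens = x >= detect_limit
    # Fit log(conc) = a + b*z on the uncensored observations only
    b, a = np.polyfit(z[uncens], np.log(x[uncens]), 1)
    filled = x.copy()
    filled[~uncens] = np.exp(a + b * z[~uncens])          # impute censored values from the line
    return filled.mean(), filled.std(ddof=1)

rng = np.random.default_rng(0)
true = rng.lognormal(mean=1.0, sigma=0.8, size=40)
dl = 2.0
observed = np.where(true < dl, dl * 0.5, true)            # values reported "<DL" coded as DL/2
print(ros_summary(observed, dl))
```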

  11. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  12. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties.

    PubMed

    Xiaopeng, Q I; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly-or 30-day-basis. Most reported temperature estimates were calculated using ArcGIS, relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v 9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R², mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R² range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; However, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.

  13. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    PubMed

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using an adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  14. Variations in Carboxyhaemoglobin Levels in Smokers

    PubMed Central

    Castleden, C. M.; Cole, P. V.

    1974-01-01

    Three experiments on smokers have been performed to determine variations in blood levels of carboxyhaemoglobin (COHb) throughout the day and night and whether a random measurement of COHb gives a true estimation of a smoker's mean COHb level. In the individual smoker the COHb level does not increase gradually during the day but is kept within relatively narrow limits. Moderately heavy smokers rise in the morning with a substantially raised COHb level because the half life of COHb is significantly longer during sleep than during the day. Women excrete their carbon monoxide faster than men. A random COHb estimation gives a good indication of the mean COHb level of an individual. PMID:4441877

  15. Development of a Compendium of Energy Expenditures for Youth

    PubMed Central

    Ridley, Kate; Ainsworth, Barbara E; Olds, Tim S

    2008-01-01

    Background This paper presents a Compendium of Energy Expenditures for use in scoring physical activity questionnaires and estimating energy expenditure levels in youth. Method/Results Modeled after the adult Compendium of Physical Activities, the Compendium of Energy Expenditures for Youth contains a list of over 200 activities commonly performed by youth and their associated MET intensity levels. A review of existing data collected on the energy cost of youth performing activities was undertaken and incorporated into the compendium. About 35% of the activity MET levels were derived from energy cost data measured in youth and the remaining MET levels estimated from the adult compendium. Conclusion The Compendium of Energy Expenditures for Youth is useful to researchers and practitioners interested in identifying physical activity and energy expenditure values in children and adolescents in a variety of settings. PMID:18782458
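
    When compendium MET values are applied to questionnaire data, energy expenditure is commonly approximated as MET multiplied by body mass in kilograms and duration in hours (1 MET is roughly 1 kcal per kg per hour). The MET value and subject below are invented for illustration; this is a common approximation, not a formula from the paper.

```python
def activity_kcal(met, body_mass_kg, duration_min):
    """Approximate energy expenditure: 1 MET ~ 1 kcal per kg of body mass per hour."""
    return met * body_mass_kg * (duration_min / 60.0)

# A 45 kg adolescent playing basketball (compendium MET value assumed ~6.5) for 30 min
print(activity_kcal(6.5, 45, 30))   # ~146 kcal
```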

  16. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    PubMed Central

    Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
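
    A minimal sketch of the two-stage route discussed above: per-study Wald-ratio IV estimates with first-order delta-method variances (ignoring the covariance between the gene-outcome and gene-exposure associations), pooled by inverse-variance weighting. This is a simplification relative to the estimators actually compared in the paper, and the study values are made up.

```python
import numpy as np

def wald_ratio(beta_zy, se_zy, beta_zx, se_zx):
    """Per-study IV estimate and its first-order delta-method variance."""
    est = beta_zy / beta_zx
    var = se_zy**2 / beta_zx**2 + beta_zy**2 * se_zx**2 / beta_zx**4
    return est, var

def fixed_effect_pool(estimates, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its standard error."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(estimates)) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# Three hypothetical studies: (gene-outcome beta, SE, gene-exposure beta, SE)
studies = [(0.10, 0.03, 0.25, 0.04), (0.08, 0.05, 0.22, 0.05), (0.12, 0.04, 0.28, 0.03)]
ests, vars_ = zip(*(wald_ratio(*s) for s in studies))
print(fixed_effect_pool(ests, vars_))
```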

  17. Guidelines for a priori grouping of species in hierarchical community models

    USGS Publications Warehouse

    Pacifici, Krishna; Zipkin, Elise; Collazo, Jaime; Irizarry, Julissa I.; DeWan, Amielle A.

    2014-01-01

    Recent methodological advances permit the estimation of species richness and occurrences for rare species by linking species-level occurrence models at the community level. The value of such methods is underscored by the ability to examine the influence of landscape heterogeneity on species assemblages at large spatial scales. A salient advantage of community-level approaches is that parameter estimates for data-poor species are more precise as the estimation process borrows from data-rich species. However, this analytical benefit raises a question about the degree to which inferences are dependent on the implicit assumption of relatedness among species. Here, we assess the sensitivity of community/group-level metrics, and individual-level species inferences given various classification schemes for grouping species assemblages using multispecies occurrence models. We explore the implications of these groupings on parameter estimates for avian communities in two ecosystems: tropical forests in Puerto Rico and temperate forests in northeastern United States. We report on the classification performance and extent of variability in occurrence probabilities and species richness estimates that can be observed depending on the classification scheme used. We found estimates of species richness to be most precise and to have the best predictive performance when all of the data were grouped at a single community level. Community/group-level parameters appear to be heavily influenced by the grouping criteria, but were not driven strictly by total number of detections for species. We found different grouping schemes can provide an opportunity to identify unique assemblage responses that would not have been found if all of the species were analyzed together. We suggest three guidelines: (1) classification schemes should be determined based on study objectives; (2) model selection should be used to quantitatively compare different classification approaches; and (3) sensitivity of results to different classification approaches should be assessed. These guidelines should help researchers apply hierarchical community models in the most effective manner.

  18. Pre-breeding blood urea nitrogen concentration and reproductive performance of Bonsmara heifers within different management systems.

    PubMed

    Tshuma, Takula; Holm, Dietmar Erik; Fosgate, Geoffrey Theodore; Lourens, Dirk Cornelius

    2014-08-01

    This study investigated the association between pre-breeding blood urea nitrogen (BUN) concentration and reproductive performance of beef heifers within different management systems in South Africa. Bonsmara heifers (n = 369) from five herds with different estimated levels of nitrogen intake during the month prior to the commencement of the breeding season were sampled in November and December 2010 to determine BUN concentrations. Body mass, age, body condition score (BCS) and reproductive tract score (RTS) were recorded at study enrolment. Trans-rectal ultrasound and/or palpation was performed 4-8 weeks after a 3-month breeding season to estimate the stage of pregnancy. Days to pregnancy (DTP) was defined as the number of days from the start of the breeding season until the estimated conception date. Logistic regression and Cox proportional hazards survival analysis were performed to estimate the association of pre-breeding BUN concentration with subsequent pregnancy and DTP, respectively. After stratifying for herd and adjusting for age, heifers with relatively higher pre-breeding BUN concentration took longer to become pregnant when compared to those with relatively lower BUN concentration (P = 0.011). In the herd with the highest estimated nitrogen intake (n = 143), heifers with relatively higher BUN were less likely to become pregnant (P = 0.013) and if they did, it was only later during the breeding season (P = 0.017), after adjusting for body mass. These associations were not present in the herd (n = 106) with the lowest estimated nitrogen intake (P > 0.500). It is concluded that Bonsmara heifers with relatively higher pre-breeding BUN concentration, might be at a disadvantage because of this negative impact on reproductive performance, particularly when the production system includes high levels of nitrogen intake.

  19. Inertial Measurements for Aero-assisted Navigation (IMAN)

    NASA Technical Reports Server (NTRS)

    Jah, Moriba; Lisano, Michael; Hockney, George

    2007-01-01

    IMAN is a Python tool that provides inertial sensor-based estimates of spacecraft trajectories within an atmospheric influence. It provides Kalman filter-derived spacecraft state estimates based upon data collected onboard, and is shown to perform at a level comparable to the conventional methods of spacecraft navigation in terms of accuracy and at a higher level with regard to the availability of results immediately after completion of an atmospheric drag pass.

  20. Performance of dense digital surface models based on image matching in the estimation of plot-level forest variables

    NASA Astrophysics Data System (ADS)

    Nurminen, Kimmo; Karjalainen, Mika; Yu, Xiaowei; Hyyppä, Juha; Honkavaara, Eija

    2013-09-01

    Recent research results have shown that the performance of digital surface model extraction using novel high-quality photogrammetric images and image matching is a highly competitive alternative to laser scanning. In this article, we proceed to compare the performance of these two methods in the estimation of plot-level forest variables. Dense point clouds extracted from aerial frame images were used to estimate the plot-level forest variables needed in a forest inventory covering 89 plots. We analyzed images with 60% and 80% forward overlaps and used test plots with off-nadir angles of between 0° and 20°. When compared to reference ground measurements, the airborne laser scanning (ALS) data proved to be the most accurate: it yielded root mean square error (RMSE) values of 6.55% for mean height, 11.42% for mean diameter, and 20.72% for volume. When we applied a forward overlap of 80%, the corresponding results from aerial images were 6.77% for mean height, 12.00% for mean diameter, and 22.62% for volume. A forward overlap of 60% resulted in slightly deteriorated RMSE values of 7.55% for mean height, 12.20% for mean diameter, and 22.77% for volume. According to our results, the use of higher forward overlap produced only slightly better results in the estimation of these forest variables. Additionally, we found that the estimation accuracy was not significantly impacted by the increase in the off-nadir angle. Our results confirmed that digital aerial photographs were about as accurate as ALS in forest resources estimation as long as a terrain model was available.
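
    The accuracy figures above are relative RMSEs, i.e. the RMSE of the plot-level predictions expressed as a percentage of the mean field-measured value. A minimal sketch of that calculation, with hypothetical plot values:

```python
# Sketch: relative RMSE (% of the field-measured mean), the accuracy measure
# quoted for the plot-level forest variables. Values are hypothetical.
import numpy as np

observed = np.array([210.0, 180.0, 260.0, 150.0, 300.0])   # e.g. volume, m3/ha
predicted = np.array([195.0, 190.0, 240.0, 170.0, 320.0])

rmse = np.sqrt(np.mean((predicted - observed) ** 2))
rel_rmse = 100.0 * rmse / observed.mean()
print(f"RMSE = {rmse:.1f}, relative RMSE = {rel_rmse:.1f}%")
```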

  1. Validation of the work and health interview.

    PubMed

    Stewart, Walter F; Ricci, Judith A; Leotta, Carol; Chee, Elsbeth

    2004-01-01

    Instruments that measure the impact of illness on work do not usually provide a measure that can be directly translated into lost hours or costs. We describe the validation of the Work and Health Interview (WHI), a questionnaire that provides a measure of lost productive time (LPT) from work absence and reduced performance at work. A sample (n = 67) of inbound phone call agents was recruited for the study. Validity of the WHI was assessed over a 2-week period in reference to workplace data (i.e. absence time, time away from call station and electronic continuous performance) and repeated electronic diary data (n = 48) obtained approximately eight times a day to estimate time not working (i.e. a component of reduced performance). The mean (median) missed work time estimate for any reason was 11 (8.0) and 12.9 (8.0) hours in a 2-week period from the WHI and workplace data, respectively, with a Pearson's (Spearman's) correlation of 0.84 (0.76). The diary-based mean (median) estimate of time not working while at work was 3.9 (2.8) hours compared with the WHI estimate of 5.7 (3.2) hours with a Pearson's (Spearman's) correlation of 0.19 (0.33). The 2-week estimate of total productive time from the diary was 67.2 hours compared with 67.8 hours from the WHI, with a Pearson's (Spearman's) correlation of 0.50 (0.46). At a population level, the WHI provides an accurate estimate of missed time from work and total productive time when compared with workplace and diary estimates. At an individual level, the WHI measure of total missed time, but not reduced performance time, is moderately accurate.

  2. The effects of competition on efficiency of electricity generation: A post-PURPA analysis

    NASA Astrophysics Data System (ADS)

    Jordan, Paula Faye

    1998-10-01

    The central issue of this research is the effects increased market competition has on production efficiency. Specifically, the research focuses upon measuring the relative level of efficiency in the generation of electricity in 1978 and 1993. It is hypothesized that the Public Utility Regulatory Policies Act (PURPA), passed by Congress in 1978, made progress toward achieving its legislative intent of increasing competition, and therefore efficiency, in the generation of electricity. The methodology used to measure levels of efficiency in this research is the stochastic statistical estimator with the functional form of the translog production function. The models are then estimated by maximum likelihood using plant-level data on coal generating units in the U.S. for 1978 and 1993. Results from the estimation of these models indicate that: (a) for the technical efficiency measures, the 1978 data set outperformed the 1993 data set on the OTE and OTE of Fuel measures; (b) the 1993 data set was relatively more efficient in the OTE of Capital and the OTE of Labor when compared to the 1978 data set; (c) the 1993 observations indicated a relatively greater level of efficiency than 1978 in the OAE, OAE of Fuel, and OAE of Capital measures; (d) the OAE of Labor measure supported the 1978 observations as more efficient when compared to the 1993 set of observations; (e) when looking at the top- and bottom-ranked sites within each data set, the results indicated that sites which were top or poor performers for the technical and allocative efficiency measures tended to be top or poor performers for the overall, fuel, and capital measures. The sites that appeared as top or poor performers on the labor measures within the technical and allocative groups were often unique and did not necessarily appear as top or poor performers in the other efficiency measures.

  3. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
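
    A simplified sketch of the log-probability regression idea for a single detection limit is given below: the logarithms of the uncensored concentrations are regressed on their normal scores, the fitted line fills in the censored fraction, and summary statistics are computed from the combined values. Plotting-position choices and the handling of multiple detection limits vary between implementations, so this should be read as an illustration rather than the exact estimator evaluated in the study.

```python
# Simplified sketch of the log-probability regression (regression-on-order-
# statistics) estimator for left-censored concentration data with a single
# detection limit. Uncensored log-concentrations are regressed on normal
# scores; the fitted line fills in the censored fraction before computing
# summary statistics. Blom-type plotting positions are assumed; data are
# hypothetical.
import numpy as np
from scipy import stats

detection_limit = 1.0
detected = np.array([1.2, 1.5, 2.3, 3.1, 4.8, 7.5])  # measured values
n_censored = 4                                        # values reported as <DL
n = detected.size + n_censored

# Plotting positions for all n ranks (censored values occupy the lowest ranks)
ranks = np.arange(1, n + 1)
z = stats.norm.ppf((ranks - 0.375) / (n + 0.25))

# Fit log(concentration) ~ z using the uncensored observations only
z_detected = z[n_censored:]
slope, intercept, *_ = stats.linregress(z_detected, np.log(np.sort(detected)))

# Impute the censored portion from the fitted line and pool with the data
imputed = np.exp(intercept + slope * z[:n_censored])
combined = np.concatenate([imputed, np.sort(detected)])

print("mean  :", combined.mean())
print("std   :", combined.std(ddof=1))
print("median:", np.median(combined))
q75, q25 = np.percentile(combined, [75, 25])
print("IQR   :", q75 - q25)
```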

  4. Toward Robust Estimation of the Components of Forest Population Change

    Treesearch

    Francis A. Roesch

    2014-01-01

    Multiple levels of simulation are used to test the robustness of estimators of the components of change. I first created a variety of spatial-temporal populations based on, but more variable than, an actual forest monitoring data set and then sampled those populations under a variety of sampling error structures. The performance of each of four estimation approaches is...

  5. Relationships between testosterone levels and cognition in patients with Alzheimer disease and nondemented elderly men.

    PubMed

    Seidl, Jennifer N Travis; Massman, Paul J

    2015-03-01

    Previous research suggests that low levels of testosterone may be associated with the development of Alzheimer disease (AD), as well as poorer performance on certain neuropsychological tests and increased risk of depression. This study utilized data from 61 nondemented older men and 68 men with probable AD. Testosterone levels did not differ between the groups. Regression analyses in men with AD revealed that testosterone levels did not significantly predict performance on neuropsychological tests or a measure of depression. Among controls, testosterone levels predicted estimated premorbid verbal IQ and performance on a verbal fluency test. Findings suggest that testosterone is not associated with most neuropsychological test performances in patients with AD. © The Author(s) 2014.

  6. Using satellite observations in performance evaluation for regulatory air quality modeling: Comparison with ground-level measurements

    NASA Astrophysics Data System (ADS)

    Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.

    2012-12-01

    Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite retrievals. Evaluation results are assessed against recommended criteria and peer studies in the literature. Further analysis is conducted, based upon these assessments, to discover likely errors in model inputs and potential deficiencies in the model itself. Correlations as well as differences in input errors and model deficiencies revealed by ground-level measurements versus satellite observations are discussed. Additionally, sensitivity analyses are employed to investigate errors in emission-rate estimates using either ground-level measurements or satellite retrievals, and the results are compared against each other considering observational uncertainties. Recommendations are made for how to effectively utilize satellite retrievals in regulatory air quality modeling.

  7. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties

    PubMed Central

    Xiaopeng, QI; Liang, WEI; BARKER, Laurie; LEKIACHVILI, Akaki; Xingyou, ZHANG

    2015-01-01

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature’s association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly—or 30-day—basis. Most reported temperature estimates were calculated using ArcGIS, relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v 9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R2 range=0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; However, ArcGIS’s merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects. PMID:26167169

  8. Is talk "cheap"? An initial investigation of the equivalence of alcohol purchase task performance for hypothetical and actual rewards.

    PubMed

    Amlung, Michael T; Acker, John; Stojek, Monika K; Murphy, James G; MacKillop, James

    2012-04-01

    Behavioral economic alcohol purchase tasks (APTs) are self-report measures of alcohol demand that assess estimated consumption at escalating levels of price. However, the relationship between estimated performance for hypothetical outcomes and choices for actual outcomes has not been determined. The present study examined both the correspondence between choices for hypothetical and actual outcomes, and the correspondence between estimated alcohol consumption and actual drinking behavior. A collateral goal of the study was to examine the effects of alcohol cues on APT performance. Forty-one heavy-drinking adults (56% men) participated in a human laboratory protocol comprising APTs for hypothetical and actual alcohol and money, an alcohol cue reactivity paradigm, an alcohol self-administration period, and a recovery period. Pearson correlations revealed very high correspondence between APT performance for hypothetical and actual alcohol (ps < 0.001). Estimated consumption on the APT was similarly strongly associated with actual consumption during the self-administration period (r = 0.87, p < 0.001). Exposure to alcohol cues significantly increased subjective craving and arousal and had a trend-level effect on intensity of demand, in spite of notable ceiling effects. Associations among motivational indices were highly variable, suggesting multidimensionality. These results suggest there may be close correspondence both between value preferences for hypothetical alcohol and actual alcohol, and between estimated consumption and actual consumption. Methodological considerations and priorities for future studies are discussed. Copyright © 2011 by the Research Society on Alcoholism.
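
    Alcohol purchase task responses are commonly summarized with observed demand indices such as intensity (consumption at zero or minimal price), breakpoint (the first price that suppresses consumption to zero), Omax (peak expenditure), and Pmax (the price at peak expenditure). The sketch below computes these indices from a hypothetical price-consumption series; it is not the analysis code used in the study.

```python
# Sketch: observed demand indices commonly derived from an alcohol purchase
# task (intensity, breakpoint, Omax, Pmax). Prices and reported drinks are
# hypothetical; this is not the analysis code used in the study.
prices = [0.0, 0.25, 0.50, 1.0, 2.0, 4.0, 8.0, 16.0]       # $ per drink
drinks = [10, 10, 9, 8, 6, 3, 1, 0]                        # estimated consumption

expenditure = [p * q for p, q in zip(prices, drinks)]

intensity = drinks[0]                                       # demand at zero/lowest price
break_point = next((p for p, q in zip(prices, drinks) if q == 0), None)
o_max = max(expenditure)                                    # peak expenditure
p_max = prices[expenditure.index(o_max)]                    # price at peak expenditure

print(f"intensity={intensity}, breakpoint={break_point}, Omax={o_max}, Pmax={p_max}")
```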

  9. Estimating learning outcomes from pre- and posttest student self-assessments: a longitudinal study.

    PubMed

    Schiekirka, Sarah; Reinhardt, Deborah; Beißbarth, Tim; Anders, Sven; Pukrop, Tobias; Raupach, Tobias

    2013-03-01

    Learning outcome is an important measure for overall teaching quality and should be addressed by comprehensive evaluation tools. The authors evaluated the validity of a novel evaluation tool based on student self-assessments, which may help identify specific strengths and weaknesses of a particular course. In 2011, the authors asked 145 fourth-year students at Göttingen Medical School to self-assess their knowledge on 33 specific learning objectives in a pretest and posttest as part of a cardiorespiratory module. The authors compared performance gain calculated from self-assessments with performance gain derived from formative examinations that were closely matched to these 33 learning objectives. Eighty-three students (57.2%) completed the assessment. There was good agreement between performance gain derived from subjective data and performance gain derived from objective examinations (Pearson r=0.78; P<.0001) on the group level. The association between the two measures was much weaker when data were analyzed on the individual level. Further analysis determined a quality cutoff for performance gain derived from aggregated student self-assessments. When using this cutoff, the evaluation tool was highly sensitive in identifying specific learning objectives with favorable or suboptimal objective performance gains. The tool is easy to implement, takes initial performance levels into account, and does not require extensive pre-post testing. By providing valid estimates of actual performance gain obtained during a teaching module, it may assist medical teachers in identifying strengths and weaknesses of a particular course on the level of specific learning objectives.

  10. Study of design constraints on helicopter noise

    NASA Technical Reports Server (NTRS)

    Sternfeld, H., Jr.; Wiedersum, C. W.

    1979-01-01

    A means of estimating the noise generated by a helicopter main rotor using information which is generally available during the preliminary design phase of aircraft development is presented. The method utilizes design charts and tables which do not require an understanding of acoustical theory or computational procedures in order to predict the perceived noise level, A-weighted sound pressure level, or C-weighted sound pressure level of a single hovering rotor. A method for estimating the effective perceived noise level in forward flight is also included. In order to give the designer an assessment of relative rotor performance, which may be traded off against noise, an additional chart for estimating the percentage of available rotor thrust which must be expended in lifting the rotor and drive system is included, as well as an approach for comparing the subjective acceptability of various rotors once the absolute sound pressure levels are predicted.

  11. Performance seeking control (PSC) for the F-15 highly integrated digital electronic control (HIDEC) aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.

    1995-01-01

    The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. However, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus, when the PSC maximum thrust mode is applied, for example, there will be less temperature margin available to be traded for increased thrust.

  12. Geostatistical estimation of forest biomass in interior Alaska combining Landsat-derived tree cover, sampled airborne lidar and field observations

    NASA Astrophysics Data System (ADS)

    Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik

    2018-06-01

    The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area---a key advance to support diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.

  13. Development testing of the advanced photovoltaic solar array

    NASA Technical Reports Server (NTRS)

    Stella, P. M.; Kurland, R. M.

    1991-01-01

    The latest design, fabrication and testing details of a prototype wing are discussed. Estimates of array-level performance are presented as a function of power level and solar cell technology for geosynchronous orbit (GEO) missions and solar electric propulsion missions through the Van Allen radiation belts. Design concepts are discussed that would allow the wing to be self-retractable and restowable. To date all testing has verified the feasibility and mechanical/electrical integrity of the baseline design. The beginning-of-life (BOL) specific power estimate for a nominal 10-kW (BOL) array is about 138 W/kg, with corresponding end-of-life (EOL) performance of about 93 W/kg for a 10-year GEO mission.

  14. Small-Area Estimation of Spatial Access to Care and Its Implications for Policy.

    PubMed

    Gentili, Monica; Isett, Kim; Serban, Nicoleta; Swann, Julie

    2015-10-01

    Local or small-area estimates to capture emerging trends across large geographic regions are critical in identifying and addressing community-level health interventions. However, they are often unavailable due to lack of analytic capabilities in compiling and integrating extensive datasets and complementing them with the knowledge about variations in state-level health policies. This study introduces a modeling approach for small-area estimation of spatial access to pediatric primary care that is data "rich" and mathematically rigorous, integrating data and health policy in a systematic way. We illustrate the sensitivity of the model to policy decision making across large geographic regions by performing a systematic comparison of the estimates at the census tract and county levels for Georgia and California. Our results show the proposed approach is able to overcome limitations of other existing models by capturing patient and provider preferences and by incorporating possible changes in health policies. The primary finding is systematic underestimation of spatial access, and inaccurate estimates of disparities across population and across geography at the county level with respect to those at the census tract level with implications on where to focus and which type of interventions to consider.

  15. Polarization-Analyzing CMOS Image Sensor With Monolithically Embedded Polarizer for Microchemistry Systems.

    PubMed

    Tokuda, T; Yamada, H; Sasagawa, K; Ohta, J

    2009-10-01

    This paper proposes and demonstrates a polarization-analyzing CMOS sensor based on image sensor architecture. The sensor was designed targeting applications for chiral analysis in a microchemistry system. The sensor features a monolithically embedded polarizer. Embedded polarizers with different angles were implemented to realize a real-time absolute measurement of the incident polarization angle. Although the pixel-level performance was confirmed to be limited, estimation schemes based on the variation of the polarizer angle provided promising performance for real-time polarization measurements. An estimation scheme using 180 pixels in a 1° step provided an estimation accuracy of 0.04°. Polarimetric measurements of chiral solutions were also successfully performed to demonstrate the applicability of the sensor to optical chiral analysis.
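
    One way such a multi-angle readout can be turned into an absolute polarization angle is to assume a Malus's-law response, I(θ) = A·cos²(θ − φ) + B, which is linear in the harmonics cos 2θ and sin 2θ and can therefore be solved by ordinary least squares. The sketch below simulates a 1° sweep and recovers φ this way; it illustrates the general estimation principle, not the sensor's on-chip scheme.

```python
# Sketch: recovering the incident polarization angle from pixel intensities
# behind polarizers at known angles, assuming a Malus's-law response
# I(theta) = A*cos^2(theta - phi) + B. Using cos^2(x) = (1 + cos 2x)/2 the
# model is linear in (a, b, c) with I = a + b*cos(2*theta) + c*sin(2*theta),
# and phi = 0.5*atan2(c, b). Data are simulated, not sensor measurements.
import numpy as np

rng = np.random.default_rng(0)
true_phi = np.deg2rad(37.0)
theta = np.deg2rad(np.arange(0, 180, 1.0))          # polarizer angles, 1 deg step
intensity = 0.8 * np.cos(theta - true_phi) ** 2 + 0.1
intensity += rng.normal(0, 0.01, size=theta.size)   # measurement noise

# Linear least squares for the harmonic model
design = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
a, b, c = np.linalg.lstsq(design, intensity, rcond=None)[0]
phi_hat = 0.5 * np.arctan2(c, b)

print(f"estimated polarization angle: {np.rad2deg(phi_hat):.2f} deg (true 37.00 deg)")
```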

  16. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.

  17. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.

  18. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.

  19. Comparison study on disturbance estimation techniques in precise slow motion control

    NASA Astrophysics Data System (ADS)

    Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.

    2010-08-01

    Precise low-speed motion control is important for industrial applications such as micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise measurement of position and instantaneous velocity and for estimation of disturbances, which include direct-drive motor force ripple, guideway friction, cutting forces, and similar effects. This paper presents a comparison study of the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The design essentials of these three disturbance estimators are introduced. For designing time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators provide not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters can give unbiased estimates of the slowly varying disturbance and the instantaneous velocity, while the time-delayed estimators cannot. Simulations and experiments conducted on the X axis of a 2.5-axis prototype micro-milling machine are provided.
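
    The state-augmented Kalman filter mentioned above works by appending the unknown disturbance to the plant state as a slowly varying (random-walk) component, so that the filter simultaneously returns low-noise estimates of position, velocity, and disturbance from a noisy position measurement. A minimal sketch for a 1-D double-integrator stand-in is shown below; the plant, noise levels, and disturbance profile are assumed for illustration and do not correspond to the feed drive studied in the paper.

```python
import numpy as np

# Minimal sketch of a state-augmented Kalman filter for disturbance estimation:
# a 1-D double integrator driven by a commanded force plus an unknown, slowly
# varying disturbance, with the disturbance appended to the state as a random
# walk. Plant, noise levels and disturbance profile are assumed for
# illustration; this is not the feed-drive model of the study.
dt, mass = 0.001, 1.0
F = np.array([[1.0, dt,  0.0],
              [0.0, 1.0, dt / mass],
              [0.0, 0.0, 1.0]])          # state: [position, velocity, disturbance]
B = np.array([0.0, dt / mass, 0.0])
H = np.array([[1.0, 0.0, 0.0]])          # only position is measured
Q = np.diag([1e-12, 1e-10, 1e-4])        # process noise; last term lets the disturbance drift
R = 1e-10                                # position-sensor noise variance

rng = np.random.default_rng(1)
x_true = np.zeros(3)
x_est, P = np.zeros(3), np.eye(3) * 1e-3
u = 0.5                                  # constant commanded force
for k in range(5000):
    d = 0.2 + 0.05 * np.sin(2 * np.pi * 0.5 * k * dt)   # "true" disturbance, N
    x_true = F @ x_true + B * u
    x_true[2] = d
    z = x_true[0] + rng.normal(0.0, np.sqrt(R))

    # predict
    x_est = F @ x_est + B * u
    P = F @ P @ F.T + Q
    # update with the scalar position measurement
    innovation = z - x_est[0]
    S = P[0, 0] + R
    K = P[:, 0] / S                      # Kalman gain for H = [1, 0, 0]
    x_est = x_est + K * innovation
    P = (np.eye(3) - np.outer(K, H[0])) @ P

print(f"estimated disturbance {x_est[2]:.3f} N vs true {d:.3f} N")
```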

  20. A fast signal subspace approach for the determination of absolute levels from phased microphone array measurements

    NASA Astrophysics Data System (ADS)

    Sarradj, Ennes

    2010-04-01

    Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimates of absolute source levels and, in some cases, also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. The method is based on an eigenvalue decomposition of the cross-spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing-edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
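
    The central step of the method, as described, is an eigendecomposition of the cross-spectral matrix (CSM), with the signal-subspace eigenvalues carrying the source contribution. The toy sketch below illustrates that separation for a single uncorrelated source received with equal gain at all microphones, where the dominant eigenvalue is approximately n_mics × source power plus the noise power; the steering-vector weighting and normalization of the actual beamforming method are omitted.

```python
# Toy sketch of the signal-subspace idea: build the cross-spectral matrix (CSM)
# of the microphone signals, eigendecompose it, and use the signal-subspace
# eigenvalue(s) to estimate the source level. One uncorrelated source with
# equal (unit) gain at all microphones is assumed, so the dominant eigenvalue
# is approximately n_mics * source power + noise power; steering-vector and
# normalization details of the actual beamforming method are omitted.
import numpy as np

rng = np.random.default_rng(2)
n_mics, n_snapshots = 16, 2000
source_power, noise_power = 1.0, 0.1

# Narrowband snapshots: common source signal + independent sensor noise
s = np.sqrt(source_power / 2) * (rng.standard_normal(n_snapshots)
                                 + 1j * rng.standard_normal(n_snapshots))
noise = np.sqrt(noise_power / 2) * (rng.standard_normal((n_mics, n_snapshots))
                                    + 1j * rng.standard_normal((n_mics, n_snapshots)))
x = s[np.newaxis, :] + noise                     # unit-gain propagation to all mics

csm = (x @ x.conj().T) / n_snapshots             # cross-spectral matrix estimate
eigvals = np.linalg.eigvalsh(csm)[::-1]          # real eigenvalues, descending

noise_floor = eigvals[1:].mean()                 # noise-subspace eigenvalues
source_est = (eigvals[0] - noise_floor) / n_mics
print(f"estimated source power: {source_est:.3f} (true {source_power})")
```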

  1. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  2. An Empirical Comparison of Discrete Choice Experiment and Best-Worst Scaling to Estimate Stakeholders' Risk Tolerance for Hip Replacement Surgery.

    PubMed

    van Dijk, Joris D; Groothuis-Oudshoorn, Catharina G M; Marshall, Deborah A; IJzerman, Maarten J

    2016-06-01

    Previous studies have been inconclusive regarding the validity and reliability of preference elicitation methods. The aim of this study was to compare the metrics obtained from a discrete choice experiment (DCE) and profile-case best-worst scaling (BWS) with respect to hip replacement. We surveyed the general US population of men aged 45 to 65 years, and potentially eligible for hip replacement surgery. The survey included sociodemographic questions, eight DCE questions, and twelve BWS questions. Attributes were the probability of a first and second revision, pain relief, ability to participate in sports and perform daily activities, and length of hospital stay. Conditional logit analysis was used to estimate attribute weights, level preferences, and the maximum acceptable risk (MAR) for undergoing revision surgery in six hypothetical treatment scenarios with different attribute levels. A total of 429 (96%) respondents were included. Comparable attribute weights and level preferences were found for both BWS and DCE. Preferences were greatest for hip replacement surgery with high pain relief and the ability to participate in sports and perform daily activities. Although the estimated MARs for revision surgery followed the same trend, the MARs were systematically higher in five of the six scenarios using DCE. This study confirms previous findings that BWS or DCEs are comparable in estimating attribute weights and level preferences. However, the risk tolerance threshold based on the estimation of MAR differs between these methods, possibly leading to inconsistency in comparing treatment scenarios. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. Web-based Tsunami Early Warning System: a case study of the 2010 Kepulaunan Mentawai Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Ulutas, E.; Inan, A.; Annunziato, A.

    2012-06-01

    This study analyzes the response of the Global Disasters Alerts and Coordination System (GDACS) in relation to a case study: the Kepulaunan Mentawai earthquake and related tsunami, which occurred on 25 October 2010. The GDACS, developed by the European Commission Joint Research Center, combines existing web-based disaster information management systems with the aim to alert the international community in case of major disasters. The tsunami simulation system is an integral part of the GDACS. In more detail, the study aims to assess the tsunami hazard on the Mentawai and Sumatra coasts: the tsunami heights and arrival times have been estimated employing three propagation models based on the long wave theory. The analysis was performed in three stages: (1) pre-calculated simulations by using the tsunami scenario database for that region, used by the GDACS system to estimate the alert level; (2) near-real-time simulated tsunami forecasts, automatically performed by the GDACS system whenever a new earthquake is detected by the seismological data providers; and (3) post-event tsunami calculations using GCMT (Global Centroid Moment Tensor) fault mechanism solutions proposed by US Geological Survey (USGS) for this event. The GDACS system estimates the alert level based on the first type of calculations and on that basis sends alert messages to its users; the second type of calculations is available within 30-40 min after the notification of the event but does not change the estimated alert level. The third type of calculations is performed to improve the initial estimations and to have a better understanding of the extent of the possible damage. The automatic alert level for the earthquake was given between Green and Orange Alert, which, in the logic of GDACS, means no need or moderate need of international humanitarian assistance; however, the earthquake generated 3 to 9 m tsunami run-up along southwestern coasts of the Pagai Islands where 431 people died. The post-event calculations indicated medium-high humanitarian impacts.

  4. Oral health status and academic performance among Ohio third-graders, 2009-2010.

    PubMed

    Detty, Amber M R; Oza-Frank, Reena

    2014-01-01

    Although recent literature indicated an association between dental caries and poor academic performance, previous work relied on self-reported measures. This analysis sought to determine the association between academic performance and untreated dental caries (tooth decay) using objective measures, controlling for school-level characteristics. School-level untreated caries prevalence was estimated from a 2009-2010 oral health survey of Ohio third-graders. Prevalence estimates were combined with school-level academic performance and other school characteristics obtained from the Ohio Department of Education. Linear regression models were developed as a result of bivariate testing, and final models were stratified based upon the presence of a school-based dental sealant program (SBSP). Preliminary bivariate analysis indicated a significant relationship between untreated caries and academic performance, which was more pronounced at schools with an SBSP. After controlling for other school characteristics, the prevalence of untreated caries was found to be a significant predictor of academic performance at schools without an SBSP (P=0.001) but not at schools with an SBSP (P=0.833). The results suggest the association between untreated caries and academic performance may be affected by the presence of a school-based oral health program. Further research focused on oral health and academic performance should consider the presence and/or availability of these programs. © 2014 American Association of Public Health Dentistry.

  5. A simulation study on Bayesian Ridge regression models for several collinearity levels

    NASA Astrophysics Data System (ADS)

    Efendi, Achmad; Effrihan

    2017-12-01

    When data are analyzed with a multiple regression model and collinearity is present, one or several predictor variables are usually omitted from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why all of the predictors are important and should be kept in the model. Ridge regression is commonly used in such research to cope with collinearity: weights (penalties) on the predictor variables are used when estimating the parameters, and estimation can then follow the likelihood framework. A Bayesian version of ridge regression offers an alternative. The Bayesian approach has been less popular than likelihood estimation because of difficulties such as its computational burden; nevertheless, with recent improvements in computational methodology, this is no longer a serious obstacle. This paper discusses a simulation study for evaluating the characteristics of Bayesian ridge regression parameter estimates. Several simulation settings are considered, based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method performs better for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
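
    A single cell of such a simulation might look like the sketch below: equi-correlated predictors are drawn for a small sample, and ordinary least squares is compared with a Bayesian ridge fit. scikit-learn's BayesianRidge is used here as a convenient stand-in for the Bayesian formulation in the paper, and all settings are illustrative.

```python
# One cell of a simulation along the lines described: collinear predictors,
# small sample, ordinary least squares versus a Bayesian ridge fit.
# scikit-learn's BayesianRidge is a stand-in for the paper's Bayesian
# formulation; settings are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression, BayesianRidge

rng = np.random.default_rng(3)
n, rho = 20, 0.95                                  # small sample, high collinearity
true_beta = np.array([2.0, -1.0, 0.5])

cov = np.full((3, 3), rho) + (1 - rho) * np.eye(3) # equi-correlated predictors
X = rng.multivariate_normal(np.zeros(3), cov, size=n)
y = X @ true_beta + rng.normal(0, 1.0, size=n)

ols = LinearRegression().fit(X, y)
bayes = BayesianRidge().fit(X, y)

for name, coef in [("OLS", ols.coef_), ("BayesianRidge", bayes.coef_)]:
    rmse = np.sqrt(np.mean((coef - true_beta) ** 2))
    print(f"{name:13s} coefficients {np.round(coef, 2)}  RMSE vs truth {rmse:.2f}")
```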

  6. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

    Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision for simulation studies 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance and choice of inference should properly reflect the purpose of the simulation.

  7. An index approach to performance-based payments for water quality.

    PubMed

    Maille, Peter; Collins, Alan R

    2012-05-30

    In this paper we describe elements of a field research project that presented farmers with economic incentives to control nitrate runoff. The approach used is novel in that payments are based on ambient water quality and water quantity produced by a watershed rather than proxies for water quality conservation. Also, payments are made based on water quality relative to a control watershed, and therefore, account for stochastic fluctuations in background nitrate levels. Finally, the program pays farmers as a group to elicit team behavior. We present our approach to modeling that allowed us to estimate prices for water and resulting payment levels. We then compare these preliminary estimates to the actual values recorded over 33 months of fieldwork. We find that our actual payments were 29% less than our preliminary estimates, due in part to the failure of our ecological model to estimate discharge accurately. Despite this shortfall, the program attracted the participation of 53% of the farmers in the watershed, and resulted in substantial nitrate abatement activity. Given this favorable response, we propose that research efforts focus on implementing field trials of group-level performance-based payments. Ideally these programs would be low risk and control for naturally occurring contamination. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Estimating glomerular filtration rate in diabetes: a comparison of cystatin-C- and creatinine-based methods.

    PubMed

    Macisaac, R J; Tsalamandris, C; Thomas, M C; Premaratne, E; Panagiotopoulos, S; Smith, T J; Poon, A; Jenkins, M A; Ratnaike, S I; Power, D A; Jerums, G

    2006-07-01

    We compared the predictive performance of a GFR estimate based on serum cystatin C levels with commonly used creatinine-based methods in subjects with diabetes. In a cross-sectional study of 251 consecutive clinic patients, the mean reference (plasma clearance of (99m)Tc-diethylene-triamine-penta-acetic acid) GFR (iGFR) was 88 ± 2 ml min⁻¹ 1.73 m⁻². A regression equation describing the relationship between iGFR and 1/cystatin C levels was derived from a test population (n=125) to allow for the estimation of GFR by cystatin C (eGFR-cystatin C). The predictive performance of the eGFR-cystatin C, Modification of Diet in Renal Disease 4-variable (MDRD-4) and Cockcroft-Gault (C-G) formulas was then compared in a validation population (n=126). There was no difference in renal function (ml min⁻¹ 1.73 m⁻²) as measured by iGFR (89.2 ± 3.0), eGFR-cystatin C (86.8 ± 2.5), MDRD-4 (87.0 ± 2.8) or C-G (92.3 ± 3.5). All three estimates of renal function had similar precision and accuracy. Estimates of GFR based solely on serum cystatin C levels had the same predictive potential when compared with the MDRD-4 and C-G formulas.
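
    The eGFR-cystatin C estimate described above comes from a simple calibration: regress measured GFR on 1/cystatin C in the test population and apply the fitted line to new patients. The sketch below shows that step with hypothetical paired values; the study's actual regression coefficients are not reproduced.

```python
# Sketch of the calibration step described: fit iGFR ~ 1/cystatin C in a test
# sample and apply the fitted line as eGFR-cystatin C for new patients. The
# paired values below are hypothetical; the study's actual regression
# coefficients are not reproduced here.
import numpy as np
from scipy import stats

cystatin_c = np.array([0.7, 0.9, 1.1, 1.4, 1.8, 2.3])    # mg/L, hypothetical
igfr = np.array([115.0, 95.0, 80.0, 62.0, 48.0, 36.0])   # mL/min/1.73 m^2

res = stats.linregress(1.0 / cystatin_c, igfr)

def egfr_cystatin(cys_c: float) -> float:
    """Estimated GFR from serum cystatin C via the fitted calibration line."""
    return res.intercept + res.slope / cys_c

print(f"eGFR for cystatin C = 1.2 mg/L: {egfr_cystatin(1.2):.0f} mL/min/1.73 m^2")
```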

  9. A systematic evaluation of different methods for calculating adolescent vaccination levels using immunization information system data.

    PubMed

    Gowda, Charitha; Dong, Shiming; Potter, Rachel C; Dombkowski, Kevin J; Stokley, Shannon; Dempsey, Amanda F

    2013-01-01

    Immunization information systems (IISs) are valuable surveillance tools; however, population relocation may introduce bias when determining immunization coverage. We explored alternative methods for estimating the vaccine-eligible population when calculating adolescent immunization levels using a statewide IIS. We performed a retrospective analysis of the Michigan State Care Improvement Registry (MCIR) for all adolescents aged 11-18 years registered in the MCIR as of October 2010. We explored four methods for determining denominators: (1) including all adolescents with MCIR records, (2) excluding adolescents with out-of-state residence, (3) further excluding those without MCIR activity ≥ 10 years prior to the evaluation date, and (4) using a denominator based on U.S. Census data. We estimated state- and county-specific coverage levels for four adolescent vaccines. We found a 20% difference in estimated vaccination coverage between the most inclusive and restrictive denominator populations. Although there was some variability among the four methods in vaccination at the state level (2%-11%), greater variation occurred at the county level (up to 21%). This variation was substantial enough to potentially impact public health assessments of immunization programs. Generally, vaccines with higher coverage levels had greater absolute variation, as did counties with smaller populations. At the county level, using the four denominator calculation methods resulted in substantial differences in estimated adolescent immunization rates that were less apparent when aggregated at the state level. Further research is needed to ascertain the most appropriate method for estimating vaccine coverage levels using IIS data.

  10. Performance-based methodology for assessing seismic vulnerability and capacity of buildings

    NASA Astrophysics Data System (ADS)

    Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li

    2010-06-01

    This paper presents a performance-based methodology for the assessment of the seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (Sd) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between Sd and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess the seismic damage of a large building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and illustrates the relationship between the seismic capacity of buildings and seismic action. The estimated results are compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.

  11. Full-scale 3-D finite element modeling of a two-loop pressurized water reactor for heat transfer, thermal–mechanical cyclic stress analysis, and environmental fatigue life estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Soppet, William K.; Majumdar, Saurindranath

    This paper discusses a system-level finite element model of a two-loop pressurized water reactor (PWR). Based on this model, system-level heat transfer analysis and subsequent sequentially coupled thermal-mechanical stress analysis were performed for typical thermal-mechanical fatigue cycles. The in-air fatigue lives of example components, such as the hot and cold legs, were estimated on the basis of stress analysis results, ASME in-air fatigue life estimation criteria, and fatigue design curves. Furthermore, environmental correction factors and associated PWR environment fatigue lives for the hot and cold legs were estimated by using estimated stress and strain histories and the approach described in the US-NRC report NUREG-6909.

  12. Characterization of turbulence stability through the identification of multifractional Brownian motions

    NASA Astrophysics Data System (ADS)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models in describing real-life signals of high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and variance, which relates to an energy level, are two parameters that characterize multifractional Brownian motions. This research suggests a combined method of estimating the time-changing Hurst exponent and variance using the local variation of sampled paths of signals. The method consists of two phases: initially estimating global variance and then accurately estimating the time-changing Hurst exponent. A simulation study shows its performance in estimation of the parameters. The proposed method is applied to characterization of atmospheric stability in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmosphere flows from unstable ones.
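
    A generic local-variation estimator illustrates the idea (a sketch under simplifying assumptions, not the paper's exact two-phase procedure): within a sliding window, the mean squared increment scales as the lag raised to the power 2H, so a log-log regression yields the local Hurst exponent, while the intercept tracks the local energy (variance) level.

    ```python
    import numpy as np

    def local_hurst(x, window=256, lags=(1, 2, 4, 8)):
        """Return local Hurst exponents and local scale (variance-like) levels."""
        x = np.asarray(x, dtype=float)
        hursts, scales = [], []
        for start in range(0, len(x) - window, window // 2):   # half-overlapping windows
            seg = x[start:start + window]
            log_lag, log_msq = [], []
            for k in lags:
                inc = seg[k:] - seg[:-k]
                log_lag.append(np.log(k))
                log_msq.append(np.log(np.mean(inc ** 2)))
            slope, intercept = np.polyfit(log_lag, log_msq, 1)
            hursts.append(slope / 2.0)                          # slope = 2H
            scales.append(np.exp(intercept))                    # proxy for local energy level
        return np.array(hursts), np.array(scales)

    # Example on ordinary Brownian motion (H should come out near 0.5).
    rng = np.random.default_rng(0)
    bm = np.cumsum(rng.normal(size=10_000))
    H, S = local_hurst(bm)
    print(H.mean())
    ```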

  13. The first step toward genetic selection for host tolerance to infectious pathogens: obtaining the tolerance phenotype through group estimates

    PubMed Central

    Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias

    2012-01-01

    Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may be hampered first and foremost by the difficulty of obtaining unbiased tolerance estimates at the phenotypic level. In contrast to resistance, which can be inferred by appropriate measures of within host pathogen burden, tolerance is more difficult to quantify as it refers to change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection. PMID:23412990
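
    As a minimal sketch of the group-level tolerance phenotype described above (illustrative data, not the paper's simulation): regressing performance on within-host pathogen burden within a group yields a slope interpreted as tolerance and an intercept approximating vigor.

    ```python
    import numpy as np

    def group_tolerance(pathogen_burden, performance):
        """Slope = group tolerance estimate; intercept ~ vigor (pathogen-free performance)."""
        slope, intercept = np.polyfit(pathogen_burden, performance, 1)
        return {"tolerance_slope": slope, "vigor_intercept": intercept}

    # Illustrative group: true vigor 100, true tolerance slope -3.
    rng = np.random.default_rng(1)
    burden = rng.uniform(0, 10, size=40)
    perf = 100 - 3.0 * burden + rng.normal(0, 5, size=40)
    print(group_tolerance(burden, perf))
    ```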

  14. Requirements Flowdown for Prognostics and Health Management

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita

    2012-01-01

    Prognostics and Health Management (PHM) principles have considerable promise to change the game of lifecycle cost of engineering systems at high safety levels by providing a reliable estimate of future system states. This estimate is a key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in into the systems engineering process is lagging behind, which delays fielding of PHM-enabled systems. The derivation of specifications from high level requirements for algorithm performance to ensure quality predictions is not well developed. From an engineering perspective some key parameters driving the requirements for prognostics performance include: (1) maximum allowable Probability of Failure (PoF) of the prognostic system to bound the risk of losing an asset, (2) tolerable limits on proactive maintenance to minimize missed opportunity of asset usage, (3) lead time to specify the amount of advanced warning needed for actionable decisions, and (4) required confidence to specify when prognosis is sufficiently good to be used. This paper takes a systems engineering view towards the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.

  15. Methodology for conceptual remote sensing spacecraft technology: insertion analysis balancing performance, cost, and risk

    NASA Astrophysics Data System (ADS)

    Bearden, David A.; Duclos, Donald P.; Barrera, Mark J.; Mosher, Todd J.; Lao, Norman Y.

    1997-12-01

    Emerging technologies and micro-instrumentation are changing the way remote sensing spacecraft missions are developed and implemented. Government agencies responsible for procuring space systems are increasingly requesting analyses to estimate cost, performance and design impacts of advanced technology insertion for both state-of-the-art systems and systems to be built 5 to 10 years in the future. Numerous spacecraft technology development programs are being sponsored by Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) agencies with the goal of enhancing spacecraft performance, reducing mass, and reducing cost. However, it is often the case that technology studies, in the interest of maximizing subsystem-level performance and/or mass reduction, do not anticipate synergistic system-level effects. Furthermore, even though technical risks are often identified as one of the largest cost drivers for space systems, many cost/design processes and models ignore effects of cost risk in the interest of quick estimates. To address these issues, the Aerospace Corporation developed a concept analysis methodology and associated software tools. These tools, collectively referred to as the concept analysis and design evaluation toolkit (CADET), facilitate system architecture studies and space system conceptual designs focusing on design heritage, technology selection, and associated effects on cost, risk and performance at the system and subsystem level. CADET allows: (1) quick response to technical design and cost questions; (2) assessment of the cost and performance impacts of existing and new designs/technologies; and (3) estimation of cost uncertainties and risks. These capabilities aid mission designers in determining the configuration of remote sensing missions that meet essential requirements in a cost-effective manner. This paper discusses the development of CADET modules and their application to several remote sensing satellite mission concepts.

  16. Cost projections for Redox Energy storage systems

    NASA Technical Reports Server (NTRS)

    Michaels, K.; Hall, G.

    1980-01-01

    A preliminary design and system cost analysis was performed for the redox energy storage system. A conceptual design and cost estimate was prepared for each of two energy applications: (1) an electric utility 100-MWh requirement (10 MW for ten hours) for energy storage in a utility load-leveling application, and (2) a 500-kWh requirement (10 kW for 50 hours) for use with a variety of residential or commercial applications, including stand-alone solar photovoltaic systems. The conceptual designs were based on cell performance levels, system design parameters, and special material costs. These data were combined with thermodynamic and hydraulic analyses to provide preliminary system designs. Results indicate that the redox cell stack is amenable to mass production techniques and has a relatively low material cost.

  17. Correlation of Apollo oxygen tank thermodynamic performance predictions

    NASA Technical Reports Server (NTRS)

    Patterson, H. W.

    1971-01-01

    Parameters necessary to analyze the stratified performance of the Apollo oxygen tanks include g levels, tank elasticity, flow rates and pressurized volumes. Methods for estimating g levels and flow rates from flight plans prior to flight, and from guidance and system data for use in the post-flight analysis, are described. Equilibrium thermodynamic equations are developed for the effects of tank elasticity and pressurized volumes on the tank pressure response, and their relative magnitudes are discussed. Correlations of tank pressures and heater temperatures from flight data with the results of a stratification model are shown. Heater temperatures estimated with empirical heat transfer relations also agreed with flight data when fluid properties were averaged rather than evaluated at the mean film temperature.

  18. Advanced Propfan Engine Technology (APET) and Single-rotation Gearbox/Pitch Change Mechanism

    NASA Technical Reports Server (NTRS)

    Sargisson, D. F.

    1985-01-01

    The projected performance in the 1990s of equivalent-technology-level, high-bypass-ratio turbofan-powered aircraft (at the 150-passenger size) is compared with that of advanced turboprop propulsion systems. Fuel burn analysis, economic analysis, and pollution (noise, emissions) estimates were made. Three different cruise Mach numbers were investigated for both the turbofan and the turboprop systems. Aerodynamic design and performance estimates were made for nacelles, inlets, and exhaust systems. Air-to-oil heat exchangers were investigated for oil cooling of advanced gearboxes at the 12,500 SHP level. The results and conclusions are positive in that high-speed turboprop aircraft will exhibit superior fuel burn characteristics and lower operating costs when compared with equivalent technology turbofan aircraft.

  19. Six degree-of-freedom analysis of hip, knee, ankle and foot provides updated understanding of biomechanical work during human walking.

    PubMed

    Zelik, Karl E; Takahashi, Kota Z; Sawicki, Gregory S

    2015-03-01

    Measuring biomechanical work performed by humans and other animals is critical for understanding muscle-tendon function, joint-specific contributions and energy-saving mechanisms during locomotion. Inverse dynamics is often employed to estimate joint-level contributions, and deformable body estimates can be used to study work performed by the foot. We recently discovered that these commonly used experimental estimates fail to explain whole-body energy changes observed during human walking. By re-analyzing previously published data, we found that about 25% (8 J) of total positive energy changes of/about the body's center-of-mass and >30% of the energy changes during the Push-off phase of walking were not explained by conventional joint- and segment-level work estimates, exposing a gap in our fundamental understanding of work production during gait. Here, we present a novel Energy-Accounting analysis that integrates various empirical measures of work and energy to elucidate the source of unexplained biomechanical work. We discovered that by extending conventional 3 degree-of-freedom (DOF) inverse dynamics (estimating rotational work about joints) to 6DOF (rotational and translational) analysis of the hip, knee, ankle and foot, we could fully explain the missing positive work. This revealed that Push-off work performed about the hip may be >50% greater than conventionally estimated (9.3 versus 6.0 J, P=0.0002, at 1.4 m s(-1)). Our findings demonstrate that 6DOF analysis (of hip-knee-ankle-foot) better captures energy changes of the body than more conventional 3DOF estimates. These findings refine our fundamental understanding of how work is distributed within the body, which has implications for assistive technology, biomechanical simulations and potentially clinical treatment. © 2015. Published by The Company of Biologists Ltd.
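
    A hedged sketch of the 6DOF accounting described above (simplified, not the authors' full Energy-Accounting pipeline): joint power is the usual 3DOF rotational term plus a translational term, and positive work is the time integral of the positive part of power. Inputs are assumed to come from inverse dynamics and are placeholders here.

    ```python
    import numpy as np

    def joint_power_6dof(moment, rel_ang_vel, force, rel_lin_vel):
        """All inputs are (n_samples, 3) arrays; returns per-sample power in watts."""
        rotational = np.einsum("ij,ij->i", moment, rel_ang_vel)      # conventional 3DOF term
        translational = np.einsum("ij,ij->i", force, rel_lin_vel)    # additional 6DOF term
        return rotational + translational

    def positive_work(power, dt):
        """Positive mechanical work (J): integrate only the positive part of power."""
        return np.trapz(np.clip(power, 0.0, None), dx=dt)
    ```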

  20. Manufacturing Cost Levelization Model – A User’s Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, William R.; Shehabi, Arman; Smith, Sarah Josephine

    The Manufacturing Cost Levelization Model is a cost-performance techno-economic model that estimates the total large-scale manufacturing costs necessary to produce a given product. It is designed to provide production cost estimates for technology researchers to help guide technology research and development towards an eventual cost-effective product. The model presented in this user's guide is generic and can be tailored to the manufacturing of any product, including the generation of electricity (as a product). This flexibility, however, requires the user to develop the processes and process efficiencies that represent a full-scale manufacturing facility. The generic model comprises several modules that estimate variable costs (material, labor, and operating), fixed costs (capital and maintenance), financing structures (debt and equity financing), and tax implications (taxable income after equipment and building depreciation, debt interest payments, and expenses) of a notional manufacturing plant. A cash-flow method is used to estimate a selling price necessary for the manufacturing plant to recover its total cost of production. A levelized unit sales price ($ per unit of product) is determined by dividing the net present value of the manufacturing plant's expenses ($) by the net present value of its product output. A user-defined production schedule drives the cash-flow method that determines the levelized unit price. In addition, an analyst can increase the levelized unit price to include a gross profit margin to estimate a product sales price. This model allows an analyst to understand the effect that any input variable could have on the cost of manufacturing a product. In addition, the tool is able to perform sensitivity analysis, which can be used to identify the key variables and assumptions that have the greatest influence on the levelized costs. This component is intended to help technology researchers focus their research attention on tasks that offer the greatest opportunities for cost reduction early in the research and development stages of technology invention.
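
    A minimal sketch of the levelization step described in the guide (illustrative numbers and a simplified cash flow; the model's financing and tax modules are omitted):

    ```python
    # Levelized unit price = NPV(all plant expenses) / NPV(product output),
    # optionally marked up by a gross profit margin to form a sales price.
    def npv(cash_flows, rate):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    def levelized_unit_price(annual_costs, annual_output_units, discount_rate, gross_margin=0.0):
        base = npv(annual_costs, discount_rate) / npv(annual_output_units, discount_rate)
        return base * (1.0 + gross_margin)

    costs = [5_000_000] + [1_200_000] * 10    # year-0 capital plus operating costs (illustrative)
    output = [0] + [100_000] * 10             # units produced per year (illustrative)
    print(levelized_unit_price(costs, output, discount_rate=0.08))
    ```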

  1. Effect of Risk of Bias on the Effect Size of Meta-Analytic Estimates in Randomized Controlled Trials in Periodontology and Implant Dentistry.

    PubMed

    Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang

    2015-01-01

    Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. A search for Cochrane systematic reviews (SRs), including meta-analyses of RCTs published in the periodontology and implant dentistry fields, was performed in the Cochrane Library in September 2014. Random-effects meta-analyses were performed by grouping RCTs with different levels of ROB in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Of the 24 initially screened SRs, 21 SRs were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal a significant relationship between the ROB level and the size of treatment effect estimates, although a trend toward inflated estimates was observed in domains with unclear ROB. In this sample of RCTs, high and (mainly) unclear risks of selection and detection biases did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association.

  2. Residential exposure to air toxics is linked to lower grade point averages among school children in El Paso, Texas, USA

    PubMed Central

    Clark-Reyna, Stephanie E.; Grineski, Sara E.; Collins, Timothy W.

    2015-01-01

    Children in low-income neighborhoods tend to be disproportionately exposed to environmental toxicants. This is cause for concern because exposure to environmental toxicants negatively affects health, which can impair academic success. To date, it is unknown whether associations between air toxics and academic performance found in previous school-level studies persist when studying individual children. By pairing National Air Toxics Assessment (NATA) risk estimates for respiratory hazard and diesel particulate matter, disaggregated by source, with individual-level data collected through a mail survey, this paper examines the effects of exposure to residential environmental toxics on academic performance for individual children for the first time, adjusting for school-level effects using generalized estimating equations. We find that higher levels of residential air toxics, especially those from non-road mobile sources, are statistically significantly associated with lower grade point averages among fourth and fifth grade school children in El Paso (Texas, USA). PMID:27034529

  3. Solid rocket motor cost model

    NASA Technical Reports Server (NTRS)

    Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.

    1972-01-01

    A systematic and standardized procedure is presented for estimating the life cycle costs of solid rocket motor (SRM) booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.

  4. Developing corridor-level truck travel time estimates and other freight performance measures from archived ITS data.

    DOT National Transportation Integrated Search

    2009-08-01

    The objectives of this research were to retrospectively study the feasibility for using truck transponder data to produce freight corridor performance measures (travel times) and real-time traveler information. To support this analysis, weigh-in-moti...

  5. Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA

    PubMed Central

    Michael, Andrew M.; Anderson, Mathew; Miller, Robyn L.; Adalı, Tülay; Calhoun, Vince D.

    2014-01-01

    Independent component analysis (ICA) is a widely applied technique to derive functionally connected brain networks from fMRI data. Group ICA (GICA) and Independent Vector Analysis (IVA) are extensions of ICA that enable users to perform group fMRI analyses; however, a full comparison of the performance limits of GICA and IVA has not been investigated. Recent interest in resting-state fMRI data, with its potentially higher degree of subject variability, makes the evaluation of the above techniques important. In this paper we compare component estimation accuracies of GICA and an improved version of IVA using simulated fMRI datasets. We systematically change the degree of inter-subject spatial variability of components and evaluate estimation accuracy over all spatial maps (SMs) and time courses (TCs) of the decomposition. Our results indicate the following: (1) at low levels of SM variability or when just one SM is varied, both GICA and IVA perform well; (2) at higher levels of SM variability or when more than one SM is varied, IVA continues to perform well but GICA yields SM estimates that are composites of other SMs with errors in TCs; (3) both GICA and IVA remove spatial correlations of overlapping SMs and introduce artificial correlations in their TCs; (4) if the number of SMs is overestimated, IVA continues to perform well but GICA introduces artifacts in the varying and extra SMs with artificial correlations in the TCs of extra components; and (5) in the absence or presence of SMs unique to one subject, GICA produces errors in TCs whereas IVA estimates are accurate. In summary, our simulation experiments (both simplistic and realistic) and our holistic analysis approach indicate that IVA produces results that are closer to ground truth and thereby better preserves subject variability. The improved version of IVA is now packaged into the GIFT toolbox (http://mialab.mrn.org/software/gift). PMID:25018704

  6. Small area estimation (SAE) model: Case study of poverty in West Java Province

    NASA Astrophysics Data System (ADS)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

    This paper compares direct estimation with an indirect Small Area Estimation (SAE) model. Model selection addressed multicollinearity in the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of interest were the area-level proportions of poor agricultural-venture households and poor agricultural households in West Java Province, which can be estimated either directly or through SAE. Direct estimation was problematic: because of small sample sizes, three areas had no sampled households and could not be estimated directly. The estimated proportion of poor agricultural-venture households was 19.22% and that of poor agricultural households was 46.79%. For agricultural-venture households, the best model retained only non-collinear auxiliary variables, whereas for agricultural households the best model used PC. For both parameters, SAE outperformed direct estimation at the area level in West Java Province: it overcame the small sample sizes and produced small-area estimates with higher accuracy and better precision than the direct estimator.

  7. The Society of Thoracic Surgeons Composite Measure of Individual Surgeon Performance for Adult Cardiac Surgery: A Report of The Society of Thoracic Surgeons Quality Measurement Task Force.

    PubMed

    Shahian, David M; He, Xia; Jacobs, Jeffrey P; Kurlansky, Paul A; Badhwar, Vinay; Cleveland, Joseph C; Fazzalari, Frank L; Filardo, Giovanni; Normand, Sharon-Lise T; Furnary, Anthony P; Magee, Mitchell J; Rankin, J Scott; Welke, Karl F; Han, Jane; O'Brien, Sean M

    2015-10-01

    Previous composite performance measures of The Society of Thoracic Surgeons (STS) were estimated at the STS participant level, typically a hospital or group practice. The STS Quality Measurement Task Force has now developed a multiprocedural, multidimensional composite measure suitable for estimating the performance of individual surgeons. The development sample from the STS National Database included 621,489 isolated coronary artery bypass grafting procedures, isolated aortic valve replacement, aortic valve replacement plus coronary artery bypass grafting, mitral, or mitral plus coronary artery bypass grafting procedures performed by 2,286 surgeons between July 1, 2011, and June 30, 2014. Each surgeon's composite score combined their aggregate risk-adjusted mortality and major morbidity rates (each weighted inversely by their standard deviations) and reflected the proportion of case types they performed. Model parameters were estimated in a Bayesian framework. Composite star ratings were examined using 90%, 95%, or 98% Bayesian credible intervals. Measure reliability was estimated using various 3-year case thresholds. The final composite measure was defined as 0.81 × (1 minus risk-standardized mortality rate) + 0.19 × (1 minus risk-standardized complication rate). Risk-adjusted mortality (median, 2.3%; interquartile range, 1.7% to 3.0%), morbidity (median, 13.7%; interquartile range, 10.8% to 17.1%), and composite scores (median, 95.4%; interquartile range, 94.4% to 96.3%) varied substantially across surgeons. Using 98% Bayesian credible intervals, there were 207 1-star (lower performance) surgeons (9.1%), 1,701 2-star (as-expected performance) surgeons (74.4%), and 378 3-star (higher performance) surgeons (16.5%). With an eligibility threshold of 100 cases over 3 years, measure reliability was 0.81. The STS has developed a multiprocedural composite measure suitable for evaluating performance at the individual surgeon level. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
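
    The quoted composite definition can be transcribed directly (the Bayesian estimation of the risk-standardized rates themselves is not reproduced here; the input rates below are the reported medians):

    ```python
    def sts_surgeon_composite(risk_std_mortality, risk_std_morbidity):
        """Both inputs are proportions (e.g., 0.023 for 2.3%)."""
        return 0.81 * (1.0 - risk_std_mortality) + 0.19 * (1.0 - risk_std_morbidity)

    # Median mortality 2.3% and morbidity 13.7% give a composite near the reported 95.4%.
    print(round(sts_surgeon_composite(0.023, 0.137), 4))
    ```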

  8. To what extent might N2 limit dive performance in king penguins?

    PubMed

    Fahlman, A; Schmidt, A; Jones, D R; Bostrom, B L; Handrich, Y

    2007-10-01

    A mathematical model was used to explore if elevated levels of N2, and risk of decompression sickness (DCS), could limit dive performance (duration and depth) in king penguins (Aptenodytes patagonicus). The model allowed prediction of blood and tissue (central circulation, muscle, brain and fat) N2 tensions (P(N2)) based on different cardiac outputs and blood flow distributions. Estimated mixed venous P(N2) agreed with values observed during forced dives in a compression chamber used to validate the assumptions of the model. During bouts of foraging dives, estimated mixed venous and tissue P(N2) increased as the bout progressed. Estimated mean maximum mixed venous P(N2) upon return to the surface after a dive was 4.56+/-0.18 atmospheres absolute (ATA; range: 4.37-4.78 ATA). This is equivalent to N2 levels causing a 50% DCS incidence in terrestrial animals of similar mass. Bout termination events were not associated with extreme mixed venous N2 levels. Fat P(N2) was positively correlated with bout duration and the highest estimated fat P(N2) occurred at the end of a dive bout. The model suggested that short and shallow dives occurring between dive bouts help to reduce supersaturation and thereby DCS risk. Furthermore, adipose tissue could also help reduce DCS risk during the first few dives in a bout by functioning as a sink to buffer extreme levels of N2.

  9. Comparison of exposure estimation methods for air pollutants: ambient monitoring data and regional air quality simulation.

    PubMed

    Bravo, Mercedes A; Fuentes, Montserrat; Zhang, Yang; Burr, Michael J; Bell, Michelle L

    2012-07-01

    Air quality modeling could potentially improve exposure estimates for use in epidemiological studies. We investigated this application of air quality modeling by estimating location-specific (point) and spatially aggregated (county-level) exposure concentrations of particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5) and ozone (O3) for the eastern U.S. in 2002 using the Community Multi-scale Air Quality (CMAQ) modeling system and a traditional approach using ambient monitors. The monitoring approach produced estimates for 370 and 454 counties for PM2.5 and O3, respectively. Modeled estimates included 1861 counties, covering 50% more population. The population not covered by monitors differed from those near monitors (e.g., urbanicity, race, education, age, unemployment, income, modeled pollutant levels). CMAQ overestimated O3 (annual normalized mean bias = 4.30%), while modeled PM2.5 had an annual normalized mean bias of -2.09%, although bias varied seasonally, from 32% in November to -27% in July. Epidemiology may benefit from air quality modeling, with improved spatial and temporal resolution and the ability to study populations far from monitors that may differ from those near monitors. However, model performance varied by measure of performance, season, and location. Thus, the appropriateness of using such modeled exposures in health studies depends on the pollutant and metric of concern, acceptable level of uncertainty, population of interest, study design, and other factors. Copyright © 2012 Elsevier Inc. All rights reserved.
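
    For reference, the normalized mean bias statistic quoted above is sum(model - obs) / sum(obs) x 100%; a small helper with purely illustrative values:

    ```python
    import numpy as np

    def normalized_mean_bias(modeled, observed):
        modeled, observed = np.asarray(modeled, float), np.asarray(observed, float)
        return 100.0 * (modeled - observed).sum() / observed.sum()

    obs = np.array([10.0, 12.0, 15.0, 9.0])   # e.g., monitored PM2.5 in µg/m3 (illustrative)
    mod = np.array([9.5, 11.0, 16.0, 8.8])    # model output at the same locations (illustrative)
    print(normalized_mean_bias(mod, obs))
    ```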

  10. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.

  11. Latest NASA Instrument Cost Model (NICM): Version VI

    NASA Technical Reports Server (NTRS)

    Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary

    2014-01-01

    The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost estimation by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable for instruments flying on Explorer-like class missions; 2) the new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.

  12. Health impact assessment and monetary valuation of IQ loss in pre-school children due to lead exposure through locally produced food.

    PubMed

    Bierkens, J; Buekers, J; Van Holderbeke, M; Torfs, R

    2012-01-01

    A case study has been performed covering the full chain from policy drivers to quantification of the health effect of lead exposure through locally produced food, namely loss of IQ in pre-school children, at the population level across the EU-27, including monetary valuation of the estimated health impact. The main policy scenarios cover the period from 2000 to 2020 and include the most important Community policy developments expected to affect the environmental release of lead (Pb) and corresponding human exposure patterns. Three distinct scenarios were explored: the emission situation based on 2000 data, a business-as-usual scenario (BAU) up to 2010 and 2020, and a scenario incorporating the most likely technological change expected (Most Feasible Technical Reductions, MFTR) in response to current and future legislation. Consecutive model calculations (MSCE-HM, WATSON, XtraFOOD, IEUBK) were performed by different partners on the project as part of the full chain approach to derive estimates of blood lead (B-Pb) levels in children as a consequence of the consumption of local produce. The estimated B-Pb levels were translated into an average loss of IQ points per child using an empirical relationship based on a meta-analysis performed by Schwartz (1994). The calculated losses in IQ points were subsequently translated into an average cost per child using an estimate of €10,000 per IQ point lost, based on data from a literature review. The estimated average reduction in cost per child across all countries considered, relative to baseline conditions, is 12.16% under BAU and 18.08% under MFTR in 2010; in 2020 the corresponding reductions are 20.19% and 23.39%. The case study provides an example of the full-chain impact pathway approach, taking into account all foreseeable pathways both for assessing the environmental fate and the associated human exposure and the mode of toxic action, to arrive at quantitative estimates of health impacts at both the individual and population levels at the EU scale. As the estimated B-Pb levels fall below the range of observed biomonitoring data collected for pre-school children in 6 different EU countries, results presented in this paper are only a first approximation of the costs entailed in the health effects of exposure to lead and the potential benefits that may arise from MFTR measures inscribed in Commission policies. Copyright © 2011 Elsevier B.V. All rights reserved.
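
    A simplified sketch of the final valuation step (the dose-response slope below is a placeholder, not the study's coefficient): convert an estimated blood lead level into IQ-point loss via a linear Schwartz-type slope, then value each point at €10,000.

    ```python
    def iq_loss_cost(blood_lead_ug_dl, iq_points_per_ug_dl, euro_per_iq_point=10_000.0):
        """Return (average IQ-point loss per child, monetary cost per child in euros)."""
        iq_loss = blood_lead_ug_dl * iq_points_per_ug_dl
        return iq_loss, iq_loss * euro_per_iq_point

    # Both the blood lead level and the slope here are illustrative placeholders.
    loss, cost = iq_loss_cost(blood_lead_ug_dl=2.0, iq_points_per_ug_dl=0.25)
    print(f"IQ loss ~ {loss:.2f} points, cost ~ EUR {cost:,.0f} per child")
    ```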

  13. Impacts of different types of measurements on estimating unsaturated flow parameters

    NASA Astrophysics Data System (ADS)

    Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru

    2015-05-01

    This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert little or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
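
    A minimal stochastic EnKF analysis step of the kind used here, sketched under standard assumptions (perturbed observations, Gaussian errors, parameters carried in the state vector); it is not the paper's full assimilation workflow.

    ```python
    import numpy as np

    def enkf_update(ensemble, observations, obs_operator, obs_error_std, rng):
        """ensemble: (n_members, n_state); observations: (n_obs,); obs_operator: state -> obs."""
        n_members = ensemble.shape[0]
        predicted = np.array([obs_operator(member) for member in ensemble])   # (n_members, n_obs)

        x_anom = ensemble - ensemble.mean(axis=0)
        y_anom = predicted - predicted.mean(axis=0)
        p_xy = x_anom.T @ y_anom / (n_members - 1)                            # cross-covariance
        p_yy = y_anom.T @ y_anom / (n_members - 1) + np.diag(
            np.full(observations.size, obs_error_std ** 2))                   # innovation covariance
        gain = p_xy @ np.linalg.inv(p_yy)                                     # Kalman gain

        perturbed_obs = observations + rng.normal(
            0.0, obs_error_std, size=(n_members, observations.size))
        return ensemble + (perturbed_obs - predicted) @ gain.T
    ```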

  14. Impacts of Different Types of Measurements on Estimating Unsaturatedflow Parameters

    NASA Astrophysics Data System (ADS)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert little or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.

  15. Missing Data Treatments at the Second Level of Hierarchical Linear Models

    ERIC Educational Resources Information Center

    St. Clair, Suzanne W.

    2011-01-01

    The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included (a) the number of Level-2 variables with missing…

  16. Revised self-noise estimates for Güralp broadband seismometers concerning ambient noise levels of the UK mainland: implications for detectability of induced seismic events

    NASA Astrophysics Data System (ADS)

    Hicks, S. P.; Hill, P.; Goessen, S.; Rietbrock, A.; Garth, T.

    2016-12-01

    The self-noise level of a broadband seismometer is a commonly used parameter for evaluating instrument performance. There are several independent studies of various instruments' self-noise (e.g. Ringler & Hutt, 2010; Tasič & Runovc, 2012). However, due to ongoing developments in instrument design (i.e. mechanics and electronics), it is essential to regularly assess any changes in self-noise, which could indicate improvements or deterioration in instrument design and performance over time. We present new self-noise estimates for a range of Güralp broadband seismometers (3T, 3ESPC, 40T, 6T). We use the three-channel coherence analysis of Sleeman et al. (2006) to estimate the self-noise of these instruments. Based on coherency analysis, we also perform a mathematical rotation of the measured waveforms to account for any relative sensor misalignment errors, which can cause artefacts of amplified self-noise around the microseismic peak (Tasič & Runovc, 2012). The instruments were tested for a period of several months at a seismic vault located at the Eskdalemuir array in southern Scotland. We discuss the implications of these self-noise estimates within the framework of the ambient noise level across the mainland United Kingdom. Using attenuation relationships derived for the United Kingdom, we investigate the detection capability thresholds of the UK National Seismic Network within the framework of a Traffic Light System (TLS) that has been proposed for monitoring induced seismic events due to shale gas extraction.
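
    A hedged sketch of the Sleeman et al. (2006) three-channel estimate: with three co-located sensors recording the same ground motion, the self-noise PSD of sensor i is Nii = Pii - Pji*Pik/Pjk, where Pxy are (cross-)power spectral densities; the misalignment rotation step discussed above is omitted here.

    ```python
    import numpy as np
    from scipy.signal import csd

    def three_channel_self_noise(x, fs, nperseg=4096):
        """x: (3, n_samples) synchronized recordings; returns (freqs, self-noise PSDs (3, nfreq))."""
        P = {}
        for a in range(3):
            for b in range(3):
                freqs, P[(a, b)] = csd(x[a], x[b], fs=fs, nperseg=nperseg)
        noise = []
        for i in range(3):
            j, k = [c for c in range(3) if c != i]
            n_ii = P[(i, i)] - P[(j, i)] * P[(i, k)] / P[(j, k)]   # Sleeman-style estimate
            noise.append(np.real(n_ii))
        return freqs, np.array(noise)
    ```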

  17. Sleep Deprivation Impairs and Caffeine Enhances My Performance, but Not Always Our Performance

    PubMed Central

    Faber, Nadira S.; Häusser, Jan A.; Kerr, Norbert L.

    2016-01-01

    What effects do factors that impair or enhance performance in individuals have when these individuals act in groups? We provide a framework, called the GIE ("Effects of Grouping on Impairments and Enhancements") framework, for investigating this question. As prominent examples for individual-level impairments and enhancements, we discuss sleep deprivation and caffeine. Based on previous research, we derive hypotheses on how they influence performance in groups, specifically process gains and losses in motivation, individual capability, and coordination. We conclude that the effect an impairment or enhancement has on individual-level performance is not necessarily mirrored in group performance: grouping can help or hurt. We provide recommendations on how to estimate empirically the effects individual-level performance impairments and enhancements have in groups. By comparing sleep deprivation to stress and caffeine to pharmacological cognitive enhancement, we illustrate that we cannot readily generalize from group results on one impairment or enhancement to another, even if they have similar effects on individual-level performance. PMID:26468077

  18. Sleep Deprivation Impairs and Caffeine Enhances My Performance, but Not Always Our Performance.

    PubMed

    Faber, Nadira S; Häusser, Jan A; Kerr, Norbert L

    2017-02-01

    What effects do factors that impair or enhance performance in individuals have when these individuals act in groups? We provide a framework, called the GIE ("Effects of Grouping on Impairments and Enhancements") framework, for investigating this question. As prominent examples for individual-level impairments and enhancements, we discuss sleep deprivation and caffeine. Based on previous research, we derive hypotheses on how they influence performance in groups, specifically process gains and losses in motivation, individual capability, and coordination. We conclude that the effect an impairment or enhancement has on individual-level performance is not necessarily mirrored in group performance: grouping can help or hurt. We provide recommendations on how to estimate empirically the effects individual-level performance impairments and enhancements have in groups. By comparing sleep deprivation to stress and caffeine to pharmacological cognitive enhancement, we illustrate that we cannot readily generalize from group results on one impairment or enhancement to another, even if they have similar effects on individual-level performance.

  19. A Multilevel Investigation of the Association between School Context and Adolescent Nonphysical Bullying

    PubMed Central

    GREEN, JENNIFER GREIF; DUNN, ERIN C.; JOHNSON, RENEE M.; MOLNAR, BETH E.

    2011-01-01

    Although researchers have identified individual-level predictors of nonphysical bullying among children and youth, school-level predictors (i.e., characteristics of the school environment that influence bullying exposure) remain largely unstudied. Using data from a survey of 1,838 students in 21 Boston public high schools, we used multilevel modeling techniques to estimate the level of variation across schools in student reports of nonphysical bully victimization and to identify school-level predictors of bullying. We found significant between-school variation in youth reports of nonphysical bullying, with estimates ranging from 25–58%. We tested school-level indicators of academic performance, emotional well-being, and school safety. After controlling for individual-level covariates and demographic controls, the percentage of students in the school who met with a mental health counselor was significantly associated with bullying (OR = 1.03, 95% CI = 1.01, 1.06). Neither school-level academic performance nor perceptions of school safety were significantly associated with individual reports of bullying. Findings suggest that prevention and intervention programs may benefit from attending to the emotional well-being of students and support the importance of understanding the role of the school environment in shaping student experiences with bullying. PMID:21532943

  20. Quantitative Gait Measurement With Pulse-Doppler Radar for Passive In-Home Gait Assessment

    PubMed Central

    Skubic, Marjorie; Rantz, Marilyn; Cuddihy, Paul E.

    2014-01-01

    In this paper, we propose a pulse-Doppler radar system for in-home gait assessment of older adults. A methodology has been developed to extract gait parameters including walking speed and step time using Doppler radar. The gait parameters have been validated with a Vicon motion capture system in the lab with 13 participants and 158 test runs. The study revealed that for optimal step recognition and walking speed estimation, a dual-radar setup with one radar placed at foot level and the other at torso level is necessary. An excellent absolute agreement with intraclass correlation coefficients of 0.97 was found for step time estimation with the foot-level radar. For walking speed, although both radars show excellent consistency, they both have a systematic offset compared to the ground truth due to the walking direction with respect to the radar beam. The torso-level radar has better performance (9% offset on average) in the speed estimation compared to the foot-level radar (13%–18% offset). Quantitative analysis has been performed to compute the angles causing the systematic error. These lab results demonstrate the capability of the system to be used as a daily gait assessment tool in home environments, useful for fall risk assessment and other health care applications. The system is currently being tested in an unstructured home environment. PMID:24771566
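
    A small illustration of the geometric offset discussed above: the radar measures the radial component of walking speed, i.e. the true speed scaled by cos(angle) between the walking path and the radar beam, so dividing by that cosine removes the systematic underestimate (the angles and speed below are illustrative).

    ```python
    import math

    def corrected_speed(radial_speed, beam_angle_deg):
        """Undo the cosine projection of walking speed onto the radar beam."""
        return radial_speed / math.cos(math.radians(beam_angle_deg))

    true_speed = 1.0   # m/s, illustrative
    for angle in (10, 20, 30):
        measured = true_speed * math.cos(math.radians(angle))
        offset = 100 * (true_speed - measured) / true_speed
        print(f"{angle:2d} deg: measured {measured:.2f} m/s ({offset:.0f}% low), "
              f"corrected {corrected_speed(measured, angle):.2f} m/s")
    ```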

  1. Quantitative gait measurement with pulse-Doppler radar for passive in-home gait assessment.

    PubMed

    Wang, Fang; Skubic, Marjorie; Rantz, Marilyn; Cuddihy, Paul E

    2014-09-01

    In this paper, we propose a pulse-Doppler radar system for in-home gait assessment of older adults. A methodology has been developed to extract gait parameters including walking speed and step time using Doppler radar. The gait parameters have been validated with a Vicon motion capture system in the lab with 13 participants and 158 test runs. The study revealed that for optimal step recognition and walking speed estimation, a dual-radar setup with one radar placed at foot level and the other at torso level is necessary. An excellent absolute agreement with intraclass correlation coefficients of 0.97 was found for step time estimation with the foot-level radar. For walking speed, although both radars show excellent consistency, they both have a systematic offset compared to the ground truth due to the walking direction with respect to the radar beam. The torso-level radar has better performance (9% offset on average) in the speed estimation compared to the foot-level radar (13%-18% offset). Quantitative analysis has been performed to compute the angles causing the systematic error. These lab results demonstrate the capability of the system to be used as a daily gait assessment tool in home environments, useful for fall risk assessment and other health care applications. The system is currently being tested in an unstructured home environment.

  2. Missing the Boat--Impact of Just Missing Identification as a High-Performing School

    ERIC Educational Resources Information Center

    Weiner, Jennie; Donaldson, Morgaen; Dougherty, Shaun M.

    2017-01-01

    This study capitalizes on the performance identification system under the No Child Left Behind waivers to estimate the school-level impact of just missing formal state recognition as a high-performing school. Using a fuzzy regression-discontinuity design and data from the early years of waiver implementation in Rhode Island, we find that, when…

  3. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and a conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.

  4. Long-Term Stable Control of Motor-Imagery BCI by a Locked-In User Through Adaptive Assistance.

    PubMed

    Saeedi, Sareh; Chavarriaga, Ricardo; Millan, Jose Del R

    2017-04-01

    Performance variation is one of the main challenges that BCIs face when used over extended periods of time. Shared control techniques can partially cope with this problem. In this paper, we propose a taxonomy of shared control approaches used for BCIs and review some recent studies in the light of these approaches. We posit that the level of assistance provided to the BCI user should be adjusted in real time in order to enhance BCI reliability over time. This approach has not been extensively studied in the recent BCI literature. In addition, we investigate the effectiveness of providing online adaptive assistance in a motor-imagery BCI for a tetraplegic end-user with an incomplete locked-in syndrome in a longitudinal study lasting 11 months. First, we report a reliable estimation of BCI performance (in terms of command delivery time) using only a 1 s window at the beginning of trials (AUC ≈ 0.8). Second, we demonstrate how adaptive shared control can exploit the output of the performance estimator to adjust the level of assistance online in a BCI game by regulating its speed. In particular, online adaptive assistance was superior to a fixed condition in terms of success rate. Remarkably, the results exhibited stable performance over several months without recalibration of the BCI classifier or the performance estimator.

  5. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of such a common feature is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards a constant and thus improve estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimates whose efficiency is competitive with or higher than that of standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  6. Design and performance of duct acoustic treatment

    NASA Technical Reports Server (NTRS)

    Motsinger, R. E.; Kraft, R. E.

    1991-01-01

    The procedure for designing acoustic treatment panels used to line the walls of aircraft engine ducts and for estimating the resulting suppression of turbofan engine duct noise is discussed. This procedure is intended to be used for estimating noise suppression of existing designs or for designing new acoustic treatment panels and duct configurations to achieve desired suppression levels.

  7. Parametric Analysis of Surveillance Quality and Level and Quality of Intent Information and Their Impact on Conflict Detection Performance

    NASA Technical Reports Server (NTRS)

    Guerreiro, Nelson M.; Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Lewis, Timothy A.

    2016-01-01

    A loss-of-separation (LOS) is said to occur when two aircraft are spatially too close to one another. A LOS is the fundamental unsafe event to be avoided in air traffic management, and conflict detection (CD) is the function that attempts to predict these LOS events. In general, the effectiveness of conflict detection relates to the overall safety and performance of an air traffic management concept. An abstract, parametric analysis was conducted to investigate the impact of surveillance quality, level of intent information, and quality of intent information on conflict detection performance. The data collected in this analysis can be used to estimate conflict detection performance under alternative future scenarios or alternative allocations of the conflict detection function, based on the quality of the surveillance and intent information under those conditions. Alternatively, these data could also be used to estimate the surveillance and intent information quality required to achieve some desired CD performance as part of the design of a new separation assurance system.

  8. A Systematic Evaluation of Different Methods for Calculating Adolescent Vaccination Levels Using Immunization Information System Data

    PubMed Central

    Gowda, Charitha; Dong, Shiming; Potter, Rachel C.; Dombkowski, Kevin J.; Stokley, Shannon

    2013-01-01

    Objective Immunization information systems (IISs) are valuable surveillance tools; however, population relocation may introduce bias when determining immunization coverage. We explored alternative methods for estimating the vaccine-eligible population when calculating adolescent immunization levels using a statewide IIS. Methods We performed a retrospective analysis of the Michigan State Care Improvement Registry (MCIR) for all adolescents aged 11–18 years registered in the MCIR as of October 2010. We explored four methods for determining denominators: (1) including all adolescents with MCIR records, (2) excluding adolescents with out-of-state residence, (3) further excluding those without MCIR activity ≥10 years prior to the evaluation date, and (4) using a denominator based on U.S. Census data. We estimated state- and county-specific coverage levels for four adolescent vaccines. Results We found a 20% difference in estimated vaccination coverage between the most inclusive and restrictive denominator populations. Although there was some variability among the four methods in vaccination at the state level (2%–11%), greater variation occurred at the county level (up to 21%). This variation was substantial enough to potentially impact public health assessments of immunization programs. Generally, vaccines with higher coverage levels had greater absolute variation, as did counties with smaller populations. Conclusion At the county level, using the four denominator calculation methods resulted in substantial differences in estimated adolescent immunization rates that were less apparent when aggregated at the state level. Further research is needed to ascertain the most appropriate method for estimating vaccine coverage levels using IIS data. PMID:24179260

  9. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOEpatents

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over the aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing the second exchange current density.

  10. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  11. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data-that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  12. Improving causal inference with a doubly robust estimator that combines propensity score stratification and weighting.

    PubMed

    Linden, Ariel

    2017-08-01

    When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (ie, a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than existing treatment effect estimators because MMWS has been shown to be more accurate than other propensity score methods when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effect estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score, and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model only one of the two components (eg, the propensity score or the outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
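
    As a rough illustration of the DR-MMWS idea described above, the sketch below stratifies estimated propensity scores into quintiles, forms marginal mean weights within each stratum, and then fits a weighted outcome regression. The function name, the number of strata, the binary 0/1 treatment coding, and the linear outcome model are assumptions for illustration, not the paper's exact implementation.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def dr_mmws_ate(df, treat, outcome, covariates, n_strata=5):
        """Hedged sketch of a doubly robust estimator combining marginal mean
        weighting through stratification (MMWS) with regression adjustment."""
        # 1. Propensity score model: treatment assignment given covariates
        X = sm.add_constant(df[covariates])
        ps = np.asarray(sm.Logit(df[treat], X).fit(disp=0).predict(X))

        # 2. Stratify units on the estimated propensity score (quintiles here)
        strata = pd.qcut(ps, n_strata, labels=False)

        # 3. MMWS weight for treatment z in stratum s: Pr(Z=z) * n_s / n_{z,s}
        p_treated = df[treat].mean()
        treat_vec = df[treat].to_numpy()
        w = np.ones(len(df))
        for s in range(n_strata):
            in_s = strata == s
            n_s = in_s.sum()
            for z, pz in ((1, p_treated), (0, 1.0 - p_treated)):
                cell = in_s & (treat_vec == z)
                if cell.any():
                    w[cell] = pz * n_s / cell.sum()

        # 4. Weighted outcome regression (the regression-adjustment half of the DR pair)
        Xo = sm.add_constant(df[[treat] + covariates])
        fit = sm.WLS(df[outcome], Xo, weights=w).fit()
        return fit.params[treat]  # estimated average treatment effect
    ```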

  13. [A school-level longitudinal study of clinical performance examination scores].

    PubMed

    Park, Jang Hee

    2015-06-01

    This school-level longitudinal study examined 7 years of clinical performance data to determine differences (effects) between students and annual changes within a school and between schools; to examine how much their predictors (characteristics) influenced the variation in student performance; and to calculate estimates of the schools' initial status and growth. A school-level longitudinal model was tested: level 1 (between students), level 2 (annual change within a school), and level 3 (between schools). The study sample comprised students who belonged to the CPX Consortium (n=5,283 for 2005-2008 and n=4,337 for 2009-2011). Although the pattern differed across evaluation domains, performance outcomes were related to large individual-level differences and smaller school-level differences. Physical examination, clinical courtesy, and patient education were strongly influenced by the school effect, whereas patient-physician interaction was not affected much. Student scores are influenced by the school effect (differences), and the predictors explain the variation in differences, depending on the evaluation domain.

  14. Estimating brain connectivity when few data points are available: Perspectives and limitations.

    PubMed

    Antonacci, Yuri; Toppi, Jlenia; Caschera, Stefano; Anzolin, Alessandra; Mattia, Donatella; Astolfi, Laura

    2017-07-01

    Methods based on the use of multivariate autoregressive modeling (MVAR) have proved to be an accurate and flexible tool for the estimation of brain functional connectivity. The multivariate approach, however, implies the use of a model whose complexity (in terms of number of parameters) increases quadratically with the number of signals included in the problem. This can often lead to an underdetermined problem and to the condition of multicollinearity. The aim of this paper is to introduce and test an approach based on Ridge Regression, combined with a modified version of the statistics usually adopted for these methods, to broaden the estimation of brain connectivity to those conditions in which current methods fail due to an insufficient number of data points. We tested the performance of this new approach, in comparison with the classical approach based on ordinary least squares (OLS), by means of a simulation study implementing different ground-truth networks, under different network sizes and different numbers of data points. Simulation results showed that the new approach provides better performance, in terms of accuracy of parameter estimation and false positive/false negative rates, in all conditions with a low ratio of data points to model dimension, and may thus be exploited to estimate and validate connectivity patterns at the single-trial level or when only short data segments are available.
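
    A minimal sketch of the core computation the abstract contrasts with OLS is given below: fitting the MVAR coefficients by ridge regression so that the estimate remains well defined when the number of data points is small relative to the number of parameters. The modified statistics and the simulation framework from the paper are not reproduced; the function name and the use of a single fixed penalty (in practice chosen by cross-validation) are assumptions.

    ```python
    import numpy as np

    def mvar_ridge(data, order, lam):
        """Fit an MVAR(p) model by ridge regression (lam = 0 reduces to OLS).
        data: array of shape (n_samples, n_signals); returns the coefficient
        matrix mapping the stacked past lags to each signal."""
        n, k = data.shape
        # Design matrix: the row for time t holds [y_{t-1}, ..., y_{t-p}] concatenated
        X = np.hstack([data[order - lag : n - lag] for lag in range(1, order + 1)])
        Y = data[order:]
        # Ridge solution: A = (X'X + lam * I)^{-1} X'Y
        A = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
        return A  # shape (k * order, k)
    ```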

  15. A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.

    2006-01-01

    This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. ?? 2006 American Statistical Association and the International Biometric Society.

  16. Discharge estimation for the Upper Brahmaputra River in the Tibetan Plateau using multi-source remote sensing data

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Long, D.; Du, M.; Hong, Y.

    2017-12-01

    River discharge is among the most important hydrological variables of concern to hydrologists, as it links drinking water supply, irrigation, and flood forecasting. Despite its importance, gauging stations are extremely limited across most alpine regions, such as the Tibetan Plateau (TP), known as Asia's water tower. Use of remote sensing combined with partial in situ discharge measurements is a promising way of retrieving river discharge over ungauged or poorly gauged basins. Successful discharge estimation depends largely on accurate water width (area) and water level, but it is challenging to obtain these variables for alpine regions from a single satellite platform due to narrow river channels, complex terrain, and limited observations. Here, we used high-spatial-resolution images from the Landsat series to derive water area, and satellite altimetry (Jason-2) to derive water level, for the Upper Brahmaputra River (UBR) in the TP, where the river is narrow (less than 400 m in most reaches). We performed waveform retracking using a 50% Threshold and Ice-1 Combined (TIC) algorithm developed in this study to obtain accurate water level measurements. Discharge was estimated well using a set of derived formulas, including the power function between water level and discharge and that between water area and discharge, suitable for the triangular cross-section around the Nuxia gauging station in the UBR. Results showed that the power function using Jason-2-derived water levels after waveform retracking performed best, with an overall NSE value of 0.92. The proposed approach for remotely sensed river discharge is effective in the UBR and possibly in other alpine rivers globally.
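
    The final estimation step described above reduces to fitting a rating curve. The sketch below fits a power-law relation between water level and discharge by log-log least squares; the gauging values are hypothetical, and the paper's waveform retracking and area-based formulas are not reproduced.

    ```python
    import numpy as np

    def fit_power_rating_curve(level, discharge):
        """Fit a power-law rating curve Q = a * h**b by log-log least squares."""
        b, log_a = np.polyfit(np.log(level), np.log(discharge), 1)
        return np.exp(log_a), b

    def predict_discharge(a, b, level):
        return a * level ** b

    # Hypothetical gauging data: water level (m) and discharge (m^3/s)
    h = np.array([2.1, 2.6, 3.0, 3.8, 4.5])
    q = np.array([310.0, 520.0, 700.0, 1150.0, 1650.0])
    a, b = fit_power_rating_curve(h, q)
    print(predict_discharge(a, b, 3.4))  # discharge predicted for a 3.4 m water level
    ```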

  17. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  18. LACIE performance predictor final operational capability program description, volume 3

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.

  19. Research notes : developing corridor-level truck travel time estimates and other freight performance measures from archived ITS data.

    DOT National Transportation Integrated Search

    2010-02-01

    This research project demonstrated that it is feasible to use the WIM data to develop long-term corridor performance monitoring of truck travel. From the perspective of a realtime traveler information system, there are too many shortcomings mainl...

  20. Success in everyday physics: The role of personality and academic variables

    NASA Astrophysics Data System (ADS)

    Norvilitis, Jill M.; Reid, Howard M.; Norvilitis, Bret M.

    2002-05-01

    Two studies examined students' intuitive physics ability and characteristics associated with physics competence. In Study 1, although many students did well on a physics quiz, more than 25% of students performed below levels predicted by chance. Better performance on the physics quiz was related to physics grades, highest level of math taken, and students' perceived scholastic competence, but was not related to a number of other hypothesized personality variables. Study 2 further explored personality and academic variables and also examined students' awareness of their own physics ability. Results indicate that the personality variables were again unrelated to ability, but narcissism may be related to subjects' estimates of knowledge. Also, academic variables and how important students think it is to understand the physical world are related to both measured and estimated physics proficiency.

  1. Quantitative comparisons of three automated methods for estimating intracranial volume: A study of 270 longitudinal magnetic resonance images.

    PubMed

    Shang, Xiaoyan; Carlson, Michelle C; Tang, Xiaoying

    2018-04-30

    Total intracranial volume (TIV) is often used as a measure of brain size to correct for individual variability in magnetic resonance imaging (MRI) based morphometric studies. An adjustment of TIV can greatly increase the statistical power of brain morphometry methods. As such, an accurate and precise TIV estimation is of great importance in MRI studies. In this paper, we compared three automated TIV estimation methods (multi-atlas likelihood fusion (MALF), Statistical Parametric Mapping 8 (SPM8) and FreeSurfer (FS)) using longitudinal T1-weighted MR images in a cohort of 70 older participants at elevated sociodemographic risk for Alzheimer's disease. Statistical group comparisons in terms of four different metrics were performed. Furthermore, sex, education level, and intervention status were investigated separately for their impacts on the TIV estimation performance of each method. According to our experimental results, MALF was the least susceptible to atrophy, while SPM8 and FS suffered a loss in precision. In group-wise analysis, MALF was the least sensitive method to group variation, whereas SPM8 was particularly sensitive to sex and FS was unstable with respect to education level. In terms of effectiveness, both MALF and SPM8 delivered a user-friendly performance, while FS was relatively computationally intensive. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
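
    To make the label-fusion step concrete, the sketch below implements the simplest of the compared rules, majority voting: each voxel takes the label assigned by most of the registered CT-derived segmentations. STAPLE, SBA, and SIMPLE replace this rule with weighted or iterative variants, which are not reproduced here; the array shapes and the toy example are assumptions.

    ```python
    import numpy as np

    def majority_vote_fusion(label_maps):
        """Majority-voting label fusion.
        label_maps: array of shape (n_atlases, ...volume dims...) with integer labels."""
        label_maps = np.asarray(label_maps)
        n_labels = label_maps.max() + 1
        # Count votes per label at every voxel, then take the label with the most votes
        votes = np.stack([(label_maps == lab).sum(axis=0) for lab in range(n_labels)])
        return votes.argmax(axis=0)

    # Toy example: three 2x2 "atlas" segmentations, skull = 1, background = 0
    atlases = [np.array([[1, 0], [1, 1]]),
               np.array([[1, 0], [0, 1]]),
               np.array([[1, 1], [0, 1]])]
    print(majority_vote_fusion(atlases))  # [[1 0], [0 1]] by majority rule
    ```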

  3. Safety of telephone triage in general practitioner cooperatives: do triage nurses correctly estimate urgency?

    PubMed

    Giesen, Paul; Ferwerda, Rosa; Tijssen, Roelie; Mokkink, Henk; Drijver, Roeland; van den Bosch, Wil; Grol, Richard

    2007-06-01

    In recent years, there has been a growth in the use of triage nurses to decrease general practitioner (GP) workloads and increase the efficiency of telephone triage. The actual safety of decisions made by triage nurses has not yet been assessed. To investigate whether triage nurses accurately estimate the urgency level of health complaints when using the national telephone guidelines, and to examine the relationship between the performance of triage nurses and their education and training. A cross-sectional, multicentre, observational study employing five mystery (simulated) patients who telephoned triage nurses in four GP cooperatives. The mystery patients played standardised roles. Each role had one of four urgency levels as determined by experts. The triage nurses called were asked to estimate the level of urgency after the contact. This level of urgency was compared with a gold standard. Triage nurses estimated the level of urgency of 69% of the 352 contacts correctly and underestimated the level of urgency of 19% of the contacts. The sensitivity and specificity of the urgency estimates provided by the triage nurses were found to be 0.76 and 0.95, respectively. The positive and negative predictive values of the urgency estimates were 0.83 and 0.93, respectively. A significant correlation was found between correct estimation of urgency and specific training on the use of the guidelines. The educational background (primary or secondary care) of the nurses had no significant relationship with the rate of underestimation. Telephone triage by triage nurses is efficient but possibly not safe, with potentially severe consequences for the patient. An educational programme for triage nurses is recommended. Also, a direct second safety check of all cases by a specially trained GP telephone doctor is advisable.

  4. Safety of telephone triage in general practitioner cooperatives: do triage nurses correctly estimate urgency?

    PubMed Central

    Giesen, Paul; Ferwerda, Rosa; Tijssen, Roelie; Mokkink, Henk; Drijver, Roeland; van den Bosch, Wil; Grol, Richard

    2007-01-01

    Background In recent years, there has been a growth in the use of triage nurses to decrease general practitioner (GP) workloads and increase the efficiency of telephone triage. The actual safety of decisions made by triage nurses has not yet been assessed. Objectives To investigate whether triage nurses accurately estimate the urgency level of health complaints when using the national telephone guidelines, and to examine the relationship between the performance of triage nurses and their education and training. Method A cross‐sectional, multicentre, observational study employing five mystery (simulated) patients who telephoned triage nurses in four GP cooperatives. The mystery patients played standardised roles. Each role had one of four urgency levels as determined by experts. The triage nurses called were asked to estimate the level of urgency after the contact. This level of urgency was compared with a gold standard. Results Triage nurses estimated the level of urgency of 69% of the 352 contacts correctly and underestimated the level of urgency of 19% of the contacts. The sensitivity and specificity of the urgency estimates provided by the triage nurses were found to be 0.76 and 0.95, respectively. The positive and negative predictive values of the urgency estimates were 0.83 and 0.93, respectively. A significant correlation was found between correct estimation of urgency and specific training on the use of the guidelines. The educational background (primary or secondary care) of the nurses had no significant relationship with the rate of underestimation. Conclusion Telephone triage by triage nurses is efficient but possibly not safe, with potentially severe consequences for the patient. An educational programme for triage nurses is recommended. Also, a direct second safety check of all cases by a specially trained GP telephone doctor is advisable. PMID:17545343
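
    The accuracy figures quoted in both versions of this abstract follow from a standard two-by-two classification table once calls are dichotomised into urgent and non-urgent. The sketch below shows how sensitivity, specificity, PPV, and NPV are computed; the counts in the example are hypothetical, since the study's raw table is not given in the abstract.

    ```python
    def triage_metrics(tp, fp, fn, tn):
        """Accuracy metrics for urgency estimates, with 'urgent' as the positive class."""
        sensitivity = tp / (tp + fn)   # truly urgent calls rated urgent
        specificity = tn / (tn + fp)   # truly non-urgent calls rated non-urgent
        ppv = tp / (tp + fp)           # calls rated urgent that were truly urgent
        npv = tn / (tn + fn)           # calls rated non-urgent that were truly non-urgent
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts, for illustration only
    print(triage_metrics(tp=38, fp=10, fn=12, tn=190))
    ```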

  5. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

    A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. The operating parameters, such as power density and flow rates, and design parameters, such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital costs. Detailed cost data are obtained from various vendors to calculate cost estimates for present, near-term and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management systems development for redox flow batteries. The main drivers for cost reduction for various chemistries are identified as a function of the energy-to-power ratio of the storage system. Levelized cost analysis further guides the suitability of various chemistries for different applications.

  6. Addressing criticisms of existing predictive bias research: cognitive ability test scores still overpredict African Americans' job performance.

    PubMed

    Berry, Christopher M; Zhao, Peng

    2015-01-01

    Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.

  7. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
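
    The sketch below shows the generic empirical Bayes shrinkage step implied by the abstract: each subject-level connectivity estimate is pulled toward the group mean in proportion to how much of the observed variance looks like noise. The measurement-error model the paper uses to obtain the variance components, and the ICC_MSE reliability measure, are not reproduced; the moment-based variance split used here is an assumption.

    ```python
    import numpy as np

    def eb_shrink_fc(subject_fc, noise_var):
        """Shrink subject-level FC toward the group mean.
        subject_fc: (n_subjects, n_connections) raw FC estimates.
        noise_var:  (n_subjects, n_connections) within-subject sampling variances."""
        group_mean = subject_fc.mean(axis=0)
        # Between-subject (signal) variance per connection: observed variance minus average noise
        total_var = subject_fc.var(axis=0, ddof=1)
        signal_var = np.clip(total_var - noise_var.mean(axis=0), 0, None)
        # Shrinkage factor: fraction of observed variance attributable to true signal
        lam = signal_var / (signal_var + noise_var)
        return lam * subject_fc + (1 - lam) * group_mean
    ```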

  8. Performance of a large building rainwater harvesting system.

    PubMed

    Ward, S; Memon, F A; Butler, D

    2012-10-15

    Rainwater harvesting is increasingly becoming an integral part of the sustainable water management toolkit. Despite a plethora of studies modelling the feasibility of the utilisation of rainwater harvesting (RWH) systems in particular contexts, there remains a significant gap in knowledge in relation to detailed empirical assessments of performance. Domestic systems have been investigated to a limited degree in the literature, including in the UK, but there are few recent longitudinal studies of larger non-domestic systems. Additionally, there are few studies comparing estimated and actual performance. This paper presents the results of a longitudinal empirical performance assessment of a non-domestic RWH system located in an office building in the UK. Furthermore, it compares actual performance with the estimated performance based on two methods recommended by the British Standards Institute - the Intermediate (simple calculations) and Detailed (simulation-based) Approaches. Results highlight that the average measured water saving efficiency (amount of mains water saved) of the office-based RWH system was 87% across an 8-month period, due to the system being over-sized for the actual occupancy level. Consequently, a similar level of performance could have been achieved using a smaller-sized tank. Estimated cost savings resulted in capital payback periods of 11 and 6 years for the actual over-sized tank and the smaller optimised tank, respectively. However, more detailed cost data on maintenance and operation is required to perform whole life cost analyses. These findings indicate that office-scale RWH systems potentially offer significant water and cost savings. They also emphasise the importance of monitoring data and that a transition to the use of Detailed Approaches (particularly in the UK) is required to (a) minimise over-sizing of storage tanks and (b) build confidence in RWH system performance. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. An Optimal Dietary Zinc Level of Brown-Egg Laying Hens Fed a Corn-Soybean Meal Diet.

    PubMed

    Qin, Shizhen; Lu, Lin; Zhang, Xichun; Liao, Xiudong; Zhang, Liyang; Guo, Yanli; Luo, Xugang

    2017-06-01

    An experiment was conducted to estimate the optimal dietary zinc (Zn) level of brown-egg laying hens fed a corn-soybean meal diet from 20 to 40 weeks of age. A total of 120 20-week-old Beijing Red commercial laying hens were randomly allotted by bodyweight to one of five treatments with six replicates of four birds each in a completely randomized design, and fed a Zn-unsupplemented corn-soybean meal basal diet containing 27.95 mg Zn/kg by analysis and the basal diets supplemented with 30, 60, 90, or 120 mg Zn/kg as Zn sulfate (reagent grade ZnSO4·7H2O) for a duration of 20 weeks. Laying performance, egg quality, tissue Zn concentrations, and activities of serum alkaline phosphatase (AKP) and liver copper-Zn superoxide dismutase (CuZnSOD) were measured. Regression analyses were performed to estimate an optimal dietary Zn level whenever a significant quadratic response (P < 0.05) was observed. Tibia Zn concentration (P = 0.002) and serum AKP activity (P = 0.010) showed significant quadratic responses to dietary supplemental Zn levels. The estimates of dietary Zn requirements for brown-egg laying hens from 20 to 40 weeks of age were 71.95 and 64.63 mg/kg for tibia Zn concentration and serum AKP activity, respectively. The results from this study indicate that the tibia Zn might be a more suitable and reliable parameter for Zn requirement estimation, and the optimal dietary Zn level would be about 72 mg/kg for brown-egg laying hens fed a corn-soybean meal diet from 20 to 40 weeks of age.
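
    As a sketch of the regression step described, the code below fits a quadratic dose-response curve and reports the dietary Zn level at its maximum. Requirement studies also use broken-line or percentage-of-plateau criteria, and the paper's exact criterion and data are not reproduced; the group means in the example are purely illustrative.

    ```python
    import numpy as np

    def optimal_level_quadratic(dietary_zn, response):
        """Fit a quadratic dose-response curve and return the dietary level
        at its maximum, -b1 / (2 * b2)."""
        b2, b1, b0 = np.polyfit(dietary_zn, response, 2)
        return -b1 / (2 * b2)

    # Illustrative group means: basal diet of 27.95 mg Zn/kg plus 0-120 mg/kg added Zn
    zn_total = 27.95 + np.array([0, 30, 60, 90, 120])
    tibia_zn = np.array([210.0, 300.0, 315.0, 305.0, 285.0])  # hypothetical tibia Zn (mg/kg)
    print(optimal_level_quadratic(zn_total, tibia_zn))
    ```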

  10. Effect of Risk of Bias on the Effect Size of Meta-Analytic Estimates in Randomized Controlled Trials in Periodontology and Implant Dentistry

    PubMed Central

    Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang

    2015-01-01

    Background Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. Objective The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. Methods A search for Cochrane systematic reviews (SRs), including meta-analyses of RCTs published in periodontology and implant dentistry fields, was performed in the Cochrane Library in September 2014. Random-effect meta-analyses were performed by grouping RCTs with different levels of ROBs in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Results Of the 24 initially screened SRs, 21 SRs were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of the ROB level with the size of treatment effect estimates, although a trend for inflated estimates was observed in domains with unclear ROBs. Conclusion In this sample of RCTs, high and (mainly) unclear risks of selection and detection biases did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association. PMID:26422698

  11. SU-F-207-16: CT Protocols Optimization Using Model Observer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, H; Fan, J; Kupinski, M

    2015-06-15

    Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of estimating the size and contrast of different iodine concentration rods inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but use different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head size phantom and a body size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, which output the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulation images with four different sizes and five different contrasts of iodine objects. For each type of object, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which could be applied to estimate the size and the contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using a channelized scanning linear observer to study contrast and size estimation performance from different CT protocols. Different protocols at the same CTDIvol setting could result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.

  12. Validation of the Child Premorbid Intelligence Estimate method to predict premorbid Wechsler Intelligence Scale for Children-Fourth Edition Full Scale IQ among children with brain injury.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H; Suarez, Mariann; Brickell, Tracey A

    2008-12-01

    Determination of neuropsychological impairment involves contrasting obtained performances with a comparison standard, which is often an estimate of premorbid IQ. M. R. Schoenberg, R. T. Lange, T. A. Brickell, and D. H. Saklofske (2007) proposed the Child Premorbid Intelligence Estimate (CPIE) to predict premorbid Full Scale IQ (FSIQ) using the Wechsler Intelligence Scale for Children-4th Edition (WISC-IV; Wechsler, 2003). The CPIE includes 12 algorithms to predict FSIQ, 1 using demographic variables and 11 algorithms combining WISC-IV subtest raw scores with demographic variables. The CPIE was applied to a sample of children with acquired traumatic brain injury (TBI sample; n = 40) and a healthy demographically matched sample (n = 40). Paired-samples t tests found estimated premorbid FSIQ differed from obtained FSIQ when applied to the TBI sample (ps .02). The demographic only algorithm performed well at a group level, but estimates were restricted in range. Algorithms combining single subtest scores with demographics performed adequately. Results support the clinical application of the CPIE algorithms. However, limitations to estimating individual premorbid ability, including statistical and developmental factors, must be considered. (c) 2008 APA, all rights reserved.

  13. The Performance of ML, GLS, and WLS Estimation in Structural Equation Modeling under Conditions of Misspecification and Nonnormality.

    ERIC Educational Resources Information Center

    Olsson, Ulf Henning; Foss, Tron; Troye, Sigurd V.; Howell, Roy D.

    2000-01-01

    Used simulation to demonstrate how the choice of estimation method affects indexes of fit and parameter bias for different sample sizes when nested models vary in terms of specification error and the data demonstrate different levels of kurtosis. Discusses results for maximum likelihood (ML), generalized least squares (GLS), and weighted least…

  14. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large-scale aircraft, which usually involve a complex array of expensive high-accuracy sensors, have been well known and understood for decades. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs because small UAVs employ limited sensor suites to reduce cost and are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings-level ascent. It is shown that in zero wind, the first method produces significant steady-state attitude errors in both a coordinated turn and in a wings-level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to exhibit no steady-state error inherent to its design in the tested scenarios. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from a lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.

  15. Use of spectral analysis with iterative filter for voxelwise determination of regional rates of cerebral protein synthesis with L-[1-11C]leucine PET.

    PubMed

    Veronese, Mattia; Schmidt, Kathleen C; Smith, Carolyn Beebe; Bertoldo, Alessandra

    2012-06-01

    A spectral analysis approach was used to estimate kinetic parameters of the L-[1-11C]leucine positron emission tomography (PET) method and regional rates of cerebral protein synthesis (rCPS) on a voxel-by-voxel basis. Spectral analysis applies to both heterogeneous and homogeneous tissues; it does not require prior assumptions concerning number of tissue compartments. Parameters estimated with spectral analysis can be strongly affected by noise, but numerical filters improve estimation performance. Spectral analysis with iterative filter (SAIF) was originally developed to improve estimation of leucine kinetic parameters and rCPS in region-of-interest (ROI) data analyses. In the present study, we optimized SAIF for application at the voxel level. In measured L-[1-11C]leucine PET data, voxel-level SAIF parameter estimates averaged over all voxels within a ROI (mean voxel-SAIF) generally agreed well with corresponding estimates derived by applying the originally developed SAIF to ROI time-activity curves (ROI-SAIF). Region-of-interest-SAIF and mean voxel-SAIF estimates of rCPS were highly correlated. Simulations showed that mean voxel-SAIF rCPS estimates were less biased and less variable than ROI-SAIF estimates in the whole brain and cortex; biases were similar in white matter. We conclude that estimation of rCPS with SAIF is improved when the method is applied at the voxel level rather than in ROI analysis.

  16. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
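
    The aggregation rule described for the PFT- and biome-level parameterizations is a cover-weighted average of the species-level, field-based estimates; a minimal sketch is below. The parameter values and percent covers are hypothetical.

    ```python
    import numpy as np

    def pft_parameter(species_params, percent_cover):
        """Cover-weighted average of species-level parameter estimates,
        forming one PFT- or biome-level parameter."""
        w = np.asarray(percent_cover, dtype=float)
        return np.average(species_params, weights=w / w.sum())

    # Hypothetical parameter values for three species grouped into one PFT
    print(pft_parameter([8.2, 6.5, 7.1], percent_cover=[55, 30, 15]))
    ```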

  17. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
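
    The report's exact procedures are not reproduced here, but the sketch below shows one standard way to estimate a component failure rate with a chi-square confidence interval under an exponential (constant-rate) assumption, which is the usual form such test analyses take; the counts and hours in the example are hypothetical.

    ```python
    from scipy import stats

    def failure_rate_ci(failures, total_time, conf=0.90):
        """Point estimate and chi-square confidence interval for a failure rate.
        failures: observed failure count; total_time: accumulated test hours."""
        rate = failures / total_time  # failures per hour
        alpha = 1 - conf
        lower = stats.chi2.ppf(alpha / 2, 2 * failures) / (2 * total_time)
        upper = stats.chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / (2 * total_time)
        return rate, (lower, upper)

    # Example: 4 failures observed over 10,000 component-hours of thermal-vacuum testing
    print(failure_rate_ci(4, 10_000))
    ```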

  18. Computerized image analysis: estimation of breast density on mammograms

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    2000-06-01

    An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
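
    The last stage described above amounts to thresholding the segmented breast region and reporting the dense fraction; a minimal sketch is given below. The boundary tracking, dynamic-range compression, and rule-based threshold selection from the paper are not reproduced, and the function signature is an assumption.

    ```python
    import numpy as np

    def percent_dense(image, breast_mask, threshold):
        """Percent breast density: dense-tissue pixels (above the chosen grey-level
        threshold) as a percentage of all pixels inside the breast region."""
        breast_pixels = image[breast_mask]
        dense = (breast_pixels > threshold).sum()
        return 100.0 * dense / breast_pixels.size
    ```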

  19. Electrical stimulation therapy for dysphagia: a follow-up survey of USA dysphagia practitioners.

    PubMed

    Barikroo, Ali; Carnaby, Giselle; Crary, Michael

    2017-12-01

    The aim of this study was to compare current application, practice patterns, clinical outcomes, and professional attitudes of dysphagia practitioners regarding electrical stimulation (e-stim) therapy with similar data obtained in 2005. A web-based survey was posted on the American Speech-Language-Hearing Association Special Interest Group 13 webpage for 1 month. A total of 271 survey responses were analyzed and descriptively compared with the archived responses from the 2005 survey. Results suggested that e-stim application increased by 47% among dysphagia practitioners over the last 10 years. The frequency of weekly e-stim therapy sessions decreased while the reported total number of treatment sessions increased between the two surveys. Advancement in oral diet was the most commonly reported improvement in both surveys. Overall, reported satisfaction levels of clinicians and patients regarding e-stim therapy decreased. Still, the majority of e-stim practitioners continue to recommend this treatment modality to other dysphagia practitioners. Results from the novel items in the current survey suggested that motor level e-stim (e.g. higher amplitude) is most commonly used during dysphagia therapy with no preferred electrode placement. Furthermore, the majority of clinicians reported high levels of self-confidence regarding their ability to perform e-stim. The results of this survey highlight ongoing changes in application, practice patterns, clinical outcomes, and professional attitudes associated with e-stim therapy among dysphagia practitioners.

  20. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share return in Malaysia is studied. The monthly, quarterly, half yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness of fit test is used to assess the quality of convergence of these monthly, quarterly, half yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that all maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend; thus, a non-stationary model is also fitted. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness of fit test shows that the monthly, quarterly, half yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maxima converge better to the GEV distribution, especially if longer records are available. The return level, which is the level (in this study, the return amount) that is expected to be exceeded, on average, once every t time periods, starts to appear in the confidence interval at T = 50 for the quarterly, half yearly and yearly maxima.
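
    For the stationary case, the fitting and return-level steps described above can be sketched as below: a maximum-likelihood GEV fit to block maxima followed by the (1 - 1/T) quantile. The non-stationary model with a time-varying location parameter, the L-moment starting values, and the hypothesis tests are not reproduced; the yearly maxima in the example are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def gev_return_level(maxima, period):
        """Fit a stationary GEV to block maxima by maximum likelihood and
        return the T-period return level (the (1 - 1/T) quantile)."""
        shape, loc, scale = genextreme.fit(maxima)  # MLE fit
        return genextreme.ppf(1 - 1.0 / period, shape, loc=loc, scale=scale)

    # Hypothetical yearly maximum returns
    yearly_max = np.array([0.08, 0.12, 0.10, 0.21, 0.15, 0.09, 0.18, 0.25, 0.11, 0.14])
    print(gev_return_level(yearly_max, period=50))  # 50-period return level
    ```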

  1. Impact of vaginal-rectal ultrasound examinations with covered and low-level disinfected transducers on infectious transmissions in France.

    PubMed

    Leroy, Sandrine; M'Zali, Fatima; Kann, Michael; Weber, David J; Smith, David D

    2014-12-01

    The risk of cross-infection from shared ultrasound probes in endorectal and vaginal ultrasonography due to low-level disinfection (LLD) is difficult to estimate because potential infections are also sexually transmitted diseases, and route of contamination is often difficult to establish. In France, the widely used standard for prevention of infections is through the use of probe covers and LLD of the ultrasound transducer by disinfectant wipes. We performed an in silico simulation based on a systematic review to estimate the number of patients infected after endorectal or vaginal ultrasonography examination using LLD for probes. We performed a stochastic Monte Carlo computer simulation to produce hypothetical cohorts for a population of 4 million annual ultrasound examinations performed in France, and we estimated the number of infected patients for human immunodeficiency virus (HIV), herpes simplex virus, hepatitis B virus, hepatitis C virus, human papilloma virus, cytomegalovirus, and Chlamydia trachomatis. Modeling parameters were estimated by meta-analysis when possible. The probability of infection from a contaminated probe ranged from 1% to 6%, depending on the pathogen. For cases of HIV infection, this would result in approximately 60 infected patients per year. For other common viral infections, the number of new cases ranged from 1,600 to 15,000 per year that could be attributable directly to ultrasound and LLD procedures. Our simulation results showed that, despite cumulative use of probe cover and LLD, there were still some cases of de novo infection that may be attributable to ultrasound procedures. These cases are preventable by reviewing the currently used LLD and/or upgrading LLD to high-level disinfection, as recommended by the US Centers for Disease Control and Prevention.
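
    A minimal sketch of the kind of stochastic simulation described is shown below: for each hypothetical cohort, the number of examinations performed with a contaminated probe is drawn, and transmissions are then drawn among those examinations. The contamination and transmission probabilities in the example are illustrative placeholders, not the meta-analytic values the paper used for each pathogen.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_infections(n_exams, p_contaminated, p_transmission, n_runs=10_000):
        """Monte Carlo estimate of the annual number of infections attributable
        to examinations performed with a contaminated, low-level-disinfected probe."""
        contaminated = rng.binomial(n_exams, p_contaminated, size=n_runs)
        infected = rng.binomial(contaminated, p_transmission)
        return infected.mean(), np.percentile(infected, [2.5, 97.5])

    # 4 million annual examinations; illustrative contamination and transmission rates
    print(simulate_infections(4_000_000, p_contaminated=1e-3, p_transmission=0.02))
    ```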

  2. Orbit transfer rocket engine integrated control and health monitoring system technology readiness assessment

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.; Collamore, F. N.; Gage, M. L.; Morgan, D. B.; Thomas, E. R.

    1992-01-01

    The objectives of this task were to: (1) estimate the technology readiness of an integrated control and health monitoring (ICHM) system for the Aerojet 7500 lbF Orbit Transfer Vehicle engine preliminary design, assuming space-based operations; and (2) estimate the remaining cost to advance this technology to a NASA-defined 'readiness level 6' by 1996, wherein the technology has been demonstrated with a system validation model in a simulated environment. The work was accomplished through the conduct of four subtasks. In Subtask 1, the minimally required functions for the control and monitoring system were specified. The elements required to perform these functions were specified in Subtask 2. In Subtask 3, the technology readiness level of each element was assessed. Finally, in Subtask 4, the development cost and schedule requirements were estimated for bringing each element to 'readiness level 6'.

  3. Radar stage uncertainty

    USGS Publications Warehouse

    Fulford, J.M.; Davies, W.J.

    2005-01-01

    The U.S. Geological Survey is investigating the performance of radars used for stage (or water-level) measurement. This paper presents a comparison of estimated uncertainties and data for radar water-level measurements with float, bubbler, and wire weight water-level measurements. The radar sensor was also temperature-tested in a laboratory. The uncertainty estimates indicate that radar measurements are more accurate than uncorrected pressure sensors at higher water stages, but are less accurate than pressure sensors at low stages. Field data at two sites indicate that radar sensors may have a small negative bias. Comparison of field radar measurements with wire weight measurements found that the radar tends to measure slightly lower values as stage increases. Copyright ASCE 2005.

  4. Validation of Lower Tier Exposure Tools Used for REACH: Comparison of Tools Estimates With Available Exposure Measurements.

    PubMed

    van Tongeren, Martie; Lamb, Judith; Cherrie, John W; MacCalman, Laura; Basinas, Ioannis; Hesse, Susanne

    2017-10-01

    Tier 1 exposure tools recommended for use under REACH are designed to easily identify situations that may pose a risk to health through conservative exposure predictions. However, no comprehensive evaluation of the performance of the lower tier tools has previously been carried out. The ETEAM project aimed to evaluate several lower tier exposure tools (ECETOC TRA, MEASE, and EMKG-EXPO-TOOL) as well as one higher tier tool (STOFFENMANAGER®). This paper describes the results of the external validation of tool estimates using measurement data. Measurement data were collected from a range of providers, both in Europe and the United States, together with contextual information. Both individual and aggregated measurement data were obtained. The contextual information was coded into the tools to obtain exposure estimates. Results were expressed as the percentage of measurements exceeding the tool estimates and presented by exposure category (non-volatile liquid, volatile liquid, metal abrasion, metal processing, and powder handling). We also explored tool performance for different process activities as well as different scenario conditions and exposure levels. In total, results from nearly 4000 measurements were obtained, the majority for the use of volatile liquids and powder handling. The comparisons of measurement results with tool estimates suggest that the tools are generally conservative. However, the tools were more conservative when estimating exposure from powder handling than from volatile liquids and other exposure categories. In addition, the results suggested that tool performance varies between process activities and scenario conditions. For example, the tools were less conservative when estimating exposure during activities involving tabletting, compression, extrusion, pelletisation, and granulation (common process activity PROC14) and transfer of a substance or mixture (charging and discharging) at non-dedicated facilities (PROC8a; powder handling only). With the exception of STOFFENMANAGER® (for estimating exposure during powder handling), the tools were less conservative for scenarios with lower estimated exposure levels. This is the most comprehensive evaluation of the performance of REACH exposure tools carried out to date. The results show that, although generally conservative, the tools may not always achieve the performance specified in the REACH guidance, i.e. using the 75th or 90th percentile of the exposure distribution for the risk characterisation. Ongoing development, adjustment, and recalibration of the tools with new measurement data are essential to ensure adequate characterisation and control of worker exposure to hazardous substances. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  5. Intelligent quotient estimation of mental retarded people from different psychometric instruments using artificial neural networks.

    PubMed

    Di Nuovo, Alessandro G; Di Nuovo, Santo; Buono, Serafino

    2012-02-01

    The estimation of a person's intelligence quotient (IQ) by means of psychometric tests is indispensable in the application of psychological assessment to several fields. When complex tests such as the Wechsler scales, which are the most commonly used and universally recognized instrument for diagnosing degrees of retardation, are not applicable, it is necessary to use other psycho-diagnostic tools better suited to the subject's specific condition. To ensure a homogeneous diagnosis, however, it is necessary to reach a common metric; thus, the aim of our work is to build models able to estimate the Wechsler IQ accurately and reliably, starting from different psycho-diagnostic tools. Four different psychometric tests (Leiter international performance scale; coloured progressive matrices test; the mental development scale; psycho educational profile), along with the Wechsler scale, were administered to a group of 40 mentally retarded subjects, with various pathologies, and control persons. The resulting database is used to evaluate Wechsler IQ estimation models starting from the scores obtained in the other tests. Five modelling methods are employed to build the estimator: two statistical methods and three machine learning methods belonging to the family of artificial neural networks (ANNs). Several error metrics for the estimated IQ and for retardation level classification are defined to compare the performance of the various models in univariate and multivariate analyses. Eight empirical studies show that, after ten-fold cross-validation, the best average estimation error is 3.37 IQ points and the mental retardation level classification error is 7.5%. Furthermore, our experiments show the superior performance of ANN methods over statistical regression methods: in all cases considered, ANN models have the lowest estimation error (from 0.12 to 0.9 IQ points) and the lowest classification error (from 2.5% to 10%). Since the estimation performance is better than the confidence interval of the Wechsler scales (five IQ points), we consider the models accurate and reliable enough to support clinical diagnosis. A computer program based on the results of our work is currently used in a clinical center, and empirical trials confirm its validity. Furthermore, positive results from our multivariate studies suggest new approaches for clinicians. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  7. Comparative evaluation of direct plating and most probable number for enumeration of low levels of Listeria monocytogenes in naturally contaminated ice cream products.

    PubMed

    Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru

    2017-01-16

    A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, a paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). A probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples, because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
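
    As an illustration of the MPN side of such a comparison, the sketch below computes a maximum-likelihood MPN from a serial-dilution tube scheme under the usual Poisson assumption; the volumes, tube counts, and positive counts are invented and do not reproduce the scheme used in the ice cream study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

volumes = np.array([10.0, 1.0, 0.1])   # grams of sample per tube at each dilution (illustrative)
n_tubes = np.array([3, 3, 3])          # tubes inoculated per dilution
positives = np.array([3, 1, 0])        # observed positive tubes per dilution

def neg_log_lik(log_lambda):
    lam = np.exp(log_lambda)                      # organisms per gram
    p = 1.0 - np.exp(-lam * volumes)              # P(tube positive) under the Poisson assumption
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(positives * np.log(p) + (n_tubes - positives) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(-10, 5), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.3f} organisms per gram")
```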

  8. Dynamic range of frontoparietal functional modulation is associated with working memory capacity limitations in older adults.

    PubMed

    Hakun, Jonathan G; Johnson, Nathan F

    2017-11-01

    Older adults tend to over-activate regions throughout the frontoparietal cortices and exhibit a reduced range of functional modulation during working memory (WM) task performance compared to younger adults. While recent evidence suggests that reduced functional modulation is associated with poorer task performance, it remains unclear whether a reduced range of modulation is indicative of general WM capacity limitations. In the current study, we examined whether the range of functional modulation observed over multiple levels of WM task difficulty (N-Back) predicts in-scanner task performance and out-of-scanner psychometric estimates of WM capacity. Within our sample (60-77 years of age), age was negatively associated with frontoparietal modulation range. Individuals with greater modulation range exhibited more accurate N-Back performance. In addition, despite a lack of significant relationships between N-Back and complex span task performance, range of frontoparietal modulation during the N-Back significantly predicted domain-general estimates of WM capacity. Consistent with previous cross-sectional findings, older individuals with less modulation range exhibited greater activation at the lowest level of task difficulty but less activation at the highest levels of task difficulty. Our results are largely consistent with existing theories of neurocognitive aging (e.g. CRUNCH) but focus attention on the dynamic range of functional modulation as a novel marker of WM capacity limitations in older adults. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. On predicting monitoring system effectiveness

    NASA Astrophysics Data System (ADS)

    Cappello, Carlo; Sigurdardottir, Dorotea; Glisic, Branko; Zonta, Daniele; Pozzi, Matteo

    2015-03-01

    While the objective of structural design is to achieve stability with an appropriate level of reliability, the design of systems for structural health monitoring is performed to identify a configuration that enables acquisition of data with an appropriate level of accuracy in order to understand the performance of a structure or its condition state. However, a rational standardized approach for monitoring system design is not fully available. Hence, when engineers design a monitoring system, their approach is often heuristic, with performance evaluation based on experience rather than on quantitative analysis. In this contribution, we propose a probabilistic model for the estimation of monitoring system effectiveness based on information available in the prior condition, i.e. before acquiring empirical data. The presented model is developed considering the analogy between structural design and monitoring system design. We assume that the effectiveness can be evaluated based on the prediction of the posterior variance or covariance matrix of the state parameters, which we assume to be defined in a continuous space. Since empirical measurements are not available in the prior condition, the estimation of the posterior variance or covariance matrix is performed by treating the measurements as a stochastic variable. Moreover, the model takes into account the effects of nuisance parameters, which are stochastic parameters that affect the observations but cannot be estimated using monitoring data. Finally, we present an application of the proposed model to a real structure. The results show how the model enables engineers to predict whether a sensor configuration satisfies the required performance.
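
    A minimal sketch of this prior-condition reasoning, under a simplifying linear-Gaussian assumption not stated in the paper: for y = Hx + e, the posterior covariance of the state parameters depends only on the prior covariance, the sensor configuration H, and the noise covariance R, so competing sensor layouts can be scored before any measurement is acquired.

```python
import numpy as np

P0 = np.diag([4.0, 4.0])                 # prior covariance of two state parameters (illustrative)
R = np.diag([0.5**2, 0.5**2, 0.5**2])    # measurement-noise covariance for three sensors

def posterior_cov(H, P0, R):
    """Predicted posterior covariance for sensor matrix H; no measured values are needed."""
    return np.linalg.inv(np.linalg.inv(P0) + H.T @ np.linalg.inv(R) @ H)

# Two hypothetical sensor layouts (rows = sensors, columns = state parameters)
H_a = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
H_b = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

for name, H in [("layout A", H_a), ("layout B", H_b)]:
    P = posterior_cov(H, P0, R)
    print(name, "predicted posterior variances:", np.diag(P).round(4))
```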

  10. Comparison of two recent models for estimating actual evapotranspiration using only regularly recorded data

    NASA Astrophysics Data System (ADS)

    Ali, M. F.; Mawdsley, J. A.

    1987-09-01

    An advection-aridity model for estimating actual evapotranspiration (ET) is tested with over 700 days of lysimeter evapotranspiration and meteorological data from barley, turf, and rye-grass at three sites in the U.K. The performance of the model is also compared with that of the API model proposed by Mawdsley and Ali (1979). The test shows that the advection-aridity model overestimates nonpotential ET and tends to underestimate potential ET, but when tested with potential and nonpotential data together, the two tendencies appear to cancel each other. On a daily basis the performance of this model is found to be of the same order as that of the API model: correlation coefficients between the model estimates and the lysimeter data were 0.62 and 0.68, respectively. For periods greater than one day, the performance of both models generally improves.

  11. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  12. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997

  13. Applicability of models to estimate traffic noise for urban roads.

    PubMed

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count, and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of lane width and the distance from the noise meter to the lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimation on urban roads under continuous-flow conditions. Key issues in applying such models to urban roads are highlighted.

  14. Serum Iron Levels and the Risk of Parkinson Disease: A Mendelian Randomization Study

    PubMed Central

    Gögele, Martin; Lill, Christina M.; Bertram, Lars; Do, Chuong B.; Eriksson, Nicholas; Foroud, Tatiana; Myers, Richard H.; Nalls, Michael; Keller, Margaux F.; Benyamin, Beben; Whitfield, John B.; Pramstaller, Peter P.; Hicks, Andrew A.; Thompson, John R.; Minelli, Cosetta

    2013-01-01

    Background Although levels of iron are known to be increased in the brains of patients with Parkinson disease (PD), epidemiological evidence on a possible effect of iron blood levels on PD risk is inconclusive, with effects reported in opposite directions. Epidemiological studies suffer from problems of confounding and reverse causation, and mendelian randomization (MR) represents an alternative approach to provide unconfounded estimates of the effects of biomarkers on disease. We performed a MR study where genes known to modify iron levels were used as instruments to estimate the effect of iron on PD risk, based on estimates of the genetic effects on both iron and PD obtained from the largest sample meta-analyzed to date. Methods and Findings We used as instrumental variables three genetic variants influencing iron levels, HFE rs1800562, HFE rs1799945, and TMPRSS6 rs855791. Estimates of their effect on serum iron were based on a recent genome-wide meta-analysis of 21,567 individuals, while estimates of their effect on PD risk were obtained through meta-analysis of genome-wide and candidate gene studies with 20,809 PD cases and 88,892 controls. Separate MR estimates of the effect of iron on PD were obtained for each variant and pooled by meta-analysis. We investigated heterogeneity across the three estimates as an indication of possible pleiotropy and found no evidence of it. The combined MR estimate showed a statistically significant protective effect of iron, with a relative risk reduction for PD of 3% (95% CI 1%–6%; p = 0.001) per 10 µg/dl increase in serum iron. Conclusions Our study suggests that increased iron levels are causally associated with a decreased risk of developing PD. Further studies are needed to understand the pathophysiological mechanism of action of serum iron on PD risk before recommendations can be made. PMID:23750121
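
    A minimal sketch of the pooling step described above, assuming per-variant Wald ratios combined by fixed-effect inverse-variance weighting; all effect sizes and standard errors below are placeholders, not the study's estimates.

```python
import numpy as np

# Per-variant effects on the exposure (serum iron) and the outcome (PD log-odds); placeholders
beta_iron = np.array([0.30, 0.20, 0.18])
se_iron = np.array([0.02, 0.02, 0.01])
beta_pd = np.array([-0.010, -0.006, -0.005])
se_pd = np.array([0.004, 0.003, 0.003])

# Wald ratio per variant with a first-order standard error (ignores uncertainty in beta_iron,
# a common approximation when the instrument-exposure association is strong)
ratio = beta_pd / beta_iron
ratio_se = se_pd / np.abs(beta_iron)

# Fixed-effect inverse-variance-weighted pooling across variants
w = 1.0 / ratio_se**2
pooled = np.sum(w * ratio) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled causal effect (PD log-odds per unit iron): {pooled:.4f} +/- {1.96 * pooled_se:.4f}")
```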

  15. Power optimization of digital baseband WCDMA receiver components on algorithmic and architectural level

    NASA Astrophysics Data System (ADS)

    Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.

    2008-05-01

    High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area, and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver that has been optimized for power consumption, with a focus on the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer, and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase in performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches, several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models regarding their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer, the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared using the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters such as filter size and oversampling ratio to minimize the power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work uses SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow. Thereby, algorithms can be compared in different operating modes, including the effects of control units. Here, an algorithm with higher peak complexity and power consumption but more flexibility showed lower consumption in normal operating modes than the algorithm optimized for peak performance.

  16. A Remote Sensing-Derived Corn Yield Assessment Model

    NASA Astrophysics Data System (ADS)

    Shrestha, Ranjay Man

    Agricultural studies and food security have become critical research topics due to continuous growth in human population and simultaneous shrinkage in agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessments and food productivity are still necessary to fulfill the constantly increasing food demands. Besides human activities, natural disasters such as floods and droughts, along with rapid climate change, also inflict an adverse effect on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimations could help in planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) insurance management utilizes appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensations. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistical Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimations at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost--thus providing a complementary option for estimation of crop yield information. Remote sensing-derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on the spectral reflectance and can be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products for agricultural studies and crop yield assessments. In this study, a regression-based approach was proposed to estimate the annual corn yield through changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by the changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn producing states: Kansas, Nebraska, Iowa, and Indiana, representing the South, West North Central, East North Central, and Central climatic regions, respectively, within the U.S. Corn Belt area. The model's goodness of fit was well defined with a high coefficient of determination (R² > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% confirmed the performance of the model in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate the yield at finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield due to flood events during the growing season. Using a 2011 Missouri River flood event as a case study, a field-level flood impact map of corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% was achieved when compared with the reference impact map. A future direction of this dissertation research is to examine other major crops outside the Corn Belt region of the U.S.
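
    A minimal sketch of the regression idea, with synthetic data standing in for MODIS NDVI and county yields; the feature choices (peak and season-integrated NDVI) and the linear form are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_counties, n_days = 200, 120                          # counties x daily NDVI samples in the season
ndvi = np.clip(rng.normal(0.6, 0.1, (n_counties, n_days)), 0, 1)   # synthetic NDVI time series

# Simple seasonal features: peak NDVI and season-integrated NDVI
X = np.column_stack([ndvi.max(axis=1), ndvi.sum(axis=1)])

# Synthetic county yields generated from those features plus noise (placeholder relationship)
yield_bu_acre = 40 + 120 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 8, n_counties)

r2 = cross_val_score(LinearRegression(), X, yield_bu_acre, cv=5, scoring="r2")
print("cross-validated R^2:", r2.round(2))
```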

  17. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
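
    A minimal sketch of the sequential estimation step, assuming a toy wave basis and invented noise levels: the state holds the amplitudes of prescribed modes, the altimetric sea level is observed through those modes, and each cycle combines a forecast with the observation using the respective error covariances.

```python
import numpy as np

rng = np.random.default_rng(1)
x_grid = np.linspace(0, 1, 50)                       # along-track positions (arbitrary units)
# Observation operator: two hypothetical wave modes standing in for Kelvin/Rossby structures
H = np.column_stack([np.sin(2 * np.pi * x_grid), np.cos(4 * np.pi * x_grid)])

F = np.eye(2)                                        # amplitudes persist between cycles
Q = 0.01 * np.eye(2)                                 # model-error covariance (tuned in the paper)
R = 0.05 * np.eye(len(x_grid))                       # observation-noise covariance

x_est, P = np.zeros(2), np.eye(2)                    # initial state and covariance
true_amp = np.array([1.0, 0.5])                      # "true" amplitudes for the synthetic data

for _ in range(20):                                  # repeated cycles of forecast + update
    y = H @ true_amp + rng.normal(0, 0.2, len(x_grid))   # synthetic sea-level observation
    # Forecast step
    x_est, P = F @ x_est, F @ P @ F.T + Q
    # Update step (optimal combination of forecast and observation)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("estimated mode amplitudes:", x_est.round(3))
```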

  18. Fingerstroke time estimates for touchscreen-based mobile gaming interaction.

    PubMed

    Lee, Ahreum; Song, Kiburm; Ryu, Hokyoung Blake; Kim, Jieun; Kwon, Gyuhyun

    2015-12-01

    The growing popularity of gaming applications and ever-faster mobile carrier networks have called attention to an intriguing issue that is closely related to command input performance. A challenging mirroring game service, which simultaneously provides game service to both PC and mobile phone users, allows them to play games against each other with very different control interfaces. Thus, for efficient mobile game design, it is essential to apply a new predictive model for measuring how potential touch input compares to the PC interfaces. The present study empirically tests the keystroke-level model (KLM) for predicting the time performance of basic interaction controls on the touch-sensitive smartphone interface (i.e., tapping, pointing, dragging, and flicking). A modified KLM, tentatively called the fingerstroke-level model (FLM), is proposed using time estimates on regression models. Copyright © 2015 Elsevier B.V. All rights reserved.
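
    A minimal sketch of keystroke-level-style prediction for touch operators; the unit times below are hypothetical placeholders, not the regression estimates reported for the fingerstroke-level model.

```python
# Hypothetical per-operator unit times in seconds (placeholders, not the paper's FLM estimates)
TOUCH_OPERATOR_TIME_S = {
    "tap": 0.20,
    "point": 0.45,
    "drag": 0.90,
    "flick": 0.30,
    "mental": 1.35,   # KLM-style mental preparation operator
}

def predict_task_time(operator_sequence):
    """Predict task completion time as the sum of elementary operator times."""
    return sum(TOUCH_OPERATOR_TIME_S[op] for op in operator_sequence)

# Example: prepare, point at a game unit, drag it, then tap to confirm
sequence = ["mental", "point", "drag", "tap"]
print(f"predicted time: {predict_task_time(sequence):.2f} s")
```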

  19. Psychophysiological monitoring of operator's emotional stress in aviation and astronautics.

    PubMed

    Simonov, P V; Frolov, M V; Ivanov, E A

    1980-01-01

    The level of emotional stress, which depends on the strength of motivation and on the subject's estimate of the probability (possibility) of goal achievement, largely influences the operator's skill performance (that of a pilot, controller, or astronaut). A decrease in emotional tonus leads to drowsiness, lack of vigilance, missed significant signals, and slower reactions. An extremely high stress level disorganizes the activity and complicates it with a tendency toward untimely acts and reactions to insignificant signals (false alarms). The best methods to monitor the degree of the operator's emotional state during skill performance are the integral estimation of changes in heart rate and T-peak amplitude, as well as the analysis of spectral and intonational characteristics of the human voice during radio conversation. These methods were tested on paratroopers, pilots in civil aviation, and airport controllers.

  20. On the relation between feeling of knowing and lexical decision: persistent subthreshold activation or topic familiarity?

    PubMed

    Connor, L T; Balota, D A; Neely, J H

    1992-05-01

    Experiment 1 replicated Yaniv and Meyer's (1987) finding that lexical decision and episodic recognition performance was better for words previously yielding high-accessibility levels (a combination of feeling-of-knowing and tip-of-the-tongue ratings) in comparison with those yielding low-accessibility levels in a rare word definition task. Experiment 2 yielded the same pattern even though lexical decisions preceded accessibility estimates by a full week. Experiment 3 dismissed the possibility that the Experiment 2 results may have been due to a long-term influence from the lexical decision task to the rare word judgment task. These results support a model in which Ss (a) retrieve topic familiarity information in making accessibility estimates in the rare word definition task and (b) use this information to modulate lexical decision performance.

  1. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
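
    A minimal sketch of the measurement-level fusion step, under a toy pinhole projection and Gaussian image-plane likelihood that are assumptions of this example rather than the paper's models: each 3D particle is projected into every sensor's image plane and its weight is updated with the product of the per-sensor likelihoods.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles = 500
particles = rng.normal([0.0, 0.0, 10.0], 1.0, size=(n_particles, 3))  # candidate 3D target positions

def project(point, sensor):
    """Toy pinhole projection of a 3D point into a sensor's 2D image plane."""
    rel = point - sensor["position"]
    return sensor["focal"] * rel[:2] / rel[2]

# Two hypothetical sensors with one image-plane observation each (all values illustrative)
sensors = [
    {"position": np.array([0.0, 0.0, 0.0]), "focal": 500.0, "obs": np.array([2.0, -1.0]), "sigma": 3.0},
    {"position": np.array([5.0, 0.0, 0.0]), "focal": 500.0, "obs": np.array([-240.0, 0.5]), "sigma": 3.0},
]

# Joint (measurement-level) log-likelihood over all sensors, then weight update
log_w = np.zeros(n_particles)
for i, p in enumerate(particles):
    for s in sensors:
        resid = s["obs"] - project(p, s)
        log_w[i] += -0.5 * np.sum(resid**2) / s["sigma"] ** 2
weights = np.exp(log_w - log_w.max())      # subtract max to stabilize before exponentiating
weights /= weights.sum()

estimate = weights @ particles             # weighted 3D state estimate (system track)
print("fused 3D estimate:", estimate.round(2))
```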

  2. Benchmarking health system performance across districts in Zambia: a systematic analysis of levels and trends in key maternal and child health interventions from 1990 to 2010.

    PubMed

    Colson, Katherine Ellicott; Dwyer-Lindgren, Laura; Achoki, Tom; Fullman, Nancy; Schneider, Matthew; Mulenga, Peter; Hangoma, Peter; Ng, Marie; Masiye, Felix; Gakidou, Emmanuela

    2015-04-02

    Achieving universal health coverage and reducing health inequalities are primary goals for an increasing number of health systems worldwide. Timely and accurate measurements of levels and trends in key health indicators at local levels are crucial to assess progress and identify drivers of success and areas that may be lagging behind. We generated estimates of 17 key maternal and child health indicators for Zambia's 72 districts from 1990 to 2010 using surveys, censuses, and administrative data. We used a three-step statistical model involving spatial-temporal smoothing and Gaussian process regression. We generated estimates at the national level for each indicator by calculating the population-weighted mean of the district values and calculated composite coverage as the average of 10 priority interventions. National estimates masked substantial variation across districts in the levels and trends of all indicators. Overall, composite coverage increased from 46% in 1990 to 73% in 2010, and most of this gain was attributable to the scale-up of malaria control interventions, pentavalent immunization, and exclusive breastfeeding. The scale-up of these interventions was relatively equitable across districts. In contrast, progress in routine services, including polio immunization, antenatal care, and skilled birth attendance, stagnated or declined and exhibited large disparities across districts. The absolute difference in composite coverage between the highest-performing and lowest-performing districts declined from 37 to 26 percentage points between 1990 and 2010, although considerable variation in composite coverage across districts persisted. Zambia has made marked progress in delivering maternal and child health interventions between 1990 and 2010; nevertheless, substantial variations across districts and interventions remained. Subnational benchmarking is important to identify these disparities, allowing policymakers to prioritize areas of greatest need. Analyses such as this one should be conducted regularly and feed directly into policy decisions in order to increase accountability at the local, regional, and national levels.

  3. Level 1 environmental assessment performance evaluation. Final report jun 77-oct 78

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, E.D.; Smith, F.; Wagoner, D.E.

    1979-02-01

    The report gives results of a two-phased evaluation of Level 1 environmental assessment procedures. Results from Phase I, a field evaluation of the Source Assessment Sampling System (SASS), showed that the SASS train performed well within the desired factor-of-3 Level 1 accuracy limit. Three sample runs were made with two SASS trains sampling simultaneously and from approximately the same sampling point in a horizontal duct. A Method-5 train was used to estimate the 'true' particulate loading. The sampling systems were upstream of the control devices to ensure collection of sufficient material for comparison of total particulate, particle size distribution, organic classes, and trace elements. Phase II consisted of providing each of three organizations with three types of control samples to challenge the spectrum of Level 1 analytical procedures: an artificial sample in methylene chloride, an artificial sample on a flyash matrix, and a real sample composed of the combined XAD-2 resin extracts from all Phase I runs. Phase II results showed that when the Level 1 analytical procedures are carefully applied, data of acceptable accuracy are obtained. Estimates of intralaboratory and interlaboratory precision are made.

  4. Estimating the concentration of gold nanoparticles incorporated on natural rubber membranes using multi-level starlet optimal segmentation

    NASA Astrophysics Data System (ADS)

    de Siqueira, A. F.; Cabrera, F. C.; Pagamisse, A.; Job, A. E.

    2014-12-01

    This study consolidates multi-level starlet segmentation (MLSS) and multi-level starlet optimal segmentation (MLSOS) techniques for photomicrograph segmentation, based on starlet wavelet detail levels to separate areas of interest in an input image. Several segmentation levels can be obtained using MLSS; after that, Matthews correlation coefficient is used to choose an optimal segmentation level, giving rise to MLSOS. In this paper, MLSOS is employed to estimate the concentration of gold nanoparticles with diameter around 47 nm, reduced on natural rubber membranes. These samples were used for the construction of SERS/SERRS substrates and in the study of the influence of natural rubber membranes with incorporated gold nanoparticles on the physiology of Leishmania braziliensis. Precision, recall, and accuracy are used to evaluate the segmentation performance, and MLSOS presents an accuracy greater than 88 % for this application.
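
    A minimal sketch of the selection step that turns MLSS into MLSOS: among candidate segmentations (one per detail level), keep the one with the highest Matthews correlation coefficient against a reference mask. The masks below are synthetic stand-ins, not starlet wavelet output.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(3)
ground_truth = rng.random((64, 64)) > 0.7          # reference (ground-truth) nanoparticle mask

def candidate_mask(level):
    """Stand-in for a segmentation at a given detail level (flips more pixels at higher levels)."""
    noise = rng.random(ground_truth.shape) < 0.05 * level
    return np.logical_xor(ground_truth, noise)

levels = range(1, 6)
scores = {L: matthews_corrcoef(ground_truth.ravel(), candidate_mask(L).ravel()) for L in levels}
best = max(scores, key=scores.get)
print("MCC by level:", {L: round(s, 3) for L, s in scores.items()})
print("optimal segmentation level:", best)
```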

  5. The challenge of measuring emergency preparedness: integrating component metrics to build system-level measures for strategic national stockpile operations.

    PubMed

    Jackson, Brian A; Faith, Kay Sullivan

    2013-02-01

    Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems, failure mode and effects analysis (FMEA), to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path for using data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing to provide a view of likely future response performance.
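
    A minimal sketch of the reliability arithmetic this approach rests on, assuming a series system in which the operation succeeds only if every mapped component succeeds; the component names and failure probabilities are illustrative, not values from SNS assessments.

```python
# Hypothetical failure probabilities per mapped component (placeholders)
failure_modes = {
    "request_and_receive_stockpile": 0.05,
    "warehouse_and_repackage": 0.10,
    "transport_to_dispensing_sites": 0.08,
    "dispense_to_public": 0.15,
}

# Series-system assumption: the end-to-end operation succeeds only if every component succeeds
system_reliability = 1.0
for component, p_fail in failure_modes.items():
    system_reliability *= (1.0 - p_fail)

print(f"predicted probability the end-to-end operation succeeds: {system_reliability:.2f}")
```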

  6. CrowdWater - Can people observe what models need?

    NASA Astrophysics Data System (ADS)

    van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.

    2017-12-01

    CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain a simple runoff model and to generate simulated streamflow time series from the level observations.
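
    A minimal sketch of the conversion used in the modeling exercise: a continuous streamflow (or stage) series is discretized into a few level classes at fixed thresholds, the kind of observation a volunteer can report against a virtual staff gauge. The series and thresholds are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
streamflow = rng.gamma(shape=2.0, scale=5.0, size=365)      # synthetic daily streamflow (m3/s)

# Class boundaries; the study found boundaries placed toward high flows most informative
boundaries = np.quantile(streamflow, [0.5, 0.8, 0.95])
level_class = np.digitize(streamflow, boundaries)           # values 0..3, i.e. four classes

print("class boundaries (m3/s):", boundaries.round(2))
print("observations per class:", np.bincount(level_class, minlength=len(boundaries) + 1))
```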

  7. The value of volume and growth measurements in timber sales management of the National Forests

    NASA Technical Reports Server (NTRS)

    Lietzke, K. R.

    1977-01-01

    This paper summarizes work performed in the estimation of gross social value of timber volume and growth rate information used in making regional harvest decisions in the National Forest System. A model was developed to permit parametric analysis. The problem is formulated as one of finding optimal inventory holding patterns. Public timber management differs from other inventory holding problems in that the inventory, itself, generates value over time in providing recreational, aesthetic and environmental goods. 'Nontimber' demand estimates are inferred from past Forest Service harvest and sales levels. The solution requires a description of the harvest rates which maintain the optimum inventory level. Gross benefits of the Landsat systems are estimated by comparison with Forest Service information gathering models. Gross annual benefits are estimated to be $5.9 million for the MSS system and $7.2 million for the TM system.

  8. Optimal regionalization of extreme value distributions for flood estimation

    NASA Astrophysics Data System (ADS)

    Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.

    2018-01-01

    Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
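
    A minimal sketch of the at-site building block, fitting a generalized extreme value distribution to annual maxima and reading off a high return level; the regional pooling over an attribute-weighted hydrological distance is not reproduced, and the data are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum discharges (m3/s) drawn from a GEV with invented parameters
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60, random_state=5)

# Fit the GEV by maximum likelihood and compute the T-year return level
c, loc, scale = genextreme.fit(annual_maxima)
return_period = 100
return_level = genextreme.ppf(1.0 - 1.0 / return_period, c, loc, scale)
print(f"estimated {return_period}-year return level: {return_level:.0f} m3/s")
```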

  9. Can Dyscalculics Estimate the Results of Arithmetic Problems?

    PubMed

    Ganor-Stern, Dana

    2017-01-01

    The present study is the first to examine the computation estimation skills of dyscalculics versus controls using the estimation comparison task. In this task, participants judged whether an estimated answer to a multidigit multiplication problem was larger or smaller than a given reference number. While dyscalculics were less accurate than controls, their performance was well above chance level. The performance of controls but not of those with developmental dyscalculia (DD) improved consistently for smaller problem sizes. The performance of both groups was superior when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, both of which are considered to be the markers of the approximate number system (ANS). Strategy analysis distinguished between an approximated calculation strategy and a sense of magnitude strategy, which does not involve any calculation but relies entirely on the ANS. Dyscalculics used the latter more often than controls. The present results suggest that there is little, if any, impairment in the ANS of adults with DD and that their main deficiency is with performing operations on magnitudes rather than with the representations of the magnitudes themselves. © Hammill Institute on Disabilities 2015.

  10. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    PubMed

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF) designed for the estimation of single trial data, can be implemented by trial-averaging the data or by averaging single trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence was calculated for these estimated sources and compared with the ground-truth. The results showed an overall superior performance of GLKF except for low levels of SNR and number of trials.

  11. Discovering graphical Granger causality using the truncating lasso penalty

    PubMed Central

    Shojaie, Ali; Michailidis, George

    2010-01-01

    Motivation: Components of biological systems interact with each other in order to carry out vital cell functions. Such information can be used to improve estimation and inference, and to obtain better insights into the underlying cellular mechanisms. Discovering regulatory interactions among genes is therefore an important problem in systems biology. Whole-genome expression data over time provides an opportunity to determine how the expression levels of genes are affected by changes in transcription levels of other genes, and can therefore be used to discover regulatory interactions among genes. Results: In this article, we propose a novel penalization method, called truncating lasso, for estimation of causal relationships from time-course gene expression data. The proposed penalty can correctly determine the order of the underlying time series, and improves the performance of the lasso-type estimators. Moreover, the resulting estimate provides information on the time lag between activation of transcription factors and their effects on regulated genes. We provide an efficient algorithm for estimation of model parameters, and show that the proposed method can consistently discover causal relationships in the large p, small n setting. The performance of the proposed model is evaluated favorably in simulated, as well as real, data examples. Availability: The proposed truncating lasso method is implemented in the R-package ‘grangerTlasso’ and is freely available at http://www.stat.lsa.umich.edu/∼shojaie/ Contact: shojaie@umich.edu PMID:20823316
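
    A minimal sketch of lasso-based graphical Granger selection using a plain lasso penalty rather than the paper's truncating lasso: each gene is regressed on lagged expression of all genes, and nonzero coefficients are read as candidate regulatory edges. The data and the planted edge are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
T, p, max_lag = 100, 5, 2                       # time points, genes, maximum lag
X = rng.normal(size=(T, p))
X[2:, 0] += 0.8 * X[1:-1, 2]                    # planted edge: gene 2 drives gene 0 at lag 1

# Lagged design matrix: blocks ordered as (lag 1: genes 0..p-1, lag 2: genes 0..p-1, ...)
rows = range(max_lag, T)
design = np.column_stack([X[[t - k for t in rows], :] for k in range(1, max_lag + 1)])

target_gene = 0
lasso = Lasso(alpha=0.1).fit(design, X[max_lag:, target_gene])
coefs = lasso.coef_.reshape(max_lag, p)
print("nonzero lagged effects on gene 0 (rows = lag, cols = source gene):")
print(np.round(coefs, 2))
```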

  12. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors.

    PubMed

    Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L

    2010-04-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
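
    A minimal simulation of the contrast described above, under hypothetical parameter values: with additive technical errors on both measurements, the mean of individual TBW/FFM ratios drifts from the true hydration fraction, while a slope instrumented by an error-free size variable (height is used here purely as an illustrative instrument) stays close to it.

```python
import numpy as np

rng = np.random.default_rng(7)
n, hf_true = 5_000, 0.732                          # sample size and true hydration fraction

height = rng.normal(150, 15, n)                    # hypothetical instrument (cm), error-free
ffm_true = 0.4 * height - 20 + rng.normal(0, 3, n) # true fat-free mass (kg), placeholder relation
tbw_true = hf_true * ffm_true                      # true total body water under the two-compartment model

# Deliberately sizeable additive technical errors on both in vivo measurements
ffm_meas = ffm_true + rng.normal(0, 5, n)
tbw_meas = tbw_true + rng.normal(0, 3, n)

mean_of_ratios = np.mean(tbw_meas / ffm_meas)                                  # widely used estimator
iv_estimate = np.cov(tbw_meas, height)[0, 1] / np.cov(ffm_meas, height)[0, 1]  # instrumental-variables slope

print(f"true HF:                          {hf_true:.3f}")
print(f"mean of ratios:                   {mean_of_ratios:.3f}")
print(f"instrumental-variables estimate:  {iv_estimate:.3f}")
```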

  13. Spatial Interpolation of Rain-field Dynamic Time-Space Evolution in Hong Kong

    NASA Astrophysics Data System (ADS)

    Liu, P.; Tung, Y. K.

    2017-12-01

    Accurate and reliable measurement and prediction of the spatial and temporal distribution of rain-fields over a wide range of scales are important topics in hydrologic investigations. In this study, a geostatistical treatment of the precipitation field is adopted. To estimate the rainfall intensity over a study domain from the sample values and the spatial structure of the radar data, the cumulative distribution functions (CDFs) at all unsampled locations were estimated. Indicator Kriging (IK) was used to estimate the exceedance probabilities for different pre-selected cutoff levels, and a procedure was implemented for interpolating CDF values between the thresholds derived from the IK. Different interpolation schemes for the CDF were proposed, and their influence on performance was also investigated. The performance measures and a visual comparison between the observed rain-field and the IK-based estimation suggested that the proposed method can provide good estimates of the indicator variables and is capable of producing realistic images.

  14. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims at validating the left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echo data (RT3DE) with magnetic resonance imaging (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Taking the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured using the segmentation procedure (y) (y = 0.89x + 13.78, r = 0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable to human hearts in clinical practice.

  15. PoMo: An Allele Frequency-Based Approach for Species Tree Estimation

    PubMed Central

    De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin

    2015-01-01

    Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413

  16. Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.

    PubMed

    Yao, Yu; Zhao, Junhui; Wu, Lenan

    2018-05-29

    A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals and to minimize the mean square error (MSE) of the TIR estimates. The waveform design approach is based upon constant learning of the target features at the receiver. Under the multiple-antenna scenario, a dynamic feedback loop control system is established to monitor, in real time, the changes in the target features extracted from received signals. The transmitter adapts its transmitted waveform to suit the time-invariant environment. Finally, the simulation results show that, compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm is able to improve the performance of TIR estimation for extended targets over multiple iterations, and has a relatively low level of complexity.

  17. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  18. The economic impact of revision otologic surgery.

    PubMed

    Nadimi, Sahar; Leonetti, John P; Pontikis, George

    2016-03-01

    Revision otologic surgery places a significant economic burden on patients and the healthcare system. We conducted a retrospective chart analysis to estimate the economic impact of revision canal-wall-down (CWD) mastoidectomy. We reviewed the medical records of all 189 adults who had undergone CWD mastoidectomy performed by the senior author between June 2006 and August 2011 at Loyola University Medical Center in Maywood, Ill. Institutional charges and collections for all patients were extrapolated to estimate the overall healthcare cost of revision surgery in Illinois and at the national level. Of the 189 CWD mastoidectomies, 89 were primary and 100 were revision procedures. The total charge for the revision cases was $2,783,700, and the net reimbursement (collections) was $846,289 (30.4%). Using Illinois Hospital Association data, we estimated that reimbursement for 387 revision CWD mastoidectomies that had been performed in fiscal year 2011 was nearly $3.3 million. By extrapolating our data to the national level, we estimated that 9,214 patients underwent revision CWD mastoidectomy in the United States during 2011, which cost the national healthcare system roughly $76 million, not including lost wages and productivity. Known causes of failed CWD mastoidectomies that often result in revision surgery include an inadequate meatoplasty, a facial ridge that is too high, residual diseased air cells, and recurrent cholesteatoma. A better understanding of these factors can reduce the need for revision surgery, which could have a positive impact on the economic strain related to this procedure at the local, state, and national levels.

  19. BAYESIAN LARGE-SCALE MULTIPLE REGRESSION WITH SUMMARY STATISTICS FROM GENOME-WIDE ASSOCIATION STUDIES1

    PubMed Central

    Zhu, Xiang; Stephens, Matthew

    2017-01-01

    Bayesian methods for large-scale multiple regression provide attractive approaches to the analysis of genome-wide association studies (GWAS). For example, they can estimate heritability of complex traits, allowing for both polygenic and sparse models; and by incorporating external genomic data into the priors, they can increase power and yield new biological insights. However, these methods require access to individual genotypes and phenotypes, which are often not easily available. Here we provide a framework for performing these analyses without individual-level data. Specifically, we introduce a “Regression with Summary Statistics” (RSS) likelihood, which relates the multiple regression coefficients to univariate regression results that are often easily available. The RSS likelihood requires estimates of correlations among covariates (SNPs), which also can be obtained from public databases. We perform Bayesian multiple regression analysis by combining the RSS likelihood with previously proposed prior distributions, sampling posteriors by Markov chain Monte Carlo. In a wide range of simulations RSS performs similarly to analyses using the individual data, both for estimating heritability and detecting associations. We apply RSS to a GWAS of human height that contains 253,288 individuals typed at 1.06 million SNPs, for which analyses of individual-level data are practically impossible. Estimates of heritability (52%) are consistent with, but more precise than, previous results using subsets of these data. We also identify many previously unreported loci that show evidence for association with height in our analyses. Software is available at https://github.com/stephenslab/rss. PMID:29399241
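
    For reference, the RSS likelihood described above can be written compactly (notation simplified here; see the paper and software for the exact form) as

    $$\hat{\boldsymbol{\beta}} \mid \boldsymbol{\beta} \;\sim\; \mathcal{N}\!\left(\hat{S}\hat{R}\hat{S}^{-1}\boldsymbol{\beta},\; \hat{S}\hat{R}\hat{S}\right),$$

    where $\hat{\boldsymbol{\beta}}$ collects the single-SNP (univariate) effect estimates, $\hat{S}$ is the diagonal matrix of their standard errors, and $\hat{R}$ is the SNP correlation (LD) matrix, which can be estimated from a public reference panel.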

  20. High Stability Engine Control (HISTEC): Flight Demonstration Results

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Southwick, Robert D.; Gallops, George W.; Orme, John S.

    1998-01-01

    Future aircraft turbine engines, both commercial and military, must be able to accommodate expected increased levels of steady-state and dynamic engine-face distortion. The current approach of incorporating sufficient design stall margin to tolerate these increased levels of distortion would significantly reduce performance. The High Stability Engine Control (HISTEC) program has developed technologies for an advanced, integrated engine control system that uses measurement-based estimates of distortion to enhance engine stability. The resulting distortion tolerant control reduces the required design stall margin, with a corresponding increase in performance and/or decrease in fuel burn. The HISTEC concept was successfully flight demonstrated on the F-15 ACTIVE aircraft during the summer of 1997. The flight demonstration was planned and carried out in two parts, the first to show distortion estimation, and the second to show distortion accommodation. Post-flight analysis shows that the HISTEC technologies are able to successfully estimate and accommodate distortion, transiently setting the stall margin requirement on-line and in real-time. Flight demonstration of the HISTEC technologies has significantly reduced the risk of transitioning the technology to tactical and commercial engines.

  1. Using in vitro/in silico data for consumer safety assessment of feed flavoring additives--A feasibility study using piperine.

    PubMed

    Thiel, A; Etheve, S; Fabian, E; Leeman, W R; Plautz, J R

    2015-10-01

    Consumer health risk assessment for feed additives is based on the estimated human exposure to the additive that may occur in livestock edible tissues, compared to its hazard. We present an approach using alternative methods for consumer health risk assessment. The aim was to use as few animals as possible to estimate the additive's hazard and human exposure without jeopardizing safety in use. As an example, we selected the feed flavoring substance piperine and applied in silico modeling for residue estimation, results from literature surveys, and Read-Across to assess metabolism in different species. Results were compared to experimental in vitro metabolism data in rat and chicken, and to quantitative analysis of residue levels measured in vivo in livestock. In silico residue modeling proved to be a worst case: the modeled residue levels were considerably higher than the measured residue levels. The in vitro evaluation of livestock versus rodent metabolism revealed no major differences in metabolism between the species. We successfully performed a consumer health risk assessment without performing additional animal experiments. As shown, the use and combination of different alternative methods supports animal welfare considerations and provides a future perspective for reducing the number of animals used. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Quantification of gait changes in subjects with visual height intolerance when exposed to heights.

    PubMed

    Schniepp, Roman; Kugler, Günter; Wuehr, Max; Eckl, Maria; Huppert, Doreen; Huth, Sabrina; Pradhan, Cauchy; Jahn, Klaus; Brandt, Thomas

    2014-01-01

    Visual height intolerance (vHI) manifests as instability at heights, with apprehension of losing balance or falling. We investigated the contributions of visual feedback and attention to the gait performance of subjects with vHI. Sixteen subjects with vHI walked over a gait mat (GAITRite®) on a 15-m-high balcony and at ground level. Subjects walked at different speeds (slow, preferred, fast), during changes of the visual input (gaze straight/up/down; eyes open/closed), and while doing a cognitive task. An rmANOVA with the factors "height situation" and "gait condition" was performed. Subjects were also asked to estimate the height of the balcony above ground level, and the individual estimates were used for correlations with the gait parameters. Study participants walked slower at heights, with reduced cadence and stride length. The double support phases were increased (all p < 0.01), which correlated with the estimated height of the balcony (R² = 0.453, p < 0.05). These changes were still present when walking with upward gaze or with the eyes closed. When walking while looking down at the floor of the balcony, during dual-task walking, and during fast walking, there were no differences between gait performance on the balcony and at ground level. The observed gait changes are features of cautious gait control. Internal cognitive models involving anxiety play an important role in vHI; gait was similarly affected when visual perception of depth was prevented. The improvement during dual-task walking at heights may be associated with a reduction of the anxiety level. It is conceivable that mental distraction by a dual task or increasing the walking speed might be useful recommendations to reduce imbalance during locomotion in subjects susceptible to vHI.

  3. The assessment of biases in the acoustic discrimination of individuals

    PubMed Central

    Šálek, Martin

    2017-01-01

    Animal vocalizations contain information about individual identity that could potentially be used for the monitoring of individuals. However, the performance of individual discrimination is subject to many biases depending on factors such as the amount of identity information or the methods used. These factors need to be taken into account when comparing results of different studies or selecting the most cost-effective solution for a particular species. In this study, we evaluate several biases associated with the discrimination of individuals. On a large sample of male little owls, we assess how discrimination performance changes with methods of call description, an increasing number of individuals, and number of calls per male. Also, we test whether the discrimination performance within the whole population can be reliably estimated from a subsample of individuals in a pre-screening study. Assessment of discrimination performance at the level of the individual and at the level of the call led to different conclusions. Hence, studies interested in individual discrimination should optimize methods at the level of individuals. The description of calls by their frequency modulation leads to the best discrimination performance. In agreement with our expectations, discrimination performance decreased with population size. Increasing the number of calls per individual linearly increased the discrimination of individuals (but not the discrimination of calls), likely because it allows distinction between individuals with very similar calls. The available pre-screening index does not allow precise estimation of the population size that could be reliably monitored. Overall, projects applying acoustic monitoring at the individual level in a population need to consider limitations regarding the population size that can be reliably monitored and fine-tune their methods according to their needs and limitations. PMID:28486488

  4. Mass properties survey of solar array technologies

    NASA Technical Reports Server (NTRS)

    Kraus, Robert

    1991-01-01

    An overview of the technologies, electrical performance, and mass characteristics of many of the presently available and the more advanced developmental space solar array technologies is presented. Qualitative trends and quantitative mass estimates as total array output power is increased from 1 kW to 5 kW at End of Life (EOL) from a single wing are shown. The array technologies are part of a database supporting an ongoing solar power subsystem model development for top level subsystem and technology analyses. The model is used to estimate the overall electrical and thermal performance of the complete subsystem, and then calculate the mass and volume of the array, batteries, power management, and thermal control elements as an initial sizing. The array types considered here include planar rigid panel designs, flexible and rigid fold-out planar arrays, and two concentrator designs, one with one critical axis and the other with two critical axes. Solar cell technologies of Si, GaAs, and InP were included in the analyses. Comparisons were made at the array level; hinges, booms, harnesses, support structures, power transfer, and launch retention mountings were included. It is important to note that the results presented are approximations, and in some cases revised or modified performance and mass estimates of specific designs.

  5. Background noise spectra of global seismic stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wada, M.M.; Claassen, J.P.

    1996-08-01

    Over an extended period of time, station noise spectra were collected from various sources for use in estimating the detection and location performance of global networks of seismic stations. As the database of noise spectra enlarged and duplicate entries became available, an effort was mounted to more carefully select station noise spectra while discarding others. This report discusses the methodology and criteria by which the noise spectra were selected. It also identifies and illustrates the station noise spectra which survived the selection process and which currently contribute to the modeling efforts. The resulting catalog of noise statistics not only benefits those who model network performance but also those who wish to select stations on the basis of their noise level, as may occur in designing networks or in selecting seismological data for analysis on the basis of station noise level. In view of the various ways by which station noise was estimated by the different contributors, it is advisable that future efforts which predict network performance have available station noise data and spectral estimation methods which are compatible with the statistics underlying seismic noise. This appropriately requires (1) averaging noise over seasonal and/or diurnal cycles, (2) averaging noise over time intervals comparable to those employed by actual detectors, and (3) using logarithmic measures of the noise.
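
    A minimal sketch of recommendation (3), logarithmic averaging of noise spectra over a diurnal cycle, is shown below. The sampling rate, segment lengths, and data are hypothetical, and the spectral estimator (Welch's method) is only one compatible choice, not necessarily what the contributors used.

```python
# Sketch: estimate a PSD per time window with Welch's method, convert to dB,
# and average in the log domain over a (hypothetical) diurnal cycle.
import numpy as np
from scipy.signal import welch

fs = 40.0                                          # samples per second (assumed)
rng = np.random.default_rng(1)
day = rng.standard_normal((24, int(3600 * fs)))    # 24 one-hour noise segments

psds_db = []
for segment in day:
    f, pxx = welch(segment, fs=fs, nperseg=4096)
    psds_db.append(10.0 * np.log10(pxx))           # logarithmic measure of noise power

mean_db = np.mean(psds_db, axis=0)                 # average over the diurnal cycle
print(f[:5], mean_db[:5])
```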

  6. Speech segregation based on binaural cues: interaural time difference (ITD) and interaural level difference (ILD)

    NASA Astrophysics Data System (ADS)

    Nur Farid, Mifta; Arifianto, Dhany

    2016-11-01

    A person suffering from hearing loss can be helped by hearing aids, and binaural hearing aids offer the best performance because they resemble the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even though the background sound and other people's conversations are quite loud. This phenomenon is known as the cocktail party effect. Earlier studies have shown that binaural hearing makes an important contribution to the cocktail party effect. In this study, we therefore separate two sound sources from binaural input captured by two microphone sensors, based on both binaural cues, interaural time difference (ITD) and interaural level difference (ILD), using a binary mask. The ITD is estimated by cross-correlation, in which the ITD is represented as the time lag of the correlation peak in each time-frequency unit. The binary mask is estimated from the pattern of ITD and ILD relative to the strength of the target, computed statistically using probability density estimation. The resulting sound source separation performs well, with a speech intelligibility of 86% (percent correct words) and an SNR of 3 dB.
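
    As a minimal illustration of the cross-correlation step described above (a sketch with hypothetical signals, not the study's implementation), the ITD can be read off as the lag of the cross-correlation peak between the two microphone channels:

```python
# Sketch of ITD estimation by cross-correlation between left and right channels.
import numpy as np

fs = 16000                                  # sample rate (assumed)
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)               # 1 s of a hypothetical source signal
true_delay = 12                             # samples (~0.75 ms)

left = src
right = np.concatenate([np.zeros(true_delay), src[:-true_delay]])

# full cross-correlation; lag axis runs from -(N-1) to +(N-1)
xcorr = np.correlate(right, left, mode="full")
lags = np.arange(-len(left) + 1, len(left))
itd_samples = lags[np.argmax(xcorr)]
print("estimated ITD:", itd_samples / fs, "s")   # ~ true_delay / fs
```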

  7. The High Stability Engine Control (HISTEC) Program: Flight Demonstration Phase

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Southwick, Robert D.; Gallops, George W.; Orme, John S.

    1998-01-01

    Future aircraft turbine engines, both commercial and military, must be able to accommodate expected increased levels of steady-state and dynamic engine-face distortion. The current approach of incorporating sufficient design stall margin to tolerate these increased levels of distortion would significantly reduce performance. The objective of the High Stability Engine Control (HISTEC) program is to design, develop, and flight-demonstrate an advanced, integrated engine control system that uses measurement-based estimates of distortion to enhance engine stability. The resulting distortion tolerant control reduces the required design stall margin, with a corresponding increase in performance and decrease in fuel burn. The HISTEC concept has been developed and was successfully flight demonstrated on the F-15 ACTIVE aircraft during the summer of 1997. The flight demonstration was planned and carried out in two phases, the first to show distortion estimation, and the second to show distortion accommodation. Post-flight analysis shows that the HISTEC technologies are able to successfully estimate and accommodate distortion, transiently setting the stall margin requirement on-line and in real-time. This allows the design stall margin requirement to be reduced, which in turn can be traded for significantly increased performance and/or decreased weight. Flight demonstration of the HISTEC technologies has significantly reduced the risk of transitioning the technology to tactical and commercial engines.

  8. Measurement Properties of Performance-Specific Pain Ratings of Patients Awaiting Total Joint Arthroplasty as a Consequence of Osteoarthritis

    PubMed Central

    Stratford, Paul W.; Kennedy, Deborah M.; Woodhouse, Linda J.; Spadoni, Gregory

    2008-01-01

    Purpose: To estimate the test–retest reliability of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain sub-scale and performance-specific assessments of pain, as well as the association between these measures for patients awaiting primary total hip or knee arthroplasty as a consequence of osteoarthritis. Methods: A total of 164 patients awaiting unilateral primary hip or knee arthroplasty completed four performance measures (self-paced walk, timed up and go, stair test, six-minute walk) and the WOMAC. Scores for 22 of these patients provided test–retest reliability data. Estimates of test–retest reliability (Type 2,1 intraclass correlation coefficient [ICC] and standard error of measurement [SEM]) and the association between measures were examined. Results: ICC values for individual performance-specific pain ratings were between 0.70 and 0.86; SEM values were between 0.97 and 1.33 pain points. ICC estimates for the four-item performance pain ratings and the WOMAC pain sub-scale were 0.82 and 0.57 respectively. The correlation between the sum of the pain scores for the four performance measures and the WOMAC pain sub-scale was 0.62. Conclusion: Reliability estimates for the performance-specific assessments of pain using the numeric pain rating scale were consistent with values reported for patients with a spectrum of musculoskeletal conditions. The reliability estimate for the WOMAC pain sub-scale was lower than typically reported in the literature. The level of association between the WOMAC pain sub-scale and the various performance-specific pain scales suggests that the scores can be used interchangeably when applied to groups but not for individual patients. PMID:20145758
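
    As a reminder of how the two reliability indices reported above are related under one standard convention (the authors' exact computation may differ), the standard error of measurement is usually derived from the ICC and the between-subject standard deviation of the scores:

    $$\mathrm{SEM} = s_x\,\sqrt{1-\mathrm{ICC}},$$

    where $s_x$ is the standard deviation of the observed scores. For example, with a hypothetical between-subject standard deviation of 3 pain points and an ICC of 0.80, the SEM would be $3\sqrt{0.20} \approx 1.3$ points.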

  9. Community Health Centers: Providers, Patients, and Content of Care

    MedlinePlus

    ... Statistics (NCHS). NAMCS uses a multistage probability sample design involving geographic primary sampling units (PSUs), physician practices ... 05 level. To account for the complex sample design during variance estimation, all analyses were performed using ...

  10. Planning for Downtown Circulation Systems. Volume 2. Analysis Techniques.

    DOT National Transportation Integrated Search

    1983-10-01

    This volume contains the analysis and refinement stages of downtown circulator planning. Included are sections on methods for estimating patronage, costs, revenues, and impacts, and a section on methods for performing micro-level analyses.

  11. Decentralization, stabilization, and estimation of large-scale linear systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1976-01-01

    In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.

  12. Hypersonic Research Vehicle (HRV) real-time flight test support feasibility and requirements study. Part 1: Real-time flight experiment support

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.; Ramnath, Rudrapatna V.; Vrable, Daniel L.; Hirvo, David H.; Mcmillen, Lowell D.; Osofsky, Irving B.

    1991-01-01

    The results of a study to identify potential real-time remote computational applications for supporting the monitoring of HRV flight test experiments are presented, along with definitions of preliminary requirements. A major expansion of the support capability available at Ames-Dryden was considered. The focus is on the use of extensive computation and databases together with real-time flight data to generate and present high-level information to those monitoring the flight. Six examples were considered: (1) boundary layer transition location; (2) shock wave position estimation; (3) performance estimation; (4) surface temperature estimation; (5) critical structural stress estimation; and (6) stability estimation.

  13. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level to simultaneously cover the various kinds of contextual effects that researchers may be interested in. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
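
    The paper's three-level formulas are not reproduced in this abstract, but a generic version of the power relationship that such sample-size formulas invert (illustrative notation, not the authors' exact expressions) is

    $$1-\beta \;=\; \Phi\!\left(\frac{|\gamma|}{\mathrm{SE}(\hat{\gamma})} - z_{1-\alpha/2}\right),$$

    where $\gamma$ is the contextual effect of interest and $\mathrm{SE}(\hat{\gamma})$ depends on the numbers of level-1, level-2, and level-3 units and on the variance components at each level; the required sample sizes are those that make $\mathrm{SE}(\hat{\gamma})$ small enough to reach the desired power or confidence-interval width.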

  14. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  15. NASA Instrument Cost Model for Explorer-Like Mission Instruments (NICM-E)

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, Hamid; Fox, George; Mrozinski, Joe; Ball, Gary

    2013-01-01

    NICM-E is a cost estimating relationship that supplements the traditional NICM System Level CERs for instruments flown on NASA Explorer-like missions that have the following three characteristics: 1) they fly on Class C missions, 2) major development is led and performed by universities or research foundations, and 3) they have a significant level of inheritance.

  16. Measured and estimated performance of a fleet of shaded photovoltaic systems with string and module-level inverters

    DOE PAGES

    MacAlpine, Sara; Deline, Chris; Dobos, Aron

    2017-03-16

    Shade obstructions can significantly impact the performance of photovoltaic (PV) systems. Although there are many models for partially shaded PV arrays, there is a lack of information available regarding their accuracy and uncertainty when compared with actual field performance. This work assesses the recorded performance of 46 residential PV systems, equipped with either string-level or module-level inverters, under a variety of shading conditions. We compare their energy production data to annual PV performance predictions, with a focus on the practical models developed here for the National Renewable Energy Laboratory's System Advisor Model software. This includes assessment of shade extent on each PV system by using traditional onsite surveys and newer 3D obstruction modelling. The electrical impact of shade is modelled by either a nonlinear performance model or assumption of linear impact with shade extent, depending on the inverter type. When applied to the fleet of residential PV systems, performance is predicted with median annual bias errors of 2.5% or less, for systems with up to 20% estimated shading loss. The partial shade models are not found to add appreciable uncertainty to annual predictions of energy production for this fleet of systems but do introduce a monthly root-mean-square error of approximately 4%-9% due to seasonal effects. Here the use of a detailed 3D model results in similar or improved accuracy over site survey methods, indicating that, with proper description of shade obstructions, modelling of partially shaded PV arrays can be done completely remotely, potentially saving time and cost.
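
    The two error metrics quoted above can be illustrated with a short sketch; the twelve monthly energy values below are hypothetical, and the exact error definitions used in the study may differ slightly.

```python
# Sketch: annual bias error and monthly root-mean-square error between
# modelled and measured PV energy production (hypothetical monthly values).
import numpy as np

measured = np.array([310, 350, 420, 480, 530, 560, 570, 540, 470, 400, 330, 300])  # kWh
modelled = np.array([320, 340, 430, 500, 520, 545, 580, 555, 460, 410, 320, 310])  # kWh

annual_bias = (modelled.sum() - measured.sum()) / measured.sum()
monthly_rmse = np.sqrt(np.mean(((modelled - measured) / measured) ** 2))

print(f"annual bias error: {100 * annual_bias:+.1f}%")
print(f"monthly RMS error: {100 * monthly_rmse:.1f}%")
```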

  17. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1998-11-24

    Workload characterization has been proven an essential tool to architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction or IPC, instructions per cycle) etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints especially on large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight to the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
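
    As a minimal illustration of reducing raw counter totals to abstract workload parameters such as CPI, IPC, or miss ratios (the counter names and numbers below are hypothetical, not taken from the paper):

```python
# Sketch: derive abstract workload parameters from hardware-counter totals.
counters = {
    "cycles":        8.4e9,
    "instructions":  1.1e10,
    "l1d_accesses":  4.2e9,
    "l1d_misses":    1.6e8,
    "flops":         2.3e9,
}

cpi = counters["cycles"] / counters["instructions"]          # cycles per instruction
ipc = 1.0 / cpi                                              # instructions per cycle
miss_ratio = counters["l1d_misses"] / counters["l1d_accesses"]
flops_per_cycle = counters["flops"] / counters["cycles"]

print(f"CPI = {cpi:.2f}, IPC = {ipc:.2f}, "
      f"L1D miss ratio = {miss_ratio:.2%}, FLOPs/cycle = {flops_per_cycle:.2f}")
```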

  18. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses in insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the current building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases, and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. The database assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys which have been carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels, and the location of each survey. It is mounted on a GIS mapping system and links to the USGS ShakeMaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include: (1) basic earthquake data and its sources; (2) details of the survey location and intensity and other ground motion observations or assignments at that location; (3) building and damage level classification, and tabulated damage survey results; and (4) photos showing typical examples of damage. In future planned extensions of the database, information on human casualties will also be assembled. The database also contains analytical tools enabling data from similar locations, building classes or ground motion levels to be assembled and thus vulnerability relationships derived for any chosen ground motion parameter, for a given class of building, and for particular countries or regions. The paper presents examples of vulnerability relationships for particular classes of buildings and regions of the world, together with the estimated uncertainty ranges. It will discuss the applicability of such vulnerability functions in earthquake loss assessment for insurance purposes or for earthquake risk mitigation.

  19. Assessment of DEMN-based IM Formulations for Octol Replacement

    DTIC Science & Technology

    2012-08-01

    experimentally for performance in this study. The performance was first assessed numerically using the thermochemical equilibrium code Cheetah, v5.0...Fine Grain Octol (FGO). The Cheetah estimates suggest that the proposed formulations will have lower detonation pressure than Octol level performance...Materials Technology Symposium. 3. Fried, L.E., Howard, W.M., Souers, P.C., and Vitello, P.A. Cheetah 5.0, Energetic Materials Center, Lawrence Livermore

  20. Towards a sampling strategy for the assessment of forest condition at European level: combining country estimates.

    PubMed

    Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo

    2013-04-01

    A correct characterization of the status and trend of forest condition is essential to support reporting processes at the national and international levels. International forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at the European and sometimes at the country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.

  1. Which level of competence and performance is expected? A survey among European employers of public health professionals.

    PubMed

    Vukovic, Dejana; Bjegovic-Mikanovic, Vesna; Otok, Robert; Czabanowska, Katarzyna; Nikolic, Zeljka; Laaser, Ulrich

    2014-02-01

    To explore the largely unknown experience and expectations of European employers of public health professionals with regard to the competences required to perform in the best way for public health. A survey targeting employers in Europe was carried out from September 2011 to October 2012. The web-based questionnaire on public health competences and expected performance levels was returned by 63 of the 109 organisations contacted (57.8%), as identified by Schools and Departments of Public Health (SDPH) in 30 European countries. The assessment of the current and desired levels of performance did not show significant differences between employer categories. However, current and desired levels across all employers differed significantly (p < 0.001), with a difference of roughly one rank on a five-point scale. On the other hand, SDPH rank the exit qualifications of their graduates, with one exception (presumed competences in preparedness for public health emergencies), higher than the current performance level determined by employers, i.e. closer to the employers' expectations. SDPH should reconsider priorities and question their estimates of exit qualifications in close contact with potential employers of their graduates.

  2. Smooth individual level covariates adjustment in disease mapping.

    PubMed

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured at both the individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual- and group-level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual-level covariates and the outcome. In many studies, the relationship between individual-level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity between individual-level covariates and the outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual-level covariate effects while adjusting for group-level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual-level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual- and group-level covariate effects, in which the individual- and group-level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual-/group-level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my; Hannan, M.A., E-mail: hannan@eng.ukm.my; Basri, Hassan

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image it is challenging to position the camera so that the bin area is centered in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and Gabor wavelet (GW) was introduced for feature extraction of the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this developed system are comparable to those of previous image-processing-based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
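
    A minimal sketch of the classification and evaluation stage described above, using scikit-learn; the feature vectors stand in for Gabor-wavelet features and the labels are synthetic, so this is an illustration rather than the authors' pipeline.

```python
# Sketch: MLP classification of bin-level feature vectors, evaluated with ROC AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 32))                               # hypothetical Gabor feature vectors
y = (X[:, 0] + 0.5 * rng.standard_normal(400) > 0).astype(int)   # bin full / not full (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, scores))
```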

  4. Estimating costs and performance of systems for machine processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ballard, R. J.; Eastwood, L. F., Jr.

    1977-01-01

    This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.

  5. Leucine and valine supplementation of low-protein diets for broiler chickens from 21 to 42 days of age.

    PubMed

    Ospina-Rojas, I C; Murakami, A E; Duarte, C R A; Nascimento, G R; Garcia, E R M; Sakamoto, M I; Nunes, R V

    2017-04-01

    The objective of this study was to determine the requirements and interactions between the standardized ileal digestible (SID) Leu and Val levels in low-protein diets, and their effects on performance, serum characteristics, carcass yield and diameter of muscle fibers of broiler chickens from d 21 to 42 posthatch. A total of 1,500 21-day-old Cobb 500 male broiler chickens were distributed in a completely randomized design in a 5 × 5 factorial arrangement for a total of 25 treatments with 3 replicates of 20 birds each. Treatments consisted of 5 SID Leu levels (1.0, 1.2, 1.4, 1.6, or 1.8%) and 5 SID Val levels (0.52, 0.67, 0.82, 0.97, or 1.12%). At 42 d of age, there was interaction (P < 0.05) between the SID levels of Leu and Val on feed intake and weight gain. There was a quadratic effect (P < 0.05) of Leu and Val levels on feed conversion, with minimal point estimated at the levels of 1.19 and 0.86%, respectively. Dietary Leu supplementation reduced linearly (P < 0.05) serum concentrations of triglycerides and β-hydroxybutyrate. Dietary Leu increased (P ≤ 0.05) the fiber diameters of the pectoralis major muscle and breast yield at the levels of 1.24 and 1.13%, respectively, while the thigh yield was improved with the level of 0.71% Val. Abdominal fat decreased linearly (P < 0.05) with increasing levels of dietary Leu and Val. The SID Leu and Val levels needed to optimize weight gain and feed conversion in low-CP diets for broiler chickens from d 21 to 42 posthatch were estimated at 1.15 and 0.86%, and 1.19 and 0.86%, respectively. The supplementation of Leu and Val can reduce the abdominal fat deposition in birds fed low-CP diets during the grower phase. Leu and Val interactions can influence the performance but not the serum characteristics, carcass yield and diameter of muscle fibers of broilers fed low-protein diets. Therefore, it is necessary to consider the dietary Leu content to estimate the ideal level of Val in low-CP diets for optimum broiler performance. © 2016 Poultry Science Association Inc.

  6. Wavelength-dependent ability of solar-induced chlorophyll fluorescence to estimate GPP

    NASA Astrophysics Data System (ADS)

    Liu, L.

    2017-12-01

    Recent studies have demonstrated that solar-induced chlorophyll fluorescence (SIF) can offer a new way to directly estimate terrestrial gross primary production (GPP). In this paper, the wavelength-dependent ability of SIF to estimate GPP was investigated using both simulations by the SCOPE model (Soil Canopy Observation, Photochemistry and Energy fluxes) and observations at the canopy level. First, the response of the remotely sensed SIF at the canopy level to the absorbed photosynthetically active radiation (APAR) was investigated. Both the simulations and observations confirm a linear relationship between canopy SIF and APAR, although it is species-specific and affected by biochemical components and canopy structure. The ratio of SIF to APAR varies greatly across vegetation types and is significantly larger for canopies with horizontal structure than for those with vertical structure. At the red band, the ratio also decreases noticeably as chlorophyll content increases. Second, the performance of SIF in estimating GPP was investigated using diurnal observations of winter wheat at different growth stages. The results showed that the diurnal GPP could be robustly estimated from the SIF spectra for winter wheat at each growth stage, while the correlation weakened greatly at the red band if all the observations made at different growth stages or all simulations with different LAI values were pooled together, a situation which did not occur at the far-red band. Finally, the SIF-based GPP models derived from the 2016 observations of winter wheat were well validated using the dataset from 2015, and gave better performance for SIF at the far-red band than at the red band. Therefore, it is very important to correct for reabsorption and scattering of the SIF radiative transfer from the photosystem to the canopy level before the remotely sensed SIF is linked to GPP, especially at the red band.
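
    A minimal sketch of the kind of linear SIF-based GPP model evaluated above; the observations, slope, intercept, and units below are synthetic stand-ins, not values from the study.

```python
# Sketch: fit and score a simple linear GPP ~ SIF relationship.
import numpy as np

rng = np.random.default_rng(0)
sif = rng.uniform(0.2, 2.0, 60)                     # far-red SIF, mW m-2 nm-1 sr-1 (hypothetical)
gpp = 12.0 * sif + rng.normal(0, 1.5, 60)           # GPP, umol CO2 m-2 s-1 (hypothetical)

slope, intercept = np.polyfit(sif, gpp, 1)
pred = slope * sif + intercept
r2 = 1 - np.sum((gpp - pred) ** 2) / np.sum((gpp - gpp.mean()) ** 2)
print(f"GPP ~ {slope:.1f} * SIF + {intercept:.1f},  R^2 = {r2:.2f}")
```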

  7. Estimation of Particulate Mass and Manganese Exposure Levels among Welders

    PubMed Central

    Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.

    2011-01-01

    Background: Welders are frequently exposed to Manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model indicating the final model is robust given the available data. Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model along with the positive generalizability results of the cross-validation increases the confidence that the estimates derived from this model can be used for estimating welder exposures in absence of individual measurement data. PMID:20870928

  8. Evaluation of an Outer Loop Retrofit Architecture for Intelligent Turbofan Engine Thrust Control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Sowers, T. Shane

    2006-01-01

    The thrust control capability of a retrofit architecture for intelligent turbofan engine control and diagnostics is evaluated. The focus of the study is on the portion of the hierarchical architecture that performs thrust estimation and outer loop thrust control. The inner loop controls fan speed so the outer loop automatically adjusts the engine's fan speed command to maintain thrust at the desired level, based on pilot input, even as the engine deteriorates with use. The thrust estimation accuracy is assessed under nominal and deteriorated conditions at multiple operating points, and the closed loop thrust control performance is studied, all in a complex real-time nonlinear turbofan engine simulation test bed. The estimation capability, thrust response, and robustness to uncertainty in the form of engine degradation are evaluated.
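
    The outer-loop idea can be sketched as an integral trim on the fan-speed command driven by the error between estimated and demanded thrust; the toy engine model, numbers, and gain below are hypothetical and are not the evaluated retrofit architecture.

```python
# Sketch: outer-loop integral controller that trims the fan-speed command so
# estimated thrust tracks the demanded thrust as the engine deteriorates.
def thrust_estimate(fan_speed_rpm: float, degradation: float) -> float:
    """Toy engine model: thrust (lbf) produced at a given fan speed, with wear."""
    return 2.0 * fan_speed_rpm * (1.0 - degradation)

thrust_cmd = 10000.0          # lbf, from pilot input
fan_speed_cmd = 5000.0        # rpm, initial inner-loop command
ki = 0.02                     # integral gain (assumed)

for step in range(200):
    degradation = 0.05 * step / 200          # engine slowly deteriorates
    thrust = thrust_estimate(fan_speed_cmd, degradation)
    error = thrust_cmd - thrust
    fan_speed_cmd += ki * error              # outer loop trims the inner-loop command

print(f"final fan-speed command: {fan_speed_cmd:.0f} rpm, thrust: {thrust:.0f} lbf")
```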

  9. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by 2/π (-1.96 dB), when compared to an ideal infinite-resolution converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
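
    In simplified notation (an illustrative sketch, not the paper's exact formulation), the observation model is a hard-limiter with unknown threshold τ, and the 2/π figure quoted above is the classic low-SNR ratio between the Fisher information of 1-bit and ideal-resolution observations for a known, symmetric threshold:

    $$y = \operatorname{sign}\bigl(x(\theta) + \eta - \tau\bigr), \qquad \lim_{\mathrm{SNR}\to 0}\frac{F_{1\text{-bit}}}{F_{\infty\text{-bit}}} = \frac{2}{\pi} \;\approx\; -1.96\ \mathrm{dB},$$

    where the unknown τ is treated as a nuisance parameter when deriving the error expressions.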

  10. On-the-Move Nutrient Delivery System Performance Characteristics

    DTIC Science & Technology

    2008-09-01

    types - ranging from simple sugar (monosaccharide fructose or disaccharide sucrose) to more complex sugars (short-length maltodextrin (Grain... [table residue: NOS performance characteristics by chest position, including pressure setting (mm Hg), flow rate, sip-to-sip time (s), volume (ml), and estimated CHO concentration] ...on the drink produced (Table 2). When the top of the concentrate bag was level with the bite valve, the drink had an estimated carbohydrate

  11. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    PubMed Central

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-01-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007. PMID:21776223

  12. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  13. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
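
    A minimal sketch of the nested cross-validation scheme described above, with feature selection and parameter tuning confined to the inner loop; the data, classifier choice, and parameter grid are placeholders rather than the study's exact setup.

```python
# Sketch: nested CV in which the inner loop tunes feature selection and the
# classifier, and the outer loop provides the performance estimate.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))               # expression-like feature matrix (synthetic)
y = rng.integers(0, 2, 120)                       # two groups, e.g. control vs treated

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),           # univariate feature selection
    ("svm", SVC(kernel="linear")),
])
inner = GridSearchCV(pipe,
                     {"select__k": [10, 50], "svm__C": [0.1, 1, 10]},
                     cv=3)                        # inner loop: tuning only
outer_scores = cross_val_score(inner, X, y, cv=5) # outer loop: performance estimate
print("nested-CV accuracy: %.2f +/- %.2f" % (outer_scores.mean(), outer_scores.std()))
```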

  14. A level set method for multiple sclerosis lesion segmentation.

    PubMed

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, normal tissue (including GM and WM), CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the original level set method's desirable ability to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Networks consolidation program: Maintenance and Operations (M&O) staffing estimates

    NASA Technical Reports Server (NTRS)

    Goodwin, J. P.

    1981-01-01

    The Mark IV-A consolidates deep space and highly elliptical Earth orbiter (HEEO) mission tracking and implements centralized control and monitoring at the deep space communications complexes (DSCC). One of the objectives of the network design is to reduce maintenance and operations (M&O) costs. To determine whether the system design meets this objective, an M&O staffing model was developed for Goldstone and used to estimate the staffing levels required to support the Mark IV-A configuration. The study was performed for the Goldstone complex, and the program office translated these estimates to the overseas complexes to derive the network estimates.

  16. The perception of performance in stress: the utilisation of cognitive facts by nondepressed and depressed students.

    PubMed

    Fisher, S

    1985-01-01

    Three experiments are reported in which expectancies about performance in stressful conditions by nondepressed and depressed nonclinical populations were examined. The first experiment was concerned with estimates of either errors or response rates made in advance, with regard to the likely competence level of a (hypothetical) person allegedly working in conditions of either loud noise, fatigue, sleep loss, social stress, or incentive. Nondepressed subjects as well as depressed subjects provided negative expectancies. The second experiment involved obtaining an estimate of personal competence in conditions where subjects were instructed that personal performance on the task would be required after the estimate had been provided. Nondepressed subjects differed from depressed subjects in that the estimates of the former were less negative in terms of the magnitude of the estimates provided. A third experiment was designed to see whether the negative expectancies about performance in stress exhibited both by nondepressed and by depressed subjects would be used in making allowances for the competence of a typist on the basis of a typescript allegedly produced under high noise conditions. An unexpected effect was that depressed subjects judged the typist more harshly and failed to make allowance for adverse working conditions in the way that nondepressed subjects did. The results are discussed in terms of the implications for understanding cognitive factors in depression.

  17. Evaluation of the use of performance reference compounds in an oasis-HLB adsorbent based passive sampler for improving water concentration estimates of polar herbicides in freshwater

    USGS Publications Warehouse

    Mazzella, N.; Lissalde, S.; Moreira, S.; Delmas, F.; Mazellier, P.; Huckins, J.N.

    2010-01-01

    Passive samplers such as the Polar Organic Chemical Integrative Sampler (POCIS) are useful tools for monitoring trace levels of polar organic chemicals in aquatic environments. The use of performance reference compounds (PRC) spiked into the POCIS adsorbent for in situ calibration may improve the semiquantitative nature of water concentration estimates based on this type of sampler. In this work, deuterium-labeled atrazine-desisopropyl (DIA-d5) was chosen as the PRC because of its relatively high fugacity from Oasis HLB (the POCIS adsorbent used) and our earlier evidence of its isotropic exchange. In situ calibration of POCIS spiked with DIA-d5 was performed, and the resulting time-weighted average concentration estimates were compared with similar values from an automatic sampler equipped with Oasis HLB cartridges. Before PRC correction, water concentration estimates based on POCIS data and sampling rates from a laboratory calibration exposure were systematically lower than the reference concentrations obtained with the automatic sampler. Use of the DIA-d5 PRC data to correct POCIS sampling rates narrowed differences between corresponding values derived from the two methods. Application of PRCs for in situ calibration seems promising for improving POCIS-derived concentration estimates of polar pesticides. However, careful attention must be paid to the minimization of matrix effects when the quantification is performed by HPLC-ESI-MS/MS. © 2010 American Chemical Society.
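    The abstract does not reproduce the correction formulas, but the usual PRC logic is to scale the laboratory sampling rate by the ratio of in situ to laboratory PRC elimination rates and then convert the accumulated analyte mass into a time-weighted average concentration. The sketch below illustrates that arithmetic with invented numbers; the variable names and values are hypothetical, not those of the study.

```python
# Rough sketch of the usual PRC logic for adjusting a laboratory-derived
# sampling rate (Rs) to field conditions. All numbers are hypothetical; the
# published calibration should be consulted for real work.
import math

def elimination_rate(frac_remaining: float, days: float) -> float:
    """First-order elimination rate constant k_e = -ln(C_t / C_0) / t."""
    return -math.log(frac_remaining) / days

# DIA-d5 (PRC) dissipation measured in the lab calibration and in the field.
k_lab = elimination_rate(frac_remaining=0.60, days=14.0)
k_field = elimination_rate(frac_remaining=0.45, days=14.0)

# In situ sampling rate: lab Rs scaled by the exposure adjustment factor.
rs_lab = 0.24                            # L/day, hypothetical lab value for an analyte
rs_insitu = rs_lab * (k_field / k_lab)

# Time-weighted average water concentration from the mass accumulated in POCIS.
mass_ng = 120.0                          # analyte mass on the sorbent (ng)
days = 14.0
c_twa = mass_ng / (rs_insitu * days)     # ng/L
print(f"Rs in situ = {rs_insitu:.3f} L/day, TWA concentration = {c_twa:.1f} ng/L")
```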

  18. Conclusions on measurement uncertainty in microbiology.

    PubMed

    Forster, Lynne I

    2009-01-01

    Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
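    One common way to turn replicate colony counts into an uncertainty estimate is the intralaboratory-reproducibility calculation on log-transformed duplicate counts (the approach standardized in ISO/TS 19036). The paper does not spell out its exact computation, so the sketch below is a generic illustration with invented counts.

```python
# Illustrative computation of measurement uncertainty from replicate colony
# counts, in the spirit of the intralaboratory-reproducibility approach often
# used for ISO/IEC 17025 microbiology (the paper does not give its exact
# formula, so treat this as a generic sketch). Counts are hypothetical.
import math

# Paired replicate counts (CFU per 100 mL) for a set of water samples.
replicates = [(52, 61), (130, 118), (24, 31), (310, 280), (75, 88)]

# Standard deviation of reproducibility on the log10 scale:
# s_R = sqrt( sum((log10 A - log10 B)^2) / (2 * n) )
n = len(replicates)
s_r = math.sqrt(sum((math.log10(a) - math.log10(b)) ** 2 for a, b in replicates) / (2 * n))

# Expanded uncertainty at ~95% confidence (coverage factor k = 2), in log10 units.
U = 2 * s_r
print(f"s_R = {s_r:.3f} log10 units, expanded uncertainty U = {U:.2f} log10 units")
```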

  19. Combining computer adaptive testing technology with cognitively diagnostic assessment.

    PubMed

    McGlohen, Meghan; Chang, Hua-Hua

    2008-08-01

    A major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. The theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control. The theta- and alpha-based condition has an additional advantage, however, in that it uses the shadow test method, which allows the administrator to incorporate additional constraints in the item selection process (such as content balancing and item type constraints) and to select items on the basis of both the current theta and alpha estimates; it can also be built on top of existing 3PL testing programs.
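    Theta-based item selection in a CAT is typically implemented by choosing, at each step, the unadministered item with the greatest Fisher information at the provisional ability estimate. The sketch below shows that step for a 3PL item bank with invented parameters; the study's actual procedure additionally uses attribute mastery (alpha) information and the shadow test method, which are not reproduced here.

```python
# Sketch of theta-based item selection in a CAT: at each step pick the
# unadministered item with the largest Fisher information at the current
# ability estimate. The 3PL item parameters below are invented.
import numpy as np

rng = np.random.default_rng(5)
a = rng.uniform(0.8, 2.0, size=50)     # discrimination
b = rng.normal(0.0, 1.0, size=50)      # difficulty
c = rng.uniform(0.1, 0.25, size=50)    # pseudo-guessing

def fisher_information(theta, a, b, c):
    """Item information for the 3PL model."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    q = 1 - p
    return a**2 * (q / p) * ((p - c) / (1 - c)) ** 2

theta_hat = 0.3                         # current provisional ability estimate
administered = {4, 17}                  # items already given

info = fisher_information(theta_hat, a, b, c)
info[list(administered)] = -np.inf      # exclude used items
next_item = int(np.argmax(info))
print(f"next item: {next_item} (information {info[next_item]:.2f})")
```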

  20. Models of resource allocation optimization when solving the control problems in organizational systems

    NASA Astrophysics Data System (ADS)

    Menshikh, V.; Samorokovskiy, A.; Avsentev, O.

    2018-03-01

    A mathematical model is presented for optimizing the allocation of resources so as to reduce the time needed for management decisions, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems in order to reduce the total execution time of a job is solved. This problem is a complex three-level combinatorial problem whose solution requires addressing several specific sub-problems: estimating the duration of each action as a function of the number of performers in the group that performs it; estimating the total execution time of all actions as a function of the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers among the groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general problem of resource allocation are proposed.
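    The third sub-problem, distributing a fixed pool of performers among groups to minimize total execution time, can be illustrated with a small brute-force search. The assumption that each action's duration equals its work divided by the number of performers assigned to it (perfect speed-up, actions executed sequentially) is ours for illustration only; the paper's duration model may be more elaborate.

```python
# Toy illustration of the third sub-problem: distribute a fixed pool of
# performers among the groups so that the total execution time is minimal.
# The duration model work_i / n_i is an assumption made purely for illustration.
from itertools import product

work = [12.0, 8.0, 20.0]        # effort of each action (person-hours), hypothetical
total_performers = 9            # size of the resource pool

def total_time(allocation):
    """Total execution time if the actions are performed one after another."""
    return sum(w / n for w, n in zip(work, allocation))

best = None
for alloc in product(range(1, total_performers + 1), repeat=len(work)):
    if sum(alloc) != total_performers:
        continue                 # only allocations that use the whole pool
    t = total_time(alloc)
    if best is None or t < best[1]:
        best = (alloc, t)

print(f"best allocation {best[0]} -> total time {best[1]:.2f} h")
```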

  1. The reliability and validity of flight task workload ratings

    NASA Technical Reports Server (NTRS)

    Childress, M. E.; Hart, S. G.; Bortolussi, M. R.

    1982-01-01

    Twelve instrument-rated general aviation pilots each flew two scenarios in a motion-base simulator. During each flight, the pilots verbally estimated their workload every three minutes. Following each flight, they again estimated workload for each flight segment and also rated their overall workload, perceived performance, and 13 specific factors on a bipolar scale. The results indicate that time (a priori, inflight, or postflight) of eliciting ratings, period to be covered by the ratings (a specific moment in time or a longer period), type of rating scale, and rating method (verbal, written, or other) may be important variables. Overall workload ratings appear to be predicted by different specific scales depending upon the situation, with activity level the best predictor. Perceived performance seems to bear little relationship to observer-rated performance when pilots rate their overall performance and an observer rates specific behaviors. Perceived workload and performance also seem unrelated.

  2. School Performance: A Matter of Health or Socio-Economic Background? Findings from the PIAMA Birth Cohort Study

    PubMed Central

    Ruijsbroek, Annemarie; Wijga, Alet H.; Gehring, Ulrike; Kerkhof, Marjan; Droomers, Mariël

    2015-01-01

    Background: Performance in primary school is a determinant of children’s educational attainment and their socio-economic position and health inequalities in adulthood. We examined the relationships of five common childhood health conditions (asthma symptoms, eczema, general health, frequent respiratory infections, and overweight), health-related school absence and family socio-economic status with children’s school performance. Methods: We used data from 1,865 children in the Dutch PIAMA birth cohort study. School performance was measured as the teacher’s assessment of a suitable secondary school level for the child, and the child’s score on a standardized achievement test (Cito Test). Both school performance indicators were standardised using Z-scores. Childhood health was indicated by eczema, asthma symptoms, general health, frequent respiratory infections, overweight, and health-related school absence. Children’s health conditions were reported repeatedly between the ages of one and eleven. School absenteeism was reported at age eleven. The highest attained educational level of the mother and father indicated family socio-economic status. We used linear regression models with heteroskedasticity-robust standard errors for our analyses, with adjustment for sex of the child. Results: The health indicators used in our study were not associated with children’s school performance independently of parental educational level, with the exception of asthma symptoms (-0.03 z-score / -0.04 z-score with Cito Test score after adjusting for maternal and paternal education, respectively) and missing more than 5 schooldays due to illness (-0.18 z-score with Cito Test score and -0.17 z-score with school level assessment after adjustment for paternal education). The effect estimates for these health indicators were, however, much smaller than the effect estimates for parental education, which was strongly associated with children’s school performance. Conclusion: Children’s school performance was affected only slightly by a number of common childhood health problems, but was strongly associated with parental education. PMID:26247468

  3. School Performance: A Matter of Health or Socio-Economic Background? Findings from the PIAMA Birth Cohort Study.

    PubMed

    Ruijsbroek, Annemarie; Wijga, Alet H; Gehring, Ulrike; Kerkhof, Marjan; Droomers, Mariël

    2015-01-01

    Performance in primary school is a determinant of children's educational attainment and their socio-economic position and health inequalities in adulthood. We examined the relationships of five common childhood health conditions (asthma symptoms, eczema, general health, frequent respiratory infections, and overweight), health-related school absence and family socio-economic status with children's school performance. We used data from 1,865 children in the Dutch PIAMA birth cohort study. School performance was measured as the teacher's assessment of a suitable secondary school level for the child, and the child's score on a standardized achievement test (Cito Test). Both school performance indicators were standardised using Z-scores. Childhood health was indicated by eczema, asthma symptoms, general health, frequent respiratory infections, overweight, and health-related school absence. Children's health conditions were reported repeatedly between the ages of one and eleven. School absenteeism was reported at age eleven. The highest attained educational level of the mother and father indicated family socio-economic status. We used linear regression models with heteroskedasticity-robust standard errors for our analyses, with adjustment for sex of the child. The health indicators used in our study were not associated with children's school performance independently of parental educational level, with the exception of asthma symptoms (-0.03 z-score / -0.04 z-score with Cito Test score after adjusting for maternal and paternal education, respectively) and missing more than 5 schooldays due to illness (-0.18 z-score with Cito Test score and -0.17 z-score with school level assessment after adjustment for paternal education). The effect estimates for these health indicators were, however, much smaller than the effect estimates for parental education, which was strongly associated with children's school performance. Children's school performance was affected only slightly by a number of common childhood health problems, but was strongly associated with parental education.

  4. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as the state-tying level hybrid method; for the acoustic model adaptation, however, the triphone acoustic models are re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods achieve relative reductions in average word error rate (WER) of 17.1% and 22.1%, respectively, for non-native speech when compared to a baseline ASR system.

  5. The ultimate quantum limits on the accuracy of measurements

    NASA Technical Reports Server (NTRS)

    Yuen, Horace P.

    1992-01-01

    A quantum generalization of rate-distortion theory from standard communication and information theory is developed for application to determining the ultimate performance limit of measurement systems in physics. For the estimation of a real or a phase parameter, it is shown that the root-mean-square error obtained in a measurement with a single-mode photon level N cannot do better than approximately 1/N, while approximately exp(-N) may be obtained for multi-mode fields with the same photon level N. Possible ways to achieve the remarkable exponential performance are indicated.

  6. Biochemical validation of food frequency questionnaire-estimated carotenoid, alpha-tocopherol, and folate intakes among African Americans and non-Hispanic Whites in the Southern Community Cohort Study.

    PubMed

    Signorello, Lisa B; Buchowski, Maciej S; Cai, Qiuyin; Munro, Heather M; Hargreaves, Margaret K; Blot, William J

    2010-02-15

    Few food frequency questionnaires (FFQs) have been developed specifically for use among African Americans, and reports of FFQ performance among African Americans or low-income groups assessed using biochemical indicators are scarce. The authors conducted a validation study within the Southern Community Cohort Study to evaluate FFQ-estimated intakes of alpha-carotene, beta-carotene, beta-cryptoxanthin, lutein/zeaxanthin, lycopene, folate, and alpha-tocopherol in relation to blood levels of these nutrients. Included were 255 nonsmoking participants (125 African Americans, 130 non-Hispanic whites) who provided a blood sample at the time of study enrollment and FFQ administration in 2002-2004. Levels of biochemical indicators of each micronutrient (alpha-tocopherol among women only) significantly increased with increasing FFQ-estimated intake (adjusted correlation coefficients: alpha-carotene, 0.35; beta-carotene, 0.28; beta-cryptoxanthin, 0.35; lutein/zeaxanthin, 0.28; lycopene, 0.15; folate, 0.26; alpha-tocopherol, 0.26 among women; all P's < 0.05). Subjects in the top decile of FFQ intake had blood levels that were 27% (lycopene) to 178% (beta-cryptoxanthin) higher than those of subjects in the lowest decile. Satisfactory FFQ performance was noted even for participants with less than a high school education. Some variation was noted in the FFQ's ability to predict blood levels for subgroups defined by race, sex, and other characteristics, but overall the Southern Community Cohort Study FFQ appears to generate useful dietary exposure rankings in the cohort.

  7. Examination of universal purchase programs as a driver of vaccine uptake among US States, 1995-2014.

    PubMed

    Mulligan, Karen; Snider, Julia Thornton; Arthur, Phyllis; Frank, Gregory; Tebeka, Mahlet; Walker, Amy; Abrevaya, Jason

    2018-06-01

    Immunization against numerous potentially life-threatening illnesses has been a great public health achievement. In the United States, the Vaccines for Children (VFC) program has provided vaccines to uninsured and underinsured children since the early 1990s, increasing vaccination rates. In recent years, some states have adopted Universal Purchase (UP) programs with the stated aim of further increasing vaccination rates. Under UP programs, states also purchase vaccines for privately-insured children at federally-contracted VFC prices and bill private health insurers for the vaccines through assessments. In this study, we estimated the effect of UP adoption in a state on children's vaccination rates using state-level and individual-level data from the 1995-2014 National Immunization Survey. For the state-level analysis, we performed ordinary least squares regression to estimate the state's vaccination rate as a function of whether the state had UP in the given year, state demographic characteristics, other vaccination policies, state fixed effects, and a time trend. For the individual analysis, we performed logistic regression to estimate a child's likelihood of being vaccinated as a function of whether the state had UP in the given year, the child's demographic characteristics, state characteristics and vaccine policies, state fixed effects, and a time trend. We performed separate regressions for each of nine recommended vaccines, as well as composite measures of whether a child was up-to-date on all required vaccines. In both the state-level and individual-level analyses, we found UP had no significant (p < 0.10) effect on any of the vaccines or composite measures in our base case specifications. Results were similar in alternative specifications. We hypothesize that UP was ineffective in increasing vaccination rates. Policymakers seeking to increase vaccination rates would do well to consider other policies such as addressing provider practice issues and vaccine hesitancy. Copyright © 2018. Published by Elsevier Ltd.

  8. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. The authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise for code characterization and empirical/analytical modeling.
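    The abstract refers to formulas for CPI_0, the CPI without memory effect. A generic, textbook-style version of that decomposition splits overall CPI into a memory-free component plus stall cycles from cache misses; the instruction mix, latencies and miss statistics below are invented, and the paper's own formulas are likely more detailed.

```python
# Generic sketch of the kind of decomposition the abstract refers to: overall
# CPI split into a memory-free component (CPI_0) and memory stall cycles.
# All numbers below are hypothetical and purely illustrative.
instruction_mix = {              # fraction of dynamic instructions, base cycles each
    "integer":    (0.50, 1.0),
    "load_store": (0.30, 1.0),
    "branch":     (0.15, 1.5),   # includes average misprediction cost
    "fp_simd":    (0.05, 2.0),
}

# CPI_0: cycles per instruction assuming a perfect memory hierarchy.
cpi0 = sum(frac * cycles for frac, cycles in instruction_mix.values())

# Memory contribution: misses per instruction times the miss penalty (cycles).
misses_per_instr = 0.02
miss_penalty = 60.0
cpi = cpi0 + misses_per_instr * miss_penalty

print(f"CPI_0 = {cpi0:.2f}, CPI with memory stalls = {cpi:.2f}")
```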

  9. Objective assessment of operator performance during ultrasound-guided procedures.

    PubMed

    Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R

    2011-09-01

    Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to monitor how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase according to operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to maintain the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.

  10. Low-mode internal tides and balanced dynamics disentanglement in altimetric observations: Synergy with surface density observations

    NASA Astrophysics Data System (ADS)

    Ponte, Aurélien L.; Klein, Patrice; Dunphy, Michael; Le Gentil, Sylvie

    2017-03-01

    The performance of a tentative method that disentangles the contribution of a low-mode internal tide to sea level from that of the balanced mesoscale eddies is examined using an idealized high resolution numerical simulation. This disentanglement is essential for proper estimation, from sea level, of the ocean circulation related to balanced motions. The method relies on an independent observation of the sea surface water density whose variations (1) are dominated by the balanced dynamics and (2) correlate with variations of potential vorticity at depth for the chosen regime of surface-intensified turbulence. The surface density therefore leads, via potential vorticity inversion, to an estimate of the balanced contribution to sea level fluctuations. The difference between the instantaneous sea level (presumably observed with altimetry) and the balanced estimate compares moderately well with the contribution from the low-mode tide. Application to realistic configurations remains to be tested. These results aim at motivating further developments of reconstruction methods of the ocean dynamics based on potential vorticity dynamics arguments. In that context, they are particularly relevant for the upcoming wide-swath high resolution altimetric missions (SWOT).

  11. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel in the image frame at a given time instant. The gradient-based approach to optical flow computation performs poorly when the video motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramid multi-resolution coarse-to-fine search strategy. The pyramid strategy produces multi-resolution images; an iterative relationship from the highest level to the lowest level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method has good performance in global motion estimation.
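    A coarse-to-fine (pyramidal) optical flow pipeline of this kind can be sketched with OpenCV's pyramidal Lucas-Kanade tracker followed by a per-frame affine fit. This follows the general idea described above rather than the authors' implementation; "shaky.mp4" and all parameter values are hypothetical.

```python
# Sketch of a pyramidal coarse-to-fine optical flow pipeline for digital image
# stabilization: track features with pyramidal Lucas-Kanade, fit an inter-frame
# affine transform, and compensate each frame back to the first frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky.mp4")       # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
cumulative = np.eye(3)                    # accumulated transform back to frame 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Multi-resolution (pyramidal) Lucas-Kanade tracking of corner features.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                             winSize=(21, 21), maxLevel=4)
    good_prev = p0[status.ravel() == 1]
    good_next = p1[status.ravel() == 1]

    # Inter-frame affine parameters (current frame -> previous frame).
    affine, _ = cv2.estimateAffinePartial2D(good_next, good_prev)

    # Chain transforms so the current frame is compensated back to frame 0.
    cumulative = cumulative @ np.vstack([affine, [0, 0, 1]])
    stabilized = cv2.warpAffine(frame, cumulative[:2],
                                (frame.shape[1], frame.shape[0]))
    # ... write or display `stabilized` here ...

    prev_gray = gray
cap.release()
```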

  12. Estimate of radiation damage to low-level electronics of the RF system in the LHC cavities arising from beam gas collisions.

    PubMed

    Butterworth, A; Ferrari, A; Tsoulou, E; Vlachoudis, V; Wijnands, T

    2005-01-01

    Monte Carlo simulations have been performed to estimate the radiation damage induced by high-energy hadrons in the digital electronics of the RF low-level systems in the LHC cavities. High-energy hadrons are generated when the proton beams interact with the residual gas. The contributions from various elements (vacuum chambers, cryogenic cavities, wideband pickups and cryomodule beam tubes) have been considered individually, with each contribution depending on the gas composition and density. The probability of displacement damage and single event effects (mainly single event upsets) is derived for the LHC start-up conditions.

  13. Levels and Types of Alcohol Biomarkers in DUI and Clinic Samples for Estimating Workplace Alcohol Problems

    PubMed Central

    Marques, Paul R

    2013-01-01

    Widespread concern about illicit drugs as an aspect of workplace performance potentially diminishes attention on employee alcohol use. Alcohol is the dominant drug contributing to poor job performance; it also accounts for a third of the worldwide public health burden. Evidence from public roadways – a workplace for many – provides an example for work-related risk exposure and performance lapses. In most developed countries, alcohol is involved in 20-35% of fatal crashes; drugs other than alcohol are less prominently involved in fatalities. Alcohol biomarkers can improve detection by extending the timeframe for estimating problematic exposure levels and thereby provide better information for managers. But what levels and which markers are right for the workplace? In this report, an established high-sensitivity proxy for alcohol-driving risk proclivity is used: an average 8 months of failed blood alcohol concentration (BAC) breath tests from alcohol ignition interlock devices. Higher BAC test fail rates are known to presage higher rates of future impaired-driving convictions (DUI). Drivers in alcohol interlock programs log 5-7 daily BAC tests; in 12 months, this yields thousands of samples. Also, higher program entry levels of alcohol biomarkers predict a higher likelihood of failed interlock BAC tests during subsequent months. This report summarizes selected biomarkers’ potential for workplace screening. Markers include phosphatidylethanol (PEth), percent carbohydrate deficient transferrin (%CDT), gammaglutamyltransferase (GGT), gamma %CDT (γ%CDT), and ethylglucuronide (EtG) in hair. Clinical cutoff levels and median/mean levels of these markers in abstinent people, the general population, DUI drivers, and rehabilitation clinics are summarized for context. PMID:22311827

  14. Frequency synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Polydoros, A.; Simon, M. K.

    1981-01-01

    This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.

  15. Estimation of Energy Expenditure for Wheelchair Users Using a Physical Activity Monitoring System.

    PubMed

    Hiremath, Shivayogi V; Intille, Stephen S; Kelleher, Annmarie; Cooper, Rory A; Ding, Dan

    2016-07-01

    To develop and evaluate energy expenditure (EE) estimation models for a physical activity monitoring system (PAMS) in manual wheelchair users with spinal cord injury (SCI). Cross-sectional study. University-based laboratory environment, a semistructured environment at the National Veterans Wheelchair Games, and the participants' home environments. Volunteer sample of manual wheelchair users with SCI (N=45). Participants were asked to perform 10 physical activities (PAs) of various intensities from a list. The PAMS consists of a gyroscope-based wheel rotation monitor (G-WRM) and an accelerometer device worn on the upper arm or on the wrist. Criterion EE using a portable metabolic cart and raw sensor data from PAMS were collected during each of these activities. Estimated EE using custom models for manual wheelchair users based on either the G-WRM and arm accelerometer (PAMS-Arm) or the G-WRM and wrist accelerometer (PAMS-Wrist). EE estimation performance for the PAMS-Arm (average error ± SD: -9.82%±37.03%) and PAMS-Wrist (-5.65%±32.61%) on the validation dataset indicated that both PAMS-Arm and PAMS-Wrist were able to estimate EE for a range of PAs with <10% error. Moderate to high intraclass correlation coefficients (ICCs) indicated that the EE estimated by PAMS-Arm (ICC3,1=.82, P<.05) and PAMS-Wrist (ICC3,1=.89, P<.05) are consistent with the criterion EE. Availability of PA monitors can assist wheelchair users to track PA levels, leading toward a healthier lifestyle. The new models we developed can estimate PA levels in manual wheelchair users with SCI in laboratory and community settings. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  16. Population genetics of autopolyploids under a mixed mating model and the estimation of selfing rate.

    PubMed

    Hardy, Olivier J

    2016-01-01

    Nowadays, the population genetics analysis of autopolyploid species faces many difficulties due to (i) limited development of population genetics tools under polysomic inheritance, (ii) difficulties to assess allelic dosage when genotyping individuals and (iii) a form of inbreeding resulting from the mechanism of 'double reduction'. Consequently, few data analysis computer programs are applicable to autopolyploids. To contribute bridging this gap, this article first derives theoretical expectations for the inbreeding and identity disequilibrium coefficients under polysomic inheritance in a mixed mating model. Moment estimators of these coefficients are proposed when exact genotypes or just markers phenotypes (i.e. allelic dosage unknown) are available. This led to the development of estimators of the selfing rate based on adult genotypes or phenotypes and applicable to any even-ploidy level. Their statistical performances and robustness were assessed by numerical simulations. Contrary to inbreeding-based estimators, the identity disequilibrium-based estimator using phenotypes is robust (absolute bias generally < 0.05), even in the presence of double reduction, null alleles or biparental inbreeding due to isolation by distance. A fairly good precision of the selfing rate estimates (root mean squared error < 0.1) is already achievable using a sample of 30-50 individuals phenotyped at 10 loci bearing 5-10 alleles each, conditions reachable using microsatellite markers. Diallelic markers (e.g. SNP) can also perform satisfactorily in diploids and tetraploids but more polymorphic markers are necessary for higher ploidy levels. The method is implemented in the software SPAGeDi and should contribute to reduce the lack of population genetics tools applicable to autopolyploids. © 2015 John Wiley & Sons Ltd.

  17. An Empirical Method for Deriving Grade Equivalence for University Entrance Qualifications: An Application to A Levels and the International Baccalaureate

    ERIC Educational Resources Information Center

    Green, Francis; Vignoles, Anna

    2012-01-01

    We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…

  18. Solar energy system performance evaluation: Seasonal report for Colt Yosemite, Yosemite National Park, California

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The system's operational performance from May 1979 through April 1980 is described. Solar energy satisfied 23 percent of the total performance load, which was significantly below the design value of 56 percent. A fossil savings of 80.89 million Btu's or 578 gallons of fuel oil is estimated. If uncontrolled losses could have been reduced to an inconsequential level, the system's efficiency would have been improved considerably.

  19. Characterization and Performance Evaluation of an HPXe Detector for Nuclear Explosion Monitoring Applications

    DTIC Science & Technology

    2007-09-01

    performance of the detector, and to compare the performance with sodium iodide and germanium detectors. Monte Carlo (MCNP) simulation was used to...aluminum ~50% more efficient), and to estimate optimum shield dimensions for an HPXe based nuclear explosion monitor. MCNP modeling was also used to...detector were calculated with MCNP by using input activity levels as measured in routine NEM runs at Pacific Northwest National Laboratory (PNNL).

  20. HRV based health&sport markers using video from the face.

    PubMed

    Capdevila, Lluis; Moreno, Jordi; Movellan, Javier; Parrado, Eva; Ramos-Castro, Juan

    2012-01-01

    Heart Rate Variability (HRV) is an indicator of health status in the general population and of adaptation to stress in athletes. In this paper we compare the performance of two systems to measure HRV: (1) a commercial system based on recording the physiological cardiac signal, and (2) a computer vision system that uses standard video images of the face to estimate RR intervals from changes in skin color. We show that the computer vision system performs surprisingly well. It estimates individual RR intervals in a non-invasive manner and with error levels comparable to those achieved by the physiology-based system.

  1. Simulated performance of an order statistic threshold strategy for detection of narrowband signals

    NASA Technical Reports Server (NTRS)

    Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.

    1988-01-01

    The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
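    The core idea, estimating the detection threshold from a rank (order statistic) of the data rather than from the mean level, can be shown in a few lines. The simulated spectrum, the chosen rank and the threshold factor below are illustrative and are not the parameters used in the study.

```python
# Minimal sketch of an order-statistic threshold for narrowband detection in a
# power spectrum: the noise level is estimated from a chosen rank (here the
# median) of the spectral bins rather than from their mean, so strong
# narrowband outliers barely perturb the threshold. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

# Simulated power spectrum: exponential (chi-squared, 2 dof) noise plus two tones.
spectrum = rng.exponential(scale=1.0, size=4096)
spectrum[[700, 2900]] += 25.0

# Order-statistic noise estimate: the k-th smallest bin (median of the spectrum).
k = spectrum.size // 2
noise_est = np.sort(spectrum)[k]           # robust to the strong outliers

# Mean-level estimate for comparison (pulled upward by the tones).
mean_est = spectrum.mean()

threshold = 15.0 * noise_est               # detection factor chosen for illustration
detections = np.flatnonzero(spectrum > threshold)
print(f"order-statistic noise estimate {noise_est:.2f} vs mean {mean_est:.2f}")
print(f"detected bins: {detections}")
```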

  2. Bone-specific alkaline phosphatase - a potential biomarker for skeletal growth assessment.

    PubMed

    Tripathi, Tulika; Gupta, Prateek; Sharma, Jitender; Rai, Priyank; Gupta, Vinod Kumar; Singh, Navneet

    2018-03-01

    The present study aimed to assess levels of serum bone-specific alkaline phosphatase (BALP) and serum insulin-like growth factor-1 (IGF-1) and to compare them with cervical vertebral maturation index (CVMI) stages. Cross-sectional study. Maulana Azad Institute of Dental Sciences, New Delhi, India. 150 subjects (75 males and 75 females) in the age group of 8-20 years. Subjects were divided into six CVMI stages. An enzyme-linked immunosorbent assay was performed for the estimation of serum BALP and serum IGF-1 levels. The Mann-Whitney U test was performed to compare mean ranks of serum BALP and serum IGF-1 across the different CVMI stages. Spearman correlation between serum BALP and serum IGF-1 was computed across the 6 CVMI stages. Peak serum IGF-1 levels were found at CVMI stages 4 and 3 for males and females, respectively. Peak levels of serum BALP were found at stage 3 for both genders, with significant differences from the other stages. A statistically significant correlation was seen between serum IGF-1 and serum BALP from CVMI stages 1 to 3 and 4 to 6 (p < .01). BALP showed promising results and can be employed as a potential biomarker for the estimation of growth status.

  3. Relationships between maximal anaerobic power of the arms and legs and javelin performance.

    PubMed

    Bouhlel, E; Chelly, M S; Tabka, Z; Shephard, R

    2007-06-01

    The aim of this study was to examine relationships between maximal anaerobic power, as measured by leg and arm force-velocity tests, estimates of local muscle volume, and javelin performance. Ten trained national-level male javelin throwers (mean age 19.6 ± 2 years) participated in this study. Maximal anaerobic power, maximal force and maximal velocity were measured during leg (Wmax-L) and arm (Wmax-A) force-velocity tests, performed on appropriately modified Monark cycle ergometers. Estimates of leg and arm muscle volume were made using a standard anthropometric kit. Maximal force of the leg (Fmax-L) was significantly correlated with estimated leg muscle volume (r=0.71, P<0.05). Wmax-L and Wmax-A were both significantly correlated with javelin performance (r=0.76, P<0.01; r=0.71, P<0.05, respectively). Maximal velocity of the leg (Vmax-L) was also significantly correlated with throwing performance (r=0.83; P<0.001). Wmax of both the legs and the arms was significantly correlated with javelin performance, the closest correlation being for Wmax-L; this emphasizes the importance of the leg muscles in this sport. Fmax-L and Vmax-L were related to muscle volume and to javelin performance, respectively. Force-velocity testing may have value in regulating conditioning and rehabilitation in sports involving throwing.

  4. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies.

    PubMed

    Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.

  5. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies

    PubMed Central

    Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C.

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study. PMID:29772007

  6. Quantification of Campylobacter jejuni contamination on chicken carcasses in France.

    PubMed

    Duqué, Benjamin; Daviaud, Samuel; Guillou, Sandrine; Haddad, Nabila; Membré, Jeanne-Marie

    2018-04-01

    Highly prevalent in poultry, Campylobacter is a foodborne pathogen that remains the primary cause of enteritis in humans. Several studies have determined the prevalence and contamination level of this pathogen throughout the food chain. However, this is generally done in a deterministic way, without considering the heterogeneity of the contamination level. The purpose of this study was to quantify, using probabilistic tools, the contamination level of Campylobacter spp. on chicken carcasses after the air-chilling step in several slaughterhouses in France. From a dataset (530 data points) containing censored data (concentration <10 CFU/g), several factors were considered, including the month of sampling, the farming method (standard vs certified) and the sampling area (neck vs leg). All probabilistic analyses were performed in R using the fitdistrplus, mc2d and NADA packages. The uncertainty (i.e. error) generated by the presence of censored data was small (about 1 log10) in comparison to the variability (i.e. heterogeneity) of the contamination level (3 log10 or more), strengthening the probabilistic analysis and facilitating result interpretation. The sampling period and sampling area (neck/leg) had a significant effect on Campylobacter contamination level. More precisely, two "seasons" were distinguished: one from January to May, another from June to December. During the June-to-December season, the mean Campylobacter concentration was estimated at 2.6 [2.4; 2.8] log10 (CFU/g) and 1.8 [1.5; 2.0] log10 (CFU/g) for neck and leg, respectively. The probability of exceeding 1000 CFU/g (the upper limit of the European microbiological criterion) was estimated at 35.3% and 12.6% for neck and leg, respectively. In contrast, during the January-to-May season, the mean contamination level was estimated at 1.0 [0.6; 1.3] log10 (CFU/g) and 0.6 [0.3; 0.9] log10 (CFU/g) for neck and leg, respectively. The probability of exceeding 1000 CFU/g was estimated at 13.5% and 2.0% for neck and leg, respectively. An accurate quantification of the contamination level enables producers to better adapt their processing and hygiene practices. These results will also help in refining exposure assessment models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  8. Modeling the Psychometric Properties of Complex Performance Assessment Tasks Using Confirmatory Factor Analysis: A Multistage Model for Calibrating Tasks

    ERIC Educational Resources Information Center

    Kahraman, Nilufer; De Champlain, Andre; Raymond, Mark

    2012-01-01

    Item-level information, such as difficulty and discrimination are invaluable to the test assembly, equating, and scoring practices. Estimating these parameters within the context of large-scale performance assessments is often hindered by the use of unbalanced designs for assigning examinees to tasks and raters because such designs result in very…

  9. Regional groundwater characteristics and hydraulic conductivity based on geological units in Korean peninsula

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Suk, H.

    2011-12-01

    In this study, about 2,000 deep observation wells, the stream and river distribution, and river density were analyzed to identify regional groundwater flow trends, based on the regional groundwater survey of four major river watersheds in Korea: the Geum, Han, Youngsan-Seomjin, and Nakdong rivers. Hydrogeological data were collected to analyze regional groundwater flow characteristics according to geological units. Additionally, hydrological soil type data were collected to estimate direct runoff through the SCS-CN method. Temperature and precipitation data were used to quantify the infiltration rate; they were also used to quantify evaporation by the Thornthwaite method and to evaluate groundwater recharge. Understanding regional groundwater characteristics requires a database of groundwater flow parameters, but most hydrogeological records contain only limited information, such as groundwater level and well configuration. In this study, therefore, groundwater flow parameters such as hydraulic conductivities or transmissivities were estimated from observed groundwater levels by an inverse model, namely PEST (Non-linear Parameter ESTimation). Since groundwater modeling studies involve uncertainties in data collection, conceptualization, and model results, model calibration should be performed. The calibration may be performed manually by changing parameters step by step, or various parameters may be changed simultaneously by an automatic procedure using the PEST program; in this study, both manual and automatic procedures were employed to calibrate and estimate hydraulic parameter distributions. In summary, regional groundwater survey data obtained from the four major river watersheds and various hydrological, meteorological, geological, soil, and topographic data in Korea were used to estimate hydraulic conductivities with the PEST program. In particular, to estimate hydraulic conductivity effectively, it is important to group areas with the same or similar hydrogeological characteristics into zones. Keywords: regional groundwater, database, hydraulic conductivity, PEST, Korean peninsula. Acknowledgements: This work was supported by the Radioactive Waste Management program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (2011T100200152).
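    The automatic calibration step can be illustrated with a toy analogue: fit zone hydraulic conductivities of a simple one-dimensional steady-state aquifer model to observed heads by nonlinear least squares. This is a self-contained sketch of the inverse-estimation idea, not PEST itself, and all geometry, heads and conductivities are invented.

```python
# Toy analogue of the PEST-style inverse estimation mentioned above: hydraulic
# conductivities of two zones in a simple 1-D steady-state aquifer model are
# calibrated against observed heads by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 500.0, 500.0          # zone lengths (m)
h_in, h_out = 100.0, 90.0      # fixed boundary heads (m)
x_obs = np.array([100.0, 300.0, 600.0, 800.0])   # observation well positions (m)

def heads(log10_k, x):
    """Steady 1-D Darcy heads for two conductivity zones in series."""
    k1, k2 = 10.0 ** np.asarray(log10_k)
    q = (h_in - h_out) / (L1 / k1 + L2 / k2)      # flux per unit area
    h_zone1 = h_in - q * x / k1
    h_zone2 = h_in - q * L1 / k1 - q * (x - L1) / k2
    return np.where(x <= L1, h_zone1, h_zone2)

# Synthetic "observed" heads from a known truth, plus measurement noise.
rng = np.random.default_rng(3)
true_log10_k = np.log10([5.0, 0.5])               # m/day
h_obs = heads(true_log10_k, x_obs) + rng.normal(0, 0.05, x_obs.size)

# Calibrate log10(K) by minimizing head residuals (PEST does this, with far
# more sophistication, via regularized Gauss-Levenberg-Marquardt).
fit = least_squares(lambda p: heads(p, x_obs) - h_obs, x0=[0.0, 0.0])
print("estimated K (m/day):", np.round(10.0 ** fit.x, 2))
```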

  10. Hotspot and sampling analysis for effective maintenance and performance monitoring.

    DOT National Transportation Integrated Search

    2017-05-01

    In this project, we propose two sampling methods addressing how much and where the agencies need to collect infrastructure condition data for accurate Level-of-Maintenance (LOM) estimation in maintenance networks with single type or multiple ty...

  11. Road weather management performance measures : 2012 update.

    DOT National Transportation Integrated Search

    1997-01-01

    The goal of the cost analysis of the ITS National Architecture program is twofold. First, the evaluation is to produce a high-level estimate of the expenditures associated with implementing the physical elements and the functional capabilities of ITS...

  12. Sea Surface Temperature Products and Research Associated with GHRSST

    NASA Astrophysics Data System (ADS)

    Kaiser-Weiss, Andrea K.; Minnett, Peter J.; Kaplan, Alexey; Wick, Gary A.; Castro, Sandra; Llewellyn-Jones, David; Merchant, Chris; LeBorgne, Pierre; Beggs, Helen; Donlon, Craig J.

    2012-03-01

    GHRSST serves its user community through the specification of operational Sea Surface Temperature (SST) products (Level 2, Level 3 and Level 4) based on international consensus. Providers of SST data from individual satellites create and deliver GHRSST-compliant near-real time products to a global GHRSST data assembly centre and a long-term stewardship facility. The GHRSST-compliant data include error estimates and supporting data for interpretation. Groups organised within GHRSST perform research on issues relevant to applying SST for air-sea exchange, for instance the Diurnal Variability Working Group (DVWG) analyses the evolution of the skin temperature. Other GHRSST groups concentrate on improving the SST estimate (Estimation and Retrievals Working Group EARWiG) and on improving the error characterization, (Satellite SST Validation Group, ST-VAL) and on improving the methods for SST analysis (Inter-Comparison Technical Advisory Group, IC-TAG). In this presentation we cover the data products and the scientific activities associated with GHRSST which might be relevant for investigating ocean-atmosphere interactions.

  13. Uncertainty analysis for low-level radioactive waste disposal performance assessment at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, D.W.; Yambert, M.W.; Kocher, D.C.

    1994-12-31

    A performance assessment of the operating Solid Waste Storage Area 6 (SWSA 6) facility for the disposal of low-level radioactive waste at the Oak Ridge National Laboratory has been prepared to provide the technical basis for demonstrating compliance with the performance objectives of DOE Order 5820.2A, Chapter III. An analysis of the uncertainty incorporated into the assessment was performed which addressed the quantitative uncertainty in the data used by the models, the subjective uncertainty associated with the models used for assessing performance of the disposal facility and site, and the uncertainty in the models used for estimating dose and human exposure. The results of the uncertainty analysis were used to interpret results and to formulate conclusions about the performance assessment. This paper discusses the approach taken in analyzing the uncertainty in the performance assessment and the role of uncertainty in performance assessment.

  14. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.

    PubMed

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-11-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
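    A minimal version of the ingredient-matching linear program can be written with scipy: choose non-negative ingredient amounts per 100 g whose implied nutrient totals match the label as closely as possible, then read off the contribution of caloric sweeteners. The ingredient list, compositions and label values below are assumed for illustration and are not the study's database values.

```python
# Minimal sketch of the ingredient-matching linear program: pick non-negative
# ingredient amounts (per 100 g of product) whose implied nutrient totals match
# the nutrition facts label as closely as possible, then read off how much
# comes from caloric sweeteners. All compositions and label values are assumed.
import numpy as np
from scipy.optimize import linprog

ingredients = ["water", "apple juice concentrate", "sugar", "citric acid"]
# Nutrients contributed per gram of ingredient: [kcal, carbohydrate g, total sugar g]
comp = np.array([
    [0.00, 0.00, 0.00],   # water
    [1.66, 0.41, 0.40],   # apple juice concentrate (assumed composition)
    [3.87, 1.00, 1.00],   # sucrose
    [0.99, 0.00, 0.00],   # citric acid
]).T                      # shape (3 nutrients, 4 ingredients)

label = np.array([61.0, 15.25, 15.0])    # NFL values per 100 g (assumed)

n_ing, n_nut = comp.shape[1], comp.shape[0]
# Variables: [x_1..x_4, d_plus_1..3, d_minus_1..3]; minimize total |deviation|.
c = np.concatenate([np.zeros(n_ing), np.ones(2 * n_nut)])
A_eq = np.hstack([comp, -np.eye(n_nut), np.eye(n_nut)])            # comp@x - d+ + d- = label
A_eq = np.vstack([A_eq, np.concatenate([np.ones(n_ing), np.zeros(2 * n_nut)])])
b_eq = np.concatenate([label, [100.0]])                            # amounts sum to 100 g

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(c), method="highs")
amounts = dict(zip(ingredients, np.round(res.x[:n_ing], 1)))
print("estimated grams per 100 g:", amounts)
print("estimated added sugar (from sucrose):", amounts["sugar"], "g/100 g")
```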

  15. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007–08

    PubMed Central

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-01-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007–08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications. PMID:26273127

  16. Shelter availability, stress level and digestive performance in the aspic viper.

    PubMed

    Bonnet, Xavier; Fizesan, Alain; Michel, Catherine Louise

    2013-03-01

    The lack of shelter can perturb behaviors, increase stress level and thus alter physiological performance (e.g. digestive, immune or reproductive functions). Although intuitive, such potential impacts of a lack of shelter remain poorly documented. We manipulated shelter availability and environmental and physiological variables (i.e. access to a heat source, predator attack, feeding status) in a viviparous snake, and assessed sun-basking behavior, digestive performance (i.e. digestive transit time, crude estimate of assimilation, regurgitation rate) and plasma corticosterone levels (a proxy of stress level). Shelter deprivation provoked a strong increase in sun-basking behavior and thus elevated body temperature, even in unfed individuals for which energy savings would have been otherwise beneficial. The lack of heat was detrimental to digestive performance; simulated predator attacks worsened the situation and entailed a further deterioration of digestion. The combination of the lack of shelter with cool ambient temperatures markedly elevated basal corticosterone level and was associated with low digestive performance. This hormonal effect was absent when only one negative factor was involved, suggesting a threshold response. Overall, our results revealed important non-linear cascading impacts of shelter availability on stress-hormone levels, behaviors and physiological performance. These results imply that shelter availability is important for laboratory studies, captive husbandry and possibly conservation plans.

  17. Survey of patulin occurrence in apple juice and apple products in Catalonia, Spain, and an estimate of dietary intake.

    PubMed

    Cano-Sancho, G; Marin, S; Ramos, A J; Sanchis, V

    2009-01-01

    This study was conducted to assess patulin exposure in the Catalonian population. Patulin levels were determined in 161 apple juice samples, 77 solid apple-based food samples and 146 apple-based baby food samples obtained from six hypermarkets and supermarkets from twelve main cities of Catalonia, Spain. Patulin was analysed by a well-established validated method involving ethyl acetate extraction and direct analysis by high-performance liquid chromatography (HPLC) with ultraviolet light detection. Mean patulin levels for positive samples in apple juice, solid apple-based food and apple-based baby food were 8.05, 13.54 and 7.12 µg kg⁻¹, respectively. No samples exceeded the maximum permitted levels established by European Union regulation. Dietary intake was separately assessed for babies, infants and adults through a Food Frequency Questionnaire developed from 1056 individuals from Catalonia. Babies were the main group exposed to patulin; however, no risk was detected at these levels of contamination. Adult and infant consumers were well below risk levels. Exposure was also estimated through Monte Carlo simulation, which distinguishes variability in exposures from uncertainty in the distributional parameter estimates.
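
    A minimal sketch of the Monte Carlo exposure idea mentioned above; the distributions, consumption figures and body weights below are hypothetical stand-ins rather than the survey's parameters, and the 0.4 µg/kg bw/day threshold is the JECFA provisional maximum tolerable daily intake for patulin used here only as a comparison point.

    ```python
    # Minimal Monte Carlo sketch of dietary patulin exposure (hypothetical
    # distributions, not the survey's actual parameters). Exposure variability
    # comes from sampling contamination and consumption; parameter uncertainty
    # could be layered on top by re-drawing the distribution parameters.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Patulin concentration in apple juice (ug/kg), lognormal around the
    # reported positive-sample mean of ~8 ug/kg (hypothetical spread).
    concentration = rng.lognormal(mean=np.log(8.0), sigma=0.6, size=n)

    # Daily apple-juice consumption (kg/day) and body weight (kg) for infants
    # (hypothetical distributions standing in for the food-frequency data).
    consumption = rng.gamma(shape=2.0, scale=0.05, size=n)      # ~0.1 kg/day
    body_weight = rng.normal(loc=12.0, scale=1.5, size=n).clip(6, None)

    exposure = concentration * consumption / body_weight         # ug/kg bw/day

    pmtdi = 0.4  # provisional maximum tolerable daily intake, ug/kg bw/day
    print(f"median exposure: {np.median(exposure):.4f} ug/kg bw/day")
    print(f"95th percentile: {np.percentile(exposure, 95):.4f} ug/kg bw/day")
    print(f"fraction above PMTDI: {np.mean(exposure > pmtdi):.4%}")
    ```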

  18. Reconstruction of the 3-D Dynamics From Surface Variables in a High-Resolution Simulation of North Atlantic

    NASA Astrophysics Data System (ADS)

    Fresnay, S.; Ponte, A. L.; Le Gentil, S.; Le Sommer, J.

    2018-03-01

    Several methods that reconstruct the three-dimensional ocean dynamics from sea level are presented and evaluated in the Gulf Stream region with a 1/60° realistic numerical simulation. The use of sea level is motivated by its better correlation with interior pressure or quasi-geostrophic potential vorticity (PV) compared to sea surface temperature and sea surface salinity, and by its observability via satellite altimetry. The simplest method of reconstruction relies on a linear estimation of pressure at depth from sea level. Another method consists of linearly estimating PV from sea level first and then performing a PV inversion. The last method considered, labeled SQG for surface quasi-geostrophy, relies on a PV inversion but assumes no PV anomalies. The first two methods show comparable skill at levels above -800 m. They moderately outperform SQG, which emphasizes the difficulty of estimating interior PV from surface variables. Over the 250-1,000 m depth range, the three methods skillfully reconstruct pressure at wavelengths between 500 and 200 km, whereas they exhibit a rapid loss of skill between 200 and 100 km wavelengths. Applicability to a real-case scenario and avenues for improvement are discussed.

  19. Sensitivity and Specificity Estimation for the Clinical Diagnosis of Highly Pathogenic Avian Influenza in the Egyptian Participatory Disease Surveillance Program.

    PubMed

    Verdugo, C; El Masry, I; Makonnen, Y; Hannah, H; Unger, F; Soliman, M; Galal, S; Lubroth, J; Grace, D

    2016-12-01

    Many developing countries lack sufficient resources to conduct animal disease surveillance. In recent years, participatory epidemiology has been used to increase the coverage and decrease the costs of surveillance. However, few diagnostic performance assessments have been carried out on participatory methods. The objective of the present study was to estimate the diagnostic performance of practitioners working for the Community-Based Animal Health and Outreach (CAHO) program, which is a participatory disease surveillance system for the detection of highly pathogenic avian influenza outbreaks in Egypt. CAHO practitioners' diagnostic assessment of inspected birds was compared with real-time reverse-transcriptase polymerase chain reaction (RRT-PCR) test results at the household level. Diagnostic performance was estimated directly from two-by-two tables using RRT-PCR as a reference test in two different scenarios. In the first scenario, only results from chickens were considered. In the second scenario, results for all poultry species were analyzed. Poultry flocks in 916 households located in 717 villages were inspected by CAHO practitioners, who collected 3458 bird samples. In the first scenario, CAHO practitioners yielded sensitivity (Se) and specificity (Sp) estimates of 40% (95% confidence interval [CI]: 21%-59%) and 92% (95% CI: 91%-94%), respectively. In the second scenario, diagnostic performance estimates were Se = 47% (95% CI: 29%-65%) and Sp = 88% (95% CI: 86%-90%). A significant difference was observed only between Sp estimates (P < 0.01). Practitioners' diagnoses and RRT-PCR results were in very poor agreement, with kappa values of 0.16 and 0.14 for scenarios 1 and 2, respectively. However, the use of a broad case definition, the possible presence of immunity against the virus in replacement birds, and the low prevalence observed during the survey would negatively affect the practitioners' performance.
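
    A short sketch of the household-level two-by-two evaluation; the cell counts are illustrative reconstructions chosen to roughly reproduce the first-scenario figures quoted above (the actual counts are not given in the abstract), and the simple Wald intervals stand in for whatever CI method the authors used.

    ```python
    # Sketch of the household-level 2x2 evaluation: sensitivity, specificity
    # (with simple Wald 95% CIs) and Cohen's kappa, using RRT-PCR as the
    # reference. The counts below are illustrative, not the study's raw data.
    import math

    tp, fn = 10, 15       # practitioner positive / negative among PCR-positive
    fp, tn = 70, 821      # practitioner positive / negative among PCR-negative

    def proportion_ci(k, n, z=1.96):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)

    se = proportion_ci(tp, tp + fn)
    sp = proportion_ci(tn, tn + fp)

    n = tp + fn + fp + tn
    po = (tp + tn) / n                                             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2    # chance agreement
    kappa = (po - pe) / (1 - pe)

    print(f"Se = {se[0]:.2f} (95% CI {se[1]:.2f}-{se[2]:.2f})")
    print(f"Sp = {sp[0]:.2f} (95% CI {sp[1]:.2f}-{sp[2]:.2f})")
    print(f"kappa = {kappa:.2f}")
    ```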

  20. Comparisons of two moments‐based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    USGS Publications Warehouse

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-01-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
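
    For orientation, the sketch below fits a log Pearson type III distribution to a purely systematic record by the ordinary method of moments on the log-transformed flows and reads off X100 and X500; it does not implement EMA's historical/paleoflood weighting, and the flow series is hypothetical.

    ```python
    # Baseline at-site log Pearson type III fit by method of moments on
    # log10 flows (systematic record only; no historical weighting as in EMA
    # or B17H). Flood quantiles follow from the Pearson III frequency factor.
    import numpy as np
    from scipy import stats

    flows = np.array([410., 870., 1230., 560., 2200., 940., 1500., 780.,
                      3100., 640., 1900., 1100., 720., 2600., 980.])  # hypothetical annual peaks

    logq = np.log10(flows)
    m, s = logq.mean(), logq.std(ddof=1)
    g = stats.skew(logq, bias=False)          # station skew of the logs

    def lp3_quantile(T):
        """Flood quantile X_T via the frequency factor of a standardized Pearson III."""
        k = stats.pearson3.ppf(1.0 - 1.0 / T, skew=g)   # mean 0, std 1 for given skew
        return 10.0 ** (m + k * s)

    for T in (100, 500):
        print(f"X{T} ≈ {lp3_quantile(T):,.0f}")
    ```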

  1. Comparisons of two moments-based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    NASA Astrophysics Data System (ADS)

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-09-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed-threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed-threshold exceedance cases. EMA performed comparatively much better in other fixed-threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV-simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.

  2. Human joint motion estimation for electromyography (EMG)-based dynamic motion control.

    PubMed

    Zhang, Qin; Hosoda, Ryo; Venture, Gentiane

    2013-01-01

    This study aims to investigate a joint motion estimation method from Electromyography (EMG) signals during dynamic movement. In most EMG-based humanoid or prosthetics control systems, EMG features were directly or indirectly used to trigger intended motions. However, both physiological and nonphysiological factors can influence EMG characteristics during dynamic movements, resulting in subject-specific, non-stationary and crosstalk problems. Particularly, when motion velocity and/or joint torque are not constrained, joint motion estimation from EMG signals is more challenging. In this paper, we propose a joint motion estimation method based on muscle activation recorded from a pair of agonist and antagonist muscles of the joint. A linear state-space model with multiple inputs and a single output is proposed to map the muscle activity to joint motion, and an adaptive estimation method is proposed to train the model. Estimation performance is evaluated on a single elbow flexion-extension movement performed by two subjects at two load levels; all results indicate the feasibility and suitability of the proposed method for joint motion estimation. The estimation root-mean-square error ranges from 8.3% to 10.6%, which is lower than that reported in several previous studies. Moreover, this method is able to overcome the subject-specificity problem and compensate for non-stationary EMG properties.
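
    The paper's exact state-space formulation is not given in the abstract, so the sketch below uses a recursive least-squares fit of a simple ARX-style model with two EMG inputs as an illustrative stand-in for the adaptive, multi-input single-output estimation idea; the muscle-activation signals and coefficients are synthetic.

    ```python
    # Illustrative adaptive (recursive least squares) fit of an ARX-style
    # linear model mapping two muscle activations (agonist/antagonist) to a
    # joint-angle signal. This is a stand-in for the paper's state-space model,
    # built on synthetic signals, not the authors' formulation or data.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 2000
    u = np.abs(rng.standard_normal((T, 2)))          # synthetic muscle activations
    true_w = np.array([0.9, 8.0, -5.0])              # [angle lag-1, agonist, antagonist]
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = true_w @ np.array([y[t - 1], u[t, 0], u[t, 1]]) + 0.1 * rng.standard_normal()

    # Recursive least squares with forgetting factor lam.
    lam = 0.995
    w = np.zeros(3)
    P = 1e3 * np.eye(3)
    for t in range(1, T):
        phi = np.array([y[t - 1], u[t, 0], u[t, 1]])          # regressor
        k = P @ phi / (lam + phi @ P @ phi)                   # gain
        w += k * (y[t] - phi @ w)                             # parameter update
        P = (P - np.outer(k, phi @ P)) / lam                  # covariance update

    y_hat = np.zeros(T)
    for t in range(1, T):
        y_hat[t] = w @ np.array([y[t - 1], u[t, 0], u[t, 1]])
    rmse = np.sqrt(np.mean((y[1:] - y_hat[1:]) ** 2))
    print("estimated weights:", w.round(2), " one-step RMSE:", round(rmse, 3))
    ```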

  3. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  4. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  5. Are subjective memory problems related to suggestibility, compliance, false memories, and objective memory performance?

    PubMed

    Van Bergen, Saskia; Jelicic, Marko; Merckelbach, Harald

    2009-01-01

    The relationship between subjective memory beliefs and suggestibility, compliance, false memories, and objective memory performance was studied in a community sample of young and middle-aged people (N = 142). We hypothesized that people with subjective memory problems would exhibit higher suggestibility and compliance levels and would be more susceptible to false recollections than those who are optimistic about their memory. In addition, we expected a discrepancy between subjective memory judgments and objective memory performance. We found that subjective memory judgments correlated significantly with compliance, with more negative memory judgments accompanying higher levels of compliance. Contrary to our expectation, subjective memory problems did not correlate with suggestibility or false recollections. Furthermore, participants were accurate in estimating their objective memory performance.

  6. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
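
    A generic discrete-time Kalman filter sketch of the underlying idea, estimating an unmeasured state from a measured one; the two-state system below is hypothetical and does not represent the CMAPSS40k engine model or the optimal tuner selection described in the paper.

    ```python
    # Generic Kalman filter sketch: estimate an unmeasured state (x2, standing
    # in for an unmeasured engine parameter such as thrust or stall margin)
    # from a measured one (x1, e.g. a related spool speed). Matrices are
    # hypothetical; no optimal tuner selection is performed.
    import numpy as np

    rng = np.random.default_rng(1)
    A = np.array([[0.95, 0.10],
                  [0.00, 0.90]])        # hypothetical state transition
    H = np.array([[1.0, 0.0]])          # only the first state is measured
    Q = 1e-4 * np.eye(2)                # process noise covariance
    R = np.array([[1e-2]])              # measurement noise covariance

    x_true = np.array([0.0, 1.0])
    x_hat = np.zeros(2)
    P = np.eye(2)

    for _ in range(200):
        # Simulate the "true" system and a noisy measurement.
        x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
        z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

        # Predict.
        x_hat = A @ x_hat
        P = A @ P @ A.T + Q
        # Update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + (K @ (z - H @ x_hat))
        P = (np.eye(2) - K @ H) @ P

    print("true unmeasured state:", round(x_true[1], 3),
          " estimate:", round(x_hat[1], 3))
    ```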

  7. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2015-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.

  8. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    PubMed

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of the VC and intra-cluster correlation (ICC) estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
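
    A minimal sketch of the cluster-bootstrap idea behind the first analysis: resample clusters with replacement, shrink the per-cluster sample, and watch the standard error of the coverage estimate grow. The survey data below are simulated, not the Mali data.

    ```python
    # Cluster bootstrap of vaccination coverage (VC) under shrinking
    # per-cluster sample sizes, using simulated data (not the Mali survey).
    import numpy as np

    rng = np.random.default_rng(7)
    n_clusters, per_cluster = 10, 15
    # Simulated survey: cluster-level coverage varies around 85% (so ICC > 0).
    cluster_p = rng.beta(17, 3, size=n_clusters)
    data = [rng.binomial(1, p, size=per_cluster) for p in cluster_p]

    def bootstrap_se(data, reduced_size, n_boot=2000):
        """SE of the VC estimate when only `reduced_size` children per cluster
        are kept, via resampling clusters with replacement."""
        estimates = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(data), size=len(data))      # resample clusters
            kept = [rng.choice(data[i], size=reduced_size, replace=False) for i in idx]
            estimates.append(np.mean(np.concatenate(kept)))
        return np.std(estimates)

    for m in (15, 10, 5, 3):
        print(f"10 x {m:>2} design: bootstrap SE of VC ≈ {bootstrap_se(data, m):.3f}")
    ```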

  9. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885

  10. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.

  11. Aerodynamics and Control of Quadrotors

    NASA Astrophysics Data System (ADS)

    Bangura, Moses

    Quadrotors are aerial vehicles with a four motor-rotor assembly for generating lift and controllability. Their light weight, ease of design and simple dynamics have increased their use in aerial robotics research. There are many quadrotors that are commercially available or under development. Commercial off-the-shelf quadrotors usually lack the ability to be reprogrammed and are unsuitable for use as research platforms. The open-source code developed in this thesis differs from other open-source systems by focusing on the key performance roadblocks in implementing high performance experimental quadrotor platforms for research: motor-rotor control for thrust regulation, velocity and attitude estimation, and control for position regulation and trajectory tracking. In all three of these fundamental subsystems, code sub-modules for implementation on commonly available hardware are provided. In addition, the thesis provides guidance on scoping and commissioning open-source hardware components to build a custom quadrotor. A key contribution of the thesis is then a design methodology for the development of experimental quadrotor platforms from open-source or commercial off-the-shelf software and hardware components that have active community support. Quadrotors built following the methodology allow the user access to the operation of the subsystems and, in particular, the user can tune the gains of the observers and controllers in order to push the overall system to its performance limits. This enables the quadrotor framework to be used for a variety of applications such as heavy lifting and high performance aggressive manoeuvres by both the hobby and academic communities. To address the question of thrust control, momentum and blade element theories are used to develop aerodynamic models for rotor blades specific to quadrotors. With the aerodynamic models, a novel thrust estimation and control scheme that improves on existing RPM (revolutions per minute) control of rotors is proposed. The approach taken uses the measured electrical power delivered to the rotors, compensating for electrical losses, to estimate changing aerodynamic conditions around a rotor as well as the aerodynamic thrust force. The resulting control algorithms are implemented in real-time on the embedded electronic speed controller (ESC) hardware. Using the estimates of the aerodynamic conditions around the rotor at this level improves the dynamic response to gusts, as the low-level thrust control is the fastest dynamic level on the vehicle. The aerodynamic estimation scheme enables the vehicle to react almost instantaneously to aerodynamic changes in the environment without affecting the overall dynamic performance of the vehicle. (Abstract shortened by ProQuest.)

  12. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations

    PubMed Central

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M.; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886

  13. Development and validation of chemistry agnostic flow battery cost performance model and application to nonaqueous electrolyte systems: Chemistry agnostic flow battery cost performance model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Alasdair; Thomsen, Edwin; Reed, David

    2016-04-20

    A chemistry agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using data from a 4 kW stack at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 kWh⁻¹ for the storage system is identified.

  14. Implementation Of Fuzzy Approach To Improve Time Estimation [Case Study Of A Thermal Power Plant Is Considered

    NASA Astrophysics Data System (ADS)

    Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.

    2010-10-01

    Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of any system. Here, a case study of a thermal power plant is considered. The existing time estimates represent the time to complete tasks. Applying a fuzzy linear approach shows that, at each confidence level, less time is needed to complete the tasks; a shorter schedule in turn requires less cost. The objective of this paper is to show how a system becomes more efficient when the fuzzy linear approach is applied, and to optimize the time estimates so that all tasks are performed on appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp) and most likely time (tm) are taken as data collected from the thermal power plant. These time estimates are used to calculate the expected time (te), which represents the time to complete a particular task allowing for all eventualities. Using the project evaluation and review technique (PERT) and critical path method (CPM), the critical path duration (CPD) of the project is calculated; this indicates a fifty percent probability that the total tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the probability of completing the whole project by a given date can then be obtained from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), a defuzzified value of the time estimate is calculated. For the fuzzy range, four confidence levels are considered: 0.4, 0.6, 0.8 and 1. From our study, it is seen that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
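
    A small sketch of the PERT arithmetic and a trapezoidal defuzzification of the four time estimates; the task durations are hypothetical, and the centroid formula is one common defuzzification choice rather than necessarily the paper's exact procedure.

    ```python
    # PERT expected time / variance, normal-approximation completion
    # probability, and centroid defuzzification of a trapezoidal fuzzy number
    # built from (to, tm, te, tp). Task durations are hypothetical.
    import math

    tasks = {            # (optimistic, most likely, pessimistic) durations, days
        "A": (4, 6, 10),
        "B": (8, 12, 20),
        "C": (5, 7, 11),
    }

    def pert(to, tm, tp):
        te = (to + 4 * tm + tp) / 6.0          # expected time
        var = ((tp - to) / 6.0) ** 2           # variance
        return te, var

    def trapezoid_centroid(a, b, c, d):
        """Centroid of a trapezoidal fuzzy number with a <= b <= c <= d."""
        return (d**2 + c**2 + c * d - a**2 - b**2 - a * b) / (3 * (d + c - a - b))

    cpd, cp_var = 0.0, 0.0
    for name, (to, tm, tp) in tasks.items():
        te, var = pert(to, tm, tp)
        crisp = trapezoid_centroid(to, min(tm, te), max(tm, te), tp)
        print(f"task {name}: te = {te:.2f} d, defuzzified = {crisp:.2f} d")
        cpd += te                              # assume tasks A-B-C are in series
        cp_var += var

    deadline = 30.0
    z = (deadline - cpd) / math.sqrt(cp_var)
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # normal CDF
    print(f"critical path ≈ {cpd:.1f} d; P(finish within {deadline:.0f} d) ≈ {p:.2f}")
    ```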

  15. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    PubMed Central

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows for the possibility that some participants are not exposed to HIV and therefore have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  16. Update on HCDstruct - A Tool for Hybrid Wing Body Conceptual Design and Structural Optimization

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2015-01-01

    HCDstruct is a Matlab® based software tool to rapidly build a finite element model for structural optimization of hybrid wing body (HWB) aircraft at the conceptual design level. The tool uses outputs from a Flight Optimization System (FLOPS) performance analysis together with a conceptual outer mold line of the vehicle, e.g. created by Vehicle Sketch Pad (VSP), to generate a set of MSC Nastran® bulk data files. These files can readily be used to perform a structural optimization and weight estimation using Nastran’s® Solution 200 multidisciplinary optimization solver. Initially developed at NASA Langley Research Center to perform increased fidelity conceptual level HWB centerbody structural analyses, HCDstruct has grown into a complete HWB structural sizing and weight estimation tool, including a fully flexible aeroelastic loads analysis. Recent upgrades to the tool include the expansion to a full wing tip-to-wing tip model for asymmetric analyses like engine out conditions and dynamic overswings, as well as a fully actuated trailing edge, featuring up to 15 independently actuated control surfaces and twin tails. Several example applications of the HCDstruct tool are presented.

  17. Apparatus for sensor failure detection and correction in a gas turbine engine control system

    NASA Technical Reports Server (NTRS)

    Spang, H. A., III; Wanger, R. P. (Inventor)

    1981-01-01

    A gas turbine engine control system maintains a selected level of engine performance despite the failure or abnormal operation of one or more engine parameter sensors. The control system employs a continuously updated engine model which simulates engine performance and generates signals representing real time estimates of the engine parameter sensor signals. The estimate signals are transmitted to a control computational unit which utilizes them in lieu of the actual engine parameter sensor signals to control the operation of the engine. The estimate signals are also compared with the corresponding actual engine parameter sensor signals and the resulting difference signals are utilized to update the engine model. If a particular difference signal exceeds specific tolerance limits, the difference signal is inhibited from updating the model and a sensor failure indication is provided to the engine operator.

  18. Vertical land motion controls regional sea level rise patterns on the United States east coast since 1900

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Hay, C.; Mitrovica, J. X.; Little, C. M.; Ponte, R. M.; Tingley, M.

    2017-12-01

    Understanding observed spatial variations in centennial relative sea level trends on the United States east coast has important scientific and societal applications. Past studies based on models and proxies variously suggest roles for crustal displacement, ocean dynamics, and melting of the Greenland ice sheet. Here we perform joint Bayesian inference on regional relative sea level, vertical land motion, and absolute sea level fields based on tide gauge records and GPS data. Posterior solutions show that regional vertical land motion explains most (80% median estimate) of the spatial variance in the large-scale relative sea level trend field on the east coast over 1900-2016. The posterior estimate for coastal absolute sea level rise is remarkably spatially uniform compared to previous studies, with a spatial average of 1.4-2.3 mm/yr (95% credible interval). Results corroborate glacial isostatic adjustment models and reveal that meaningful long-period, large-scale vertical velocity signals can be extracted from short GPS records.

  19. Diagnostic value of potassium level in a spot urine sample as an index of 24-hour urinary potassium excretion in unselected patients hospitalized in a hypertension unit

    PubMed Central

    Symonides, Bartosz; Wojciechowska, Ewa; Gryglas, Adam; Gaciong, Zbigniew

    2017-01-01

    Background: Primary hyperaldosteronism may be associated with elevated 24-hour urinary potassium excretion. We evaluated the diagnostic value of spot urine (SU) potassium as an index of 24-hour urinary potassium excretion. Methods: We measured SU and 24-hour urinary collection potassium and creatinine in 382 patients. Correlations between SU and 24-hour collections were assessed for potassium levels and potassium/creatinine ratios. We used the PAHO formula to estimate 24-hour urinary potassium excretion based on SU potassium level. The agreement between estimated and measured 24-hour urinary potassium excretion was evaluated using the Bland-Altman method. To evaluate the diagnostic performance of SU potassium, we calculated areas under the curve (AUC) for the SU potassium/creatinine ratio and for 24-hour urinary potassium excretion estimated using the PAHO formula. Results: The strongest correlation between SU and 24-hour collection was found for the potassium/creatinine ratio (r = 0.69, P<0.001). The PAHO formula underestimated 24-hour urinary potassium excretion by a mean of 8.3±18 mmol/d (95% limits of agreement -28 to +44 mmol/d). Diagnostic performance of the SU potassium/creatinine ratio was borderline good only if 24-hour urinary potassium excretion was markedly elevated (AUC 0.802 for 120 mmol K+/24 h) but poor with lower values (AUC 0.696 for 100 mmol K+/24 h, 0.636 for 80 mmol K+/24 h, 0.675 for 40 mmol K+/24 h). Diagnostic performance of 24-hour urinary potassium excretion estimated by the PAHO formula was excellent with values above 120 mmol/d and good with lower values (AUC 0.941 for 120 mmol K+/24 h, 0.819 for 100 mmol K+/24 h, 0.823 for 80 mmol K+/24 h, 0.836 for 40 mmol K+/24 h). Conclusions: The spot urine potassium/creatinine ratio might be a marker of increased 24-hour urinary potassium excretion and a potentially useful screening test when reliable 24-hour urine collection is not available. The PAHO formula allowed estimation of the 24-hour urinary potassium excretion based on SU measurements with reasonable clinical accuracy. PMID:28662194
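
    A short sketch of the Bland-Altman agreement computation used above; the paired values are synthetic and merely tuned to mimic the reported bias and limits of agreement.

    ```python
    # Bland-Altman agreement between measured 24-hour potassium excretion and
    # the value estimated from a spot urine sample (synthetic paired data).
    import numpy as np

    rng = np.random.default_rng(3)
    measured = rng.normal(70, 20, size=300).clip(15, None)           # mmol/day
    estimated = measured - 8.3 + rng.normal(0, 18, size=300)         # mimic reported bias/spread

    diff = measured - estimated            # positive when the formula underestimates
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

    print(f"mean bias: {bias:.1f} mmol/day")
    print(f"95% limits of agreement: {loa_low:.1f} to {loa_high:.1f} mmol/day")
    ```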

  20. A robust measure of HIV-1 population turnover within chronically infected individuals.

    PubMed

    Achaz, G; Palmer, S; Kearney, M; Maldarelli, F; Mellors, J W; Coffin, J M; Wakeley, J

    2004-10-01

    A simple nonparametric test for population structure was applied to temporally spaced samples of HIV-1 sequences from the gag-pol region within two chronically infected individuals. The results show that temporal structure can be detected for samples separated by about 22 months or more. The performance of the method, which was originally proposed to detect geographic structure, was tested for temporally spaced samples using neutral coalescent simulations. Simulations showed that the method is robust to variation in sample sizes and mutation rates and to the presence/absence of recombination, and that the power to detect temporal structure is high. By comparing levels of temporal structure in simulations to the levels observed in real data, we estimate the effective intra-individual population size of HIV-1 to be between 10³ and 10⁴ viruses, which is in agreement with some previous estimates. Using this estimate and a simple measure of sequence diversity, we estimate an effective neutral mutation rate of about 5 × 10⁻⁶ per site per generation in the gag-pol region. The definition and interpretation of estimates of such "effective" population parameters are discussed.
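
    As a back-of-envelope check of the final step, the standard neutral-coalescent relation for a haploid population, E[π] = θ = 2Neμ, gives μ ≈ π/(2Ne); the diversity value below is illustrative, not taken from the paper.

    ```python
    # Back-of-envelope check (illustrative diversity value, not the paper's):
    # under the neutral coalescent for a haploid population, E[pi] = 2 * Ne * mu,
    # so the neutral mutation rate can be recovered as mu = pi / (2 * Ne).
    pi = 0.01                      # mean pairwise diversity per site (illustrative)
    for ne in (1e3, 1e4):          # effective population size bracket from the study
        mu = pi / (2 * ne)
        print(f"Ne = {ne:.0e}: mu ≈ {mu:.1e} per site per generation")
    ```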

  1. A method for estimating cost savings for population health management programs.

    PubMed

    Murphy, Shannon M E; McGready, John; Griswold, Michael E; Sylvia, Martha L

    2013-04-01

    To develop a quasi-experimental method for estimating Population Health Management (PHM) program savings that mitigates common sources of confounding, supports regular updates for continued program monitoring, and estimates model precision. Administrative, program, and claims records from January 2005 through June 2009. Data are aggregated by member and month. Study participants include chronically ill adult commercial health plan members. The intervention group consists of members currently enrolled in PHM, stratified by intensity level. Comparison groups include (1) members never enrolled, and (2) PHM participants not currently enrolled. Mixed model smoothing is employed to regress monthly medical costs on time (in months), a history of PHM enrollment, and monthly program enrollment by intensity level. Comparison group trends are used to estimate expected costs for intervention members. Savings are realized when PHM participants' costs are lower than expected. This method mitigates many of the limitations faced using traditional pre-post models for estimating PHM savings in an observational setting, supports replication for ongoing monitoring, and performs basic statistical inference. This method provides payers with a confident basis for making investment decisions. © Health Research and Educational Trust.
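
    A minimal sketch of the savings logic under a strong simplification: a plain OLS trend fitted to the comparison group stands in for the paper's mixed-model smoothing, and the monthly cost series are simulated.

    ```python
    # Savings = expected costs (comparison-group trend projected onto the
    # intervention group) minus observed costs after enrolment. OLS trend and
    # simulated costs stand in for the mixed-model smoothing and claims data.
    import numpy as np

    rng = np.random.default_rng(11)
    months = np.arange(48)

    # Comparison group: baseline cost drifting upward with noise ($/member/month).
    comparison = 600 + 4.0 * months + rng.normal(0, 40, size=months.size)
    # Intervention group: same trend, but costs bend down after enrolment at month 24.
    intervention = 600 + 4.0 * months - 30 * (months >= 24) + rng.normal(0, 40, size=months.size)

    # Expected costs for intervention members = comparison-group trend.
    slope, intercept = np.polyfit(months, comparison, deg=1)
    expected = intercept + slope * months

    post = months >= 24
    savings_pmpm = np.mean(expected[post] - intervention[post])
    print(f"estimated savings ≈ ${savings_pmpm:.0f} per member per month after enrolment")
    ```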

  2. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    PubMed

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are usually calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, as well as the issue of JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, as well as to compare it with JGMSS and the Fisher Z test, these methods are applied to both simulated and in vivo data. As a method aiming for group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.

  3. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2017-12-01

    Accurate characterization of uncertainties in space-borne precipitation estimates is critical for many applications including water budget studies or prediction of natural hazards at the global scale. The GPM precipitation Level II (active and passive) and Level III (IMERG) estimates are compared to the high quality and high resolution NEXRAD-based precipitation estimates derived from the NOAA/NSSL's Multi-Radar, Multi-Sensor (MRMS) platform. A surface reference is derived from the MRMS suite of products to be accurate with known uncertainty bounds and measured at a resolution below the pixel sizes of any GPM estimate, providing great flexibility in matching to grid scales or footprints. It provides an independent and consistent reference research framework for directly evaluating GPM precipitation products across a large number of meteorological regimes as a function of resolution, accuracy and sample size. The consistency of the ground and space-based sensors in terms of precipitation detection, typology and quantification is systematically evaluated. Satellite precipitation retrievals are further investigated in terms of precipitation distributions, systematic biases and random errors, influence of precipitation sub-pixel variability and comparison between satellite products. Prognostic analysis directly provides feedback to algorithm developers on how to improve the satellite estimates. Specific factors for passive (e.g. surface conditions for GMI) and active (e.g. non-uniform beam filling for DPR) sensors are investigated. This cross-product characterization acts as a bridge to intercalibrate microwave measurements from the GPM constellation satellites and propagate to the combined and global precipitation estimates. Precipitation features previously used to analyze Level II satellite estimates under various precipitation processes are now introduced for Level III to test several assumptions in the IMERG algorithm. Specifically, the contribution of Level II is explicitly characterized and a rigorous characterization is performed to migrate across scales, fully accounting for the propagation of errors from Level II to Level III. Perspectives are presented to advance the use of uncertainty as an integral part of QPE for ground-based and space-borne sensors.

  4. Validating a High Performance Liquid Chromatography-Ion Chromatography (HPLC-IC) Method with Conductivity Detection After Chemical Suppression for Water Fluoride Estimation.

    PubMed

    Bondu, Joseph Dian; Selvakumar, R; Fleming, Jude Joseph

    2018-01-01

    A variety of methods, including the Ion Selective Electrode (ISE), have been used for estimation of fluoride levels in drinking water, but as these methods suffer from many drawbacks, the newer IC method has replaced many of them. The study aimed at (1) validating IC for estimation of fluoride levels in drinking water and (2) assessing drinking water fluoride levels of villages in and around Vellore district using IC. Forty-nine paired drinking water samples were measured using the ISE and IC methods (Metrohm). Water samples from 165 randomly selected villages in and around Vellore district were collected for fluoride estimation over 1 year. Standardization of the IC method showed good within-run precision, linearity and coefficient of variance, with correlation coefficient R² = 0.998. The limit of detection was 0.027 ppm and the limit of quantification was 0.083 ppm. Among the 165 villages, 46.1% recorded water fluoride levels >1.00 ppm, of which 19.4% had levels ranging from 1 to 1.5 ppm, 10.9% had levels of 1.5-2 ppm and about 12.7% had levels of 2.0-3.0 ppm. Three percent of villages had more than 3.0 ppm fluoride in the water tested. Most (44.42%) of these villages belonged to Jolarpet taluk, with moderate to high (0.86-3.56 ppm) water fluoride levels. The ion chromatography method has been validated and is therefore a reliable method for assessing fluoride levels in drinking water. The residents of Jolarpet taluk (Vellore district) are found to be at high risk of developing dental and skeletal fluorosis.
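
    A short sketch of the calibration-based validation figures quoted above (linearity, LOD and LOQ from the residual standard error and slope); the calibration points are hypothetical, and the 3.3σ/10σ convention is the usual ICH-style rule rather than necessarily the authors' exact procedure.

    ```python
    # Calibration linearity (R^2) plus LOD/LOQ from the residual standard
    # error and slope of a least-squares fit (hypothetical calibration data).
    import numpy as np

    conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])          # ppm fluoride
    area = np.array([0.52, 1.30, 2.61, 5.15, 10.4, 25.9])     # detector response (hypothetical)

    slope, intercept = np.polyfit(conc, area, deg=1)
    pred = intercept + slope * conc
    ss_res = np.sum((area - pred) ** 2)
    ss_tot = np.sum((area - area.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    s_resid = np.sqrt(ss_res / (len(conc) - 2))                # residual std error

    lod = 3.3 * s_resid / slope
    loq = 10.0 * s_resid / slope
    print(f"slope = {slope:.3f}, R^2 = {r2:.4f}")
    print(f"LOD ≈ {lod:.3f} ppm, LOQ ≈ {loq:.3f} ppm")
    ```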

  5. Multivariate spatial models of excess crash frequency at area level: case of Costa Rica.

    PubMed

    Aguero-Valverde, Jonathan

    2013-10-01

    Recently, areal models of crash frequency have been used in the analysis of various area-wide factors affecting road crashes. On the other hand, disease mapping methods are commonly used in epidemiology to assess the relative risk of the population at different spatial units. A natural next step is to combine these two approaches to estimate the excess crash frequency at area level as a measure of absolute crash risk. Furthermore, multivariate spatial models of crash severity are explored in order to account for both frequency and severity of crashes and control for the spatial correlation frequently found in crash data. This paper aims to extend the concept of safety performance functions to be used in areal models of crash frequency. A multivariate spatial model is used for that purpose and compared to its univariate counterpart. A full Bayes hierarchical approach is used to estimate the models of crash frequency at canton level for Costa Rica. An intrinsic multivariate conditional autoregressive model is used for modeling spatial random effects. The results show that the multivariate spatial model performs better than its univariate counterpart in terms of the penalized goodness-of-fit measure Deviance Information Criterion. Additionally, the effects of the spatial smoothing due to the multivariate spatial random effects are evident in the estimation of excess equivalent property damage only crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Application of expert systems in project management decision aiding

    NASA Technical Reports Server (NTRS)

    Harris, Regina; Shaffer, Steven; Stokes, James; Goldstein, David

    1987-01-01

    The feasibility of developing an expert systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.

  7. Estimation of pyrethroid pesticide intake using regression ...

    EPA Pesticide Factsheets

    Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation of pesticide intakes for a defined demographic community, and (2) comparison of dietary pesticide intakes between the composite and individual samples. Extant databases were useful for assigning individual samples to composites, but they could not provide the breadth of information needed to ensure measurable levels in every composite. Composite sample measurements were found to be good predictors of pyrethroid pesticide levels in their individual sample constituents where sufficient measurements are available above the method detection limit. Statistical inference shows little evidence of differences between individual and composite measurements and suggests that regression modeling of food groups based on composite dietary samples may provide an effective tool for estimating dietary pesticide intake for a defined population. The research presented in the journal article will improve the community's ability to determine exposures through the dietary route with a less burdensome and less costly method.

  8. Design of tyre force excitation for tyre-road friction estimation

    NASA Astrophysics Data System (ADS)

    Albinsson, Anton; Bruzelius, Fredrik; Jacobson, Bengt; Fredriksson, Jonas

    2017-02-01

    Knowledge of the current tyre-road friction coefficient is essential for future autonomous vehicles. The environmental conditions, and the tyre-road friction in particular, determine both the braking distance and the maximum cornering velocity and thus set the boundaries for the vehicle. Tyre-road friction is difficult to estimate during normal driving due to low levels of tyre force excitation. This problem can be solved by using active tyre force excitation: a torque is added to one or several wheels for the purpose of estimating the tyre-road friction coefficient. Active tyre force excitation provides the opportunity to design the tyre force excitation freely. This study investigates how the tyre force should be applied to minimise the error of the tyre-road friction estimate. The performance of different excitation strategies was found to be dependent on both tyre model choice and noise level. Furthermore, the advantage of using tyre models with more parameters decreased when noise was added to the force and slip ratio.

  9. Congenital heart surgery: surgical performance according to the Aristotle complexity score.

    PubMed

    Arenz, Claudia; Asfour, Boulos; Hraska, Viktor; Photiadis, Joachim; Haun, Christoph; Schindler, Ehrenfried; Sinzobahamvya, Nicodème

    2011-04-01

    Aristotle score methodology defines surgical performance as 'complexity score times hospital survival'. We analysed how this performance evolved over time and in correlation with case volume. Aristotle basic and comprehensive complexity scores and corresponding basic and comprehensive surgical performances were determined for primary (main) procedures carried out from 2006 to 2009. Surgical case volume performance described as unit performance was estimated as 'surgical performance times the number of primary procedures'. Basic and comprehensive complexity scores for the whole cohort of procedures (n=1828) were 7.74±2.66 and 9.89±3.91, respectively. With an early survival of 97.5% (1783/1828), mean basic and comprehensive surgical performances reached 7.54±2.54 and 9.64±3.81, respectively. Basic surgical performance varied little over the years: 7.46±2.48 in 2006, 7.43±2.58 in 2007, 7.50±2.76 in 2008 and 7.79±2.54 in 2009. Comprehensive surgical performance decreased from 9.56±3.91 (2006) to 9.22±3.94 (2007), and then to 9.13±3.77 (2008), thereafter increasing up to 10.62±3.67 (2009). No significant change of performance was observed for low comprehensive complexity levels 1-3. Variation concerned level 4 (p=0.048) which involved the majority of procedures (746, or 41% of cases) and level 6 (p<0.0001) which included a few cases (20, or 1%), whereas for level 5, statistical significance was almost attained: p=0.079. With a mean annual number of procedures of 457, mean basic and comprehensive unit performance was estimated at 3447±362 and 4405±577, respectively. Basic unit performance increased year to year from 3036 (2006, 100%) to 3254 (2007, 107.2%), then 3720 (2008, 122.5%), up to 3793 (2009, 124.9%). Comprehensive unit performance also increased: from 3891 (2006, 100%) to 4038 (2007, 103.8%), 4528 (2008, 116.4%) and 5172 (2009, 132.9%). Aristotle scoring of surgical performance allows quality assessment of surgical management of congenital heart disease over time. The newly defined unit performance appears to well reflect the trend of activity and efficiency of a congenital heart surgery department. Copyright © 2010 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
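
    The performance definitions quoted above reduce to simple arithmetic; a minimal sketch using the cohort-level figures reported in the abstract:

      # surgical performance = complexity score * hospital survival
      # unit performance     = surgical performance * number of primary procedures
      comprehensive_score = 9.89       # mean comprehensive complexity score
      survival = 1783 / 1828           # early survival (97.5 %)
      procedures_per_year = 457        # mean annual number of primary procedures

      surgical_performance = comprehensive_score * survival
      unit_performance = surgical_performance * procedures_per_year
      print(f"comprehensive surgical performance ~ {surgical_performance:.2f}")  # ~9.64
      print(f"comprehensive unit performance     ~ {unit_performance:.0f}")      # ~4400, close to the reported 4405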

  10. Determining Level of Service for Multilane Median Opening Zone

    NASA Astrophysics Data System (ADS)

    Ali, Paydar; Johnnie, Ben-Edigbe

    2017-08-01

    The road system is a capital-intensive investment, requiring a thorough schematic framework and funding. Roads are built to provide an intrinsic quality of service that satisfies road users. Roads that provide good services are expected to deliver operational performance that is consistent with their design specifications. Level of service and cumulative percentile speed distribution methods have been used in previous studies to estimate the quality of multilane highway service. Whilst the level of service approach relies on a speed/flow curve, the cumulative percentile speed distribution is based solely on speed. These estimation methods were used in studies carried out in Johor, Malaysia. The aim of the studies was to ascertain the extent of speed reduction caused by midblock U-turn facilities as well as to verify which estimation method is more reliable. At selected sites, road segments for both directional flows were divided into free-flow and midblock zones. Traffic volume, speed and vehicle type data for each zone were collected continuously for six weeks. Both estimation methods confirmed that speed reduction would be caused by midblock U-turn facilities. However, the level of service method suggested that the quality of service would improve from level F to E or D at the midblock zone in spite of the speed reduction. Level of service was responding to the traffic volume reduction at the midblock U-turn facility, not to the travel speed reduction. The studies concluded that since level of service is more responsive to traffic volume reduction than to travel speed reduction, it cannot be solely relied upon when assessing the quality of multilane highway service.

  11. Application of a transmission model to estimate performance objectives for Salmonella in the broiler supply chain.

    PubMed

    van der Fels-Klerx, H J; Tromp, S; Rijgersberg, H; van Asselt, E D

    2008-11-30

    The aim of the present study was to demonstrate how Performance Objectives (POs) for Salmonella at various points in the broiler supply chain can be estimated, starting from pre-set levels of the PO in finished products. The estimations were performed using an analytical transmission model, based on prevalence data collected throughout the chain in The Netherlands. In the baseline (current) situation, the end PO was set at 2.5% of the finished products (at end of processing) being contaminated with Salmonella. Scenario analyses were performed by reducing this baseline end PO to 1.5% and 0.5%. The results showed the end PO could be reduced by spreading the POs over the various stages of the broiler supply chain. Sensitivity analyses were performed by changing the values of the model parameters. Results indicated that, in general, decreasing Salmonella contamination between points in the chain is more effective in reducing the baseline PO than increasing the reduction of the pathogen, implying contamination should be prevented rather than treated. Applying both approaches at the same time proved to be most effective in reducing the end PO, especially at the abattoir and during processing. The modelling approach of this study proved useful for estimating the implications that setting a PO at the end of the chain has for preceding stages, as well as for evaluating the effectiveness of potential interventions in reducing the end PO. The model estimations may support policy-makers in their decision-making process with regard to microbiological food safety.
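
    A minimal sketch, not the authors' transmission model: if each stage of the chain multiplies Salmonella prevalence by a stage factor, the PO implied at earlier stages follows by dividing the pre-set end PO by the product of the downstream factors. Stage names and factor values below are illustrative assumptions.

      import numpy as np

      stages = ["farm", "transport", "abattoir", "processing"]
      factors = np.array([1.4, 1.1, 0.5, 0.6])     # hypothetical per-stage multipliers
                                                   # (>1: contamination dominates, <1: reduction dominates)

      def upstream_pos(end_po, factors):
          """PO required at the start of each stage so that the end PO is met."""
          cumulative = np.cumprod(factors[::-1])[::-1]   # product of this and all later stages
          return end_po / cumulative

      for end_po in (0.025, 0.015, 0.005):               # 2.5 %, 1.5 % and 0.5 % scenarios
          pos = upstream_pos(end_po, factors)
          line = ", ".join(f"{s}={p:.3f}" for s, p in zip(stages, pos))
          print(f"end PO {end_po:.1%}: {line}")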

  12. Estimation of forest biomass using remote sensing

    NASA Astrophysics Data System (ADS)

    Sarker, Md. Latifur Rahman

    Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r2) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution is that the fusion of SAR and optical images produced accuracies (r2) of 0.706 and 0.77 from the simple fusion and from the texture processing of the fused image, respectively. Although these performances were not as attractive as those obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation

  13. Source inventory for Department of Energy solid low-level radioactive waste disposal facilities: What it means and how to get one of your own

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.A.

    1991-12-31

    In conducting a performance assessment for a low-level waste (LLW) disposal facility, one of the important considerations for determining the source term, which is defined as the amount of radioactivity being released from the facility, is the quantity of radioactive material present. This quantity, which will be referred to as the source inventory, is generally estimated through a review of historical records and waste tracking systems at the LLW facility. In theory, estimating the total source inventory for Department of Energy (DOE) LLW disposal facilities should be possible by reviewing the national data base maintained for LLW operations, the Solid Waste Information Management System (SWIMS), or through the annual report that summarizes the SWIMS data, the Integrated Data Base (IDB) report. However, in practice, there are some difficulties in making this estimate. This is not unexpected, since the SWIMS and the IDB were not developed with the goal of developing a performance assessment source term in mind. The practical shortcomings of using the existing data to develop a source term for DOE facilities will be discussed in this paper.

  14. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammals, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069

  15. Oxygen dynamics in photosynthetic membranes.

    NASA Astrophysics Data System (ADS)

    Savikhin, Sergei; Kihara, Shigeharu

    2008-03-01

    Production of oxygen by oxygenic photosynthetic organisms is expected to raise the oxygen concentration within their photosynthetic membranes above normal aerobic values. These raised levels of oxygen may affect the function of many proteins within photosynthetic cells. However, experiments on proteins in vitro are usually performed in aerobic (or anaerobic) conditions since the oxygen content of a membrane is not known. Using the theory of diffusion and measured oxygen production rates, we estimated the excess levels of oxygen in functioning photosynthetic cells. We show that for an individual photosynthetic cell suspended in water, the oxygen level is essentially the same as that for a non-photosynthetic cell. These data suggest that oxygen protection mechanisms may have evolved after the development of oxygenic photosynthesis in primitive bacteria and were driven by the overall rise of oxygen concentration in the atmosphere. Substantially higher levels of oxygen are estimated to occur in closely packed colonies of photosynthetic bacteria and in green leaves.
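
    A rough sketch of the diffusion argument, assuming a spherical cell and steady-state diffusion: the excess concentration at the cell surface is ΔC = Q/(4πDr). The production rate and cell size below are illustrative assumptions, not values from the paper.

      # Order-of-magnitude estimate of the excess O2 around a single suspended cell.
      import math

      D = 2.0e-9            # O2 diffusion coefficient in water [m^2/s]
      r = 1.0e-6            # cell radius [m] (~1 um cyanobacterium, assumed)
      q_per_cell = 1.0e-17  # O2 production [mol/s per cell], assumed

      dC = q_per_cell / (4.0 * math.pi * D * r)   # excess concentration [mol/m^3]
      ambient = 0.25                              # air-saturated water, roughly 0.25 mol/m^3
      print(f"excess O2 at cell surface ~ {dC:.2e} mol/m^3 "
            f"({dC / ambient:.2%} of ambient)")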

  16. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammals, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  17. Integrated mobile robot control

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Thorpe, Charles

    1991-01-01

    This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.

  18. Maternal Docosahexaenoic Acid Intake Levels during Pregnancy and Infant Performance on a Novel Object Search Task at 22 Months

    ERIC Educational Resources Information Center

    Rees, Alison; Sirois, Sylvain; Wearden, Alison

    2014-01-01

    This study investigated maternal prenatal docosahexaenoic acid (DHA) intake and infant cognitive development at 22 months. Estimates for second- and third-trimester maternal DHA intake levels were obtained using a comprehensive Food Frequency Questionnaire. Infants (n = 67) were assessed at 22 months on a novel object search task. Mothers'…

  19. The Influence of Fundamental Frequency and Sound Pressure Level Range on Breathing Patterns in Female Classical Singing

    ERIC Educational Resources Information Center

    Collyer, Sally; Thorpe, C. William; Callaghan, Jean; Davis, Pamela J.

    2008-01-01

    Purpose: This study investigated the influence of fundamental frequency (F0) and sound pressure level (SPL) range on respiratory behavior in classical singing. Method: Five trained female singers performed an 8-s messa di voce (a crescendo and decrescendo on one F0) across their musical F0 range. Lung volume (LV) change was estimated, and…

  20. Integrated mobile robot control

    NASA Astrophysics Data System (ADS)

    Amidi, Omead; Thorpe, Chuck E.

    1991-03-01

    This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.

  1. A Comprehensive review of group level model performance in the presence of heteroscedasticity: Can a single model control Type I errors in the presence of outliers?

    PubMed Central

    Mumford, Jeanette A.

    2017-01-01

    Even after thorough preprocessing and a careful time series analysis of functional magnetic resonance imaging (fMRI) data, artifact and other issues can lead to violations of the assumption that the variance is constant across subjects in the group level model. This is especially concerning when modeling a continuous covariate at the group level, as the slope is easily biased by outliers. Various models have been proposed to deal with outliers including models that use the first level variance or that use the group level residual magnitude to differentially weight subjects. The most typically used robust regression, implementing a robust estimator of the regression slope, has been previously studied in the context of fMRI studies and was found to perform well in some scenarios, but a loss of Type I error control can occur for some outlier settings. A second type of robust regression using a heteroscedastic autocorrelation consistent (HAC) estimator, which produces robust slope and variance estimates has been shown to perform well, with better Type I error control, but with large sample sizes (500–1000 subjects). The Type I error control with smaller sample sizes has not been studied in this model and has not been compared to other modeling approaches that handle outliers such as FSL’s Flame 1 and FSL’s outlier de-weighting. Focusing on group level inference with a continuous covariate over a range of sample sizes and degree of heteroscedasticity, which can be driven either by the within- or between-subject variability, both styles of robust regression are compared to ordinary least squares (OLS), FSL’s Flame 1, Flame 1 with outlier de-weighting algorithm and Kendall’s Tau. Additionally, subject omission using the Cook’s Distance measure with OLS and nonparametric inference with the OLS statistic are studied. Pros and cons of these models as well as general strategies for detecting outliers in data and taking precaution to avoid inflated Type I error rates are discussed. PMID:28030782
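
    As a small illustration of the two styles of robust regression discussed above (on toy data, not with the fMRI packages named in the abstract), one can contrast an M-estimator of the slope with ordinary least squares plus a heteroscedasticity-consistent sandwich variance, for example using statsmodels; the data and outlier pattern below are assumptions.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 40
      covariate = rng.normal(size=n)                         # group-level covariate (e.g. age)
      y = 0.5 * covariate + rng.normal(scale=1.0, size=n)    # subject-level effect estimates
      y[:3] += 6.0                                           # a few outlying subjects

      X = sm.add_constant(covariate)

      ols = sm.OLS(y, X).fit()                               # ordinary least squares
      ols_hc3 = sm.OLS(y, X).fit(cov_type="HC3")             # robust (sandwich) variance
      huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit() # robust slope estimate

      for name, res in [("OLS", ols), ("OLS+HC3", ols_hc3), ("Huber RLM", huber)]:
          print(f"{name:9s} slope={res.params[1]:+.3f}  se={res.bse[1]:.3f}")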

  2. Estimation of potential scour at bridges on local government roads in South Dakota, 2009-12

    USGS Publications Warehouse

    Thompson, Ryan F.; Wattier, Chelsea M.; Liggett, Richard R.; Truax, Ryan A.

    2014-01-01

    In 2009, the U.S. Geological Survey and South Dakota Department of Transportation (SDDOT) began a study to estimate potential scour at selected bridges on local government (county, township, and municipal) roads in South Dakota. A rapid scour-estimation method (level-1.5) and a more detailed method (level-2) were used to develop estimates of contraction, abutment, and pier scour. Data from 41 level-2 analyses completed for this study were combined with data from level-2 analyses completed in previous studies to develop new South Dakota-specific regression equations: four regional equations for main-channel velocity at the bridge contraction to account for the widely varying stream conditions within South Dakota, and one equation for head change. Velocity data from streamgages also were used in the regression for average velocity through the bridge contraction. Using these new regression equations, scour analyses were completed using the level-1.5 method on 361 bridges on local government roads. Typically, level-1.5 analyses are completed at flows estimated to have annual exceedance probabilities of 1 percent (100-year flood) and 0.2 percent (500-year flood); however, at some sites the bridge would not pass these flows. A level-1.5 analysis was then completed at the flow expected to produce the maximum scour. Data presented for level-1.5 scour analyses at the 361 bridges include contraction, abutment, and pier scour. Estimates of potential contraction scour ranged from 0 to 32.5 feet for the various flows evaluated. Estimated potential abutment scour ranged from 0 to 40.9 feet for left abutments, and from 0 to 37.7 feet for right abutments. Pier scour values ranged from 2.7 to 31.6 feet. The scour depth estimates provided in this report can be used by the SDDOT to compare with foundation depths at each bridge to determine if abutments or piers are at risk of being undermined by scour at the flows evaluated. Replicate analyses were completed at 24 of the 361 bridges to provide quality-assurance/quality-control measures for the level-1.5 scour estimates. An attempt was made to use the same flows among replicate analyses. Scour estimates do not necessarily have to be in numerical agreement to give the same results. For example, if contraction scour replicate analyses are 18.8 and 30.8 feet, both scour depths can indicate susceptibility to scour for which countermeasures may be needed, even though one number is much greater than the other number. Contraction scour has perhaps the greatest potential for being estimated differently in replicate visits. For contraction scour estimates at the various flows analyzed, differences between results ranged from -7.8 to 5.5 feet, with a median difference of 0.4 foot and an average difference of 0.2 foot. Abutment scour appeared to be nearly as reproducible as contraction scour. For abutment scour estimates at the varying flows analyzed, differences between results ranged from -17.4 to 11 feet, with a median difference of 1.4 feet and an average difference of 1.7 feet. Estimates of pier scour tended to be the most consistently reproduced in replicate visits, with differences between results ranging from -0.3 to 0.5 foot, with a median difference of 0.0 foot and an average difference of 0.0 foot. The U.S. Army Corps of Engineers Hydraulics Engineering Center River Analysis Systems (HEC-RAS) software package was used to model stream hydraulics at the 41 sites with level-2 analyses. 
Level-1.5 analyses also were completed at these sites, and the performance of the level-1.5 method was assessed by comparing results to those from the more rigorous level-2 method. The envelope curve approach used in the level-1.5 method is designed to overestimate scour relative to the estimate from the level-2 scour analysis. In cases where the level-1.5 method estimated less scour than the level-2 method, the amount of underestimation generally was less than 3 feet. The level-1.5 method generally overestimated contraction, abutment, and pier scour relative to the level-2 method, as intended. Although the level-1.5 method is designed to overestimate scour relative to more involved analysis methods, many assumptions, uncertainties, and estimations are involved. If the envelope curves are adjusted such that the level-1.5 method never underestimates scour relative to the level-2 method, an accompanying result may be excessive overestimation.

  3. Inverse sampling regression for pooled data.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Eskridge, Kent; Crossa, José

    2017-06-01

    Because pools are tested instead of individuals in group testing, this technique is helpful for estimating prevalence in a population or for classifying a large number of individuals into two groups at a low cost. For this reason, group testing is a well-known means of saving costs and producing precise estimates. In this paper, we developed a mixed-effect group testing regression that is useful when the data-collecting process is performed using inverse sampling. This model allows including covariate information at the individual level to incorporate heterogeneity among individuals and identify which covariates are associated with positive individuals. We present an approach to fit this model using maximum likelihood and we performed a simulation study to evaluate the quality of the estimates. Based on the simulation study, we found that the proposed regression method for inverse sampling with group testing produces parameter estimates with low bias when the pre-specified number of positive pools (r) to stop the sampling process is at least 10 and the number of clusters in the sample is also at least 10. We performed an application with real data and we provide an NLMIXED code that researchers can use to implement this method.
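
    A far simpler sketch than the paper's mixed-effects model, just to show the inverse-sampling mechanics: pools of size s are tested until r positive pools are observed, and prevalence is recovered from the pool-level maximum likelihood estimate. Pool size, prevalence and stopping rule below are illustrative assumptions.

      # For pools of size s, P(pool positive) = 1 - (1 - p)**s; under inverse sampling
      # the MLE of that pool-level probability is r / (total pools tested).
      import numpy as np

      rng = np.random.default_rng(2)
      p_true, s, r = 0.05, 10, 10          # prevalence, pool size, positives to stop at

      def run_once():
          positives = pools = 0
          while positives < r:
              # pool is positive if any of its s members is infected
              pool_positive = rng.random(s).min() < p_true
              positives += pool_positive
              pools += 1
          theta_hat = r / pools                          # MLE of P(pool positive)
          return 1.0 - (1.0 - theta_hat) ** (1.0 / s)    # back-transform to prevalence

      estimates = [run_once() for _ in range(500)]
      print(f"mean estimate {np.mean(estimates):.3f}  (true {p_true})")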

  4. Quality of education and memory test performance in older men: the New York University Paragraph Recall Test normative data.

    PubMed

    Mathews, Melissa; Abner, Erin; Caban-Holt, Allison; Dennis, Brandon C; Kryscio, Richard; Schmitt, Frederick

    2013-09-01

    Memory evaluation is a key component in the accurate diagnosis of cognitive disorders. One memory procedure that has shown promise in discriminating disease-related cognitive decline from normal cognitive aging is the New York University Paragraph Recall Test; however, the effects of education have been unexamined as they pertain to one's literacy level. The current study provides normative data stratified by estimated quality of education as indexed by irregular word reading skill. Conventional norms were derived from a sample (N = 385) of cognitively intact elderly men who were initially recruited for participation in the PREADViSE clinical trial. A series of multiple linear regression models were constructed to assess the influence of demographic variables on mean NYU Paragraph Immediate and Delayed Recall scores. Test version, assessment site, and estimated quality of education were significant predictors of performance on the NYU Paragraph Recall Test. Findings indicate that estimated quality of education is a better predictor of memory performance than ethnicity and years of total education. Normative data stratified according to estimated quality of education are presented. The current study provides evidence and support for normative data stratified by quality of education as opposed to years of education.

  5. Performance characteristics and estimation of measurement uncertainty of three plating procedures for Campylobacter enumeration in chicken meat.

    PubMed

    Habib, I; Sampers, I; Uyttendaele, M; Berkvens, D; De Zutter, L

    2008-02-01

    In this work, we present an intra-laboratory study in order to estimate repeatability (r), reproducibility (R), and measurement uncertainty (U) associated with three media for Campylobacter enumeration, namely, modified charcoal cefoperazone deoxycholate agar (mCCDA); Karmali agar; and CampyFood ID agar (CFA), a medium by Biomérieux SA. The study was performed at three levels: (1) pure bacterial cultures, using three Campylobacter strains; (2) artificially contaminated samples from three chicken meat matrixes (total n=30), whereby samples were spiked using two contamination levels: ca. 10^3 cfu Campylobacter/g and ca. 10^4 cfu Campylobacter/g; and (3) pilot testing in naturally contaminated chicken meat samples (n=20). Results from the pure culture experiment revealed that enumeration of Campylobacter colonies on Karmali and CFA media was more convenient in comparison with mCCDA using spread and spiral plating techniques. Based on testing of the artificially contaminated samples, values of repeatability (r) were comparable between the three media, and estimated as 0.15 log10 cfu/g for mCCDA, 0.14 log10 cfu/g for Karmali, and 0.18 log10 cfu/g for CFA. Reproducibility performance of the three plating media was likewise comparable. General R values that can be used when testing chicken meat samples are 0.28 log10, 0.32 log10, and 0.25 log10 for plating on mCCDA, Karmali agar, and CFA, respectively. Measurement uncertainty associated with mCCDA, Karmali agar, and CFA using spread plating, for the combination of all meat matrixes, was +/-0.24 log10 cfu/g, +/-0.28 log10 cfu/g, and +/-0.22 log10 cfu/g, respectively. Higher uncertainty was associated with Karmali agar for Campylobacter enumeration in artificially inoculated minced meat (+/-0.48 log10 cfu/g). The general performance of the CFA medium was comparable with mCCDA performance at the level of artificially contaminated samples. However, when tested on naturally contaminated samples, non-Campylobacter colonies gave a deep red colour similar to that given by typical Campylobacter growth on CFA; such colonies were not easily distinguishable by the naked eye. In general, the overall reproducibility, repeatability, and measurement uncertainty estimated by our study indicate that there are no major problems with the precision of the International Organization for Standardization (ISO) 10272-2:2006 protocol for Campylobacter enumeration using mCCDA medium.

  6. Handling properties of diverse automobiles and correlation with full scale response data. [driver/vehicle response to aerodynamic disturbances

    NASA Technical Reports Server (NTRS)

    Hoh, R. H.; Weir, D. H.

    1973-01-01

    Driver/vehicle response and performance of a variety of vehicles in the presence of aerodynamic disturbances are discussed. Steering control is emphasized. The vehicles include full size station wagon, sedan, compact sedan, van, pickup truck/camper, and wagon towing trailer. Driver/vehicle analyses are used to estimate response and performance. These estimates are correlated with full scale data with test drivers and the results are used to refine the driver/vehicle models, control structure, and loop closure criteria. The analyses and data indicate that the driver adjusts his steering control properties (when he can) to achieve roughly the same level of performance despite vehicle variations. For the more disturbance susceptible vehicles, such as the van, the driver tightens up his control. Other vehicles have handling dynamics which cause him to loosen his control response, even though performance degrades.

  7. The association of very-low-density lipoprotein with ankle-brachial index in peritoneal dialysis patients with controlled serum low-density lipoprotein cholesterol level

    PubMed Central

    2013-01-01

    Background Peripheral artery disease (PAD) represents atherosclerotic disease and is a risk factor for death in peritoneal dialysis (PD) patients, who tend to show an atherogenic lipid profile. In this study, we investigated the relationship between lipid profile and ankle-brachial index (ABI) as an index of atherosclerosis in PD patients with controlled serum low-density lipoprotein (LDL) cholesterol level. Methods Thirty-five PD patients, whose serum LDL cholesterol level was controlled at less than 120mg/dl, were enrolled in this cross-sectional study in Japan. The proportions of cholesterol level to total cholesterol level (cholesterol proportion) in 20 lipoprotein fractions and the mean size of lipoprotein particles were measured using an improved method, namely, high-performance gel permeation chromatography. Multivariate linear regression analysis was adjusted for diabetes mellitus and cardiovascular and/or cerebrovascular diseases. Results The mean (standard deviation) age was 61.6 (10.5) years; PD vintage, 38.5 (28.1) months; ABI, 1.07 (0.22). A low ABI (0.9 or lower) was observed in 7 patients (low-ABI group). The low-ABI group showed significantly higher cholesterol proportions in the chylomicron fraction and large very-low-density lipoproteins (VLDLs) (Fractions 3–5) than the high-ABI group (ABI>0.9). Adjusted multivariate linear regression analysis showed that ABI was negatively associated with serum VLDL cholesterol level (parameter estimate=-0.00566, p=0.0074); the cholesterol proportions in large VLDLs (Fraction 4, parameter estimate=-3.82, p=0.038; Fraction 5, parameter estimate=-3.62, p=0.0039) and medium VLDL (Fraction 6, parameter estimate=-3.25, p=0.014); and the size of VLDL particles (parameter estimate=-0.0352, p=0.032). Conclusions This study showed that the characteristics of VLDL particles were associated with ABI among PD patients. Lowering serum VLDL level may be an effective therapy against atherosclerosis in PD patients after the control of serum LDL cholesterol level. PMID:24093487

  8. Estimating Power System Dynamic States Using Extended Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw

    2014-10-31

    The state estimation tools which are currently deployed in power system control rooms are based on a steady state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated “dynamic state estimation” includes true system dynamics reflected in differential equations, unlike previously proposed “dynamic state estimation” which only considers time-variant snapshots based on steady state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
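
    For readers unfamiliar with the machinery, the following is a generic Extended Kalman Filter predict/update loop on a toy nonlinear system (a pendulum); it is not the multi-machine power-system model used in the paper, and all noise settings are assumptions.

      import numpy as np

      dt = 0.01
      Q = np.diag([1e-6, 1e-4])        # process noise covariance (assumed)
      R = np.array([[1e-3]])           # measurement noise covariance (assumed)

      def f(x):                        # state transition: x = [angle, angular rate]
          return np.array([x[0] + dt * x[1], x[1] - dt * 9.81 * np.sin(x[0])])

      def F(x):                        # Jacobian of f
          return np.array([[1.0, dt], [-dt * 9.81 * np.cos(x[0]), 1.0]])

      H = np.array([[1.0, 0.0]])       # we measure the angle only

      def ekf_step(x, P, z):
          # predict
          x_pred = f(x)
          P_pred = F(x) @ P @ F(x).T + Q
          # update
          y = z - H @ x_pred                                   # innovation
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)                  # Kalman gain
          x_new = x_pred + (K @ y).ravel()
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      # Run the filter on simulated noisy angle measurements.
      rng = np.random.default_rng(3)
      x_true, x_est, P = np.array([0.5, 0.0]), np.array([0.0, 0.0]), np.eye(2)
      for _ in range(500):
          x_true = f(x_true)
          z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
          x_est, P = ekf_step(x_est, P, z)
      print("true state:", x_true, " estimate:", x_est)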

  9. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for transform block sizes of order 4, 8, 16, and 32 and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of size 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of HEVC encoders with 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
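
    To illustrate the kind of addition-only butterfly such a scheme builds on, here is a plain (unnormalised) 4-point Walsh-Hadamard transform; it is not the paper's newly designed conversion matrices, and the input values are arbitrary.

      def wht4(x):
          """4-point Walsh-Hadamard transform (Hadamard order), additions/subtractions only."""
          a, b, c, d = x
          # stage 1 butterflies
          s0, s1 = a + b, a - b
          s2, s3 = c + d, c - d
          # stage 2 butterflies
          return [s0 + s2, s1 + s3, s0 - s2, s1 - s3]

      residual = [3, 1, -2, 4]          # e.g. a row of prediction residuals
      print(wht4(residual))             # -> [6, -4, 2, 8]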

  10. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite dimensional (fixed effects and variance components) and infinite dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can make the assumption that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In this latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application on real data are performed. © 2015, The International Biometric Society.

  11. Jet noise suppressor nozzle development for augmentor wing jet STOL research aircraft (C-8A Buffalo)

    NASA Technical Reports Server (NTRS)

    Harkonen, D. L.; Marks, C. C.; Okeefe, J. V.

    1974-01-01

    Noise and performance test results are presented for a full-scale advanced design rectangular array lobe jet suppressor nozzle (plain wall and corrugated). Flight design and installation considerations are also discussed. Noise data are presented in terms of peak PNLT (perceived noise level, tone corrected) suppression relative to the existing airplane and one-third octave-band spectra. Nozzle performance is presented in terms of velocity coefficient. Estimates of the hot thrust available during emergency (engine out) with the suppressor nozzle installed are compared with the current thrust levels produced by the round convergent nozzles.

  12. Family stress and adolescents' cognitive functioning: sleep as a protective factor.

    PubMed

    El-Sheikh, Mona; Tu, Kelly M; Erath, Stephen A; Buckhalt, Joseph A

    2014-12-01

    We examined 2 sleep-wake parameters as moderators of the associations between exposure to family stressors and adolescent cognitive functioning. Participants were 252 school-recruited adolescents (M = 15.79 years; 66% European American, 34% African American). Youths reported on 3 dimensions of family stress: marital conflict, harsh parenting, and parental psychological control. Cognitive functioning was indexed through performance on the Woodcock-Johnson III Tests of Cognitive Abilities. Sleep minutes and efficiency were measured objectively using actigraphy. Toward identifying unique effects, path models controlled for 2 family stress variables while estimating the third. Analyses revealed that sleep efficiency moderated the associations between negative parenting (harsh parenting and parental psychological control) and adolescents' cognitive functioning. The highest level of cognitive performance was predicted for adolescents with higher levels of sleep efficiency in conjunction with lower levels of either harsh parenting or psychological control. The effects of sleep were more pronounced at lower levels of negative parenting, in which adolescents with higher sleep efficiency performed better than their counterparts with poorer sleep. At higher levels of either harsh parenting or psychological control, similar levels of cognitive performance were observed regardless of sleep. Results are discussed in comparison with other recent studies on interrelations among family stress, sleep, and cognitive performance in childhood and adolescence.

  13. Family Stress and Adolescents’ Cognitive Functioning: Sleep as a Protective Factor

    PubMed Central

    El-Sheikh, Mona; Tu, Kelly M.; Erath, Stephen A.; Buckhalt, Joseph A.

    2014-01-01

    We examined two sleep-wake parameters as moderators of the associations between exposure to family stressors and adolescent cognitive functioning. Participants were 252 school-recruited adolescents (M = 15.79 years; 66% European American, 34% African American). Youths reported on three dimensions of family stress: marital conflict, harsh parenting, and parental psychological control. Cognitive functioning was indexed through performance on the Woodcock-Johnson III Tests of Cognitive Abilities. Sleep minutes and efficiency were measured objectively using actigraphy. Towards identifying unique effects, path models controlled for two family stress variables while estimating the third. Analyses revealed that sleep efficiency moderated the associations between negative parenting (harsh parenting and parental psychological control) and adolescents’ cognitive functioning. The highest level of cognitive performance was predicted for adolescents with higher levels of sleep efficiency in conjunction with lower levels of either harsh parenting or psychological control. The effects of sleep were more pronounced at lower levels of negative parenting where adolescents with higher sleep efficiency performed better than their counterparts with poorer sleep. At higher levels of either harsh parenting or psychological control, similar levels of cognitive performance were observed regardless of sleep. Results are discussed in comparison to other recent studies on interrelations among family stress, sleep, and cognitive performance in childhood and adolescence. PMID:25329625

  14. Estimation of Chlorophyll-a Concentration and the Trophic State of the Barra Bonita Hydroelectric Reservoir Using OLI/Landsat-8 Images

    PubMed Central

    Watanabe, Fernanda Sayuri Yoshino; Alcântara, Enner; Rodrigues, Thanan Walesza Pequeno; Imai, Nilton Nobuhiro; Barbosa, Cláudio Clemente Faria; Rotta, Luiz Henrique da Silva

    2015-01-01

    Reservoirs are artificial environments built by humans, and the impacts of these environments are not completely known. Retention time and high nutrient availability in the water increase the eutrophic level. Eutrophication is directly correlated to primary productivity by phytoplankton. These organisms have an important role in the environment. However, high concentrations of certain species can lead to public health problems. Species of cyanobacteria produce toxins that at certain concentrations can cause serious diseases of the liver and nervous system, which could lead to death. Phytoplankton has photoactive pigments that can be used to identify these toxins. Thus, remote sensing data is a viable alternative for mapping these pigments and, consequently, the trophic state. Chlorophyll-a (Chl-a) is present in all phytoplankton species. Therefore, the aim of this work was to evaluate the performance of images from the Operational Land Imager (OLI) sensor onboard the Landsat-8 satellite in determining Chl-a concentrations and estimating the trophic level in a tropical reservoir. Empirical models were fitted using data from two field surveys conducted in May and October 2014 (Austral Autumn and Austral Spring, respectively). The models were applied to a temporal series of OLI images from May 2013 to October 2014. The estimated Chl-a concentration was used to classify the trophic level using a trophic state index that adopts the concentration of this pigment as its parameter. The models of Chl-a concentration showed reasonable results, but their performance was likely impaired by the atmospheric correction. Consequently, the trophic level classification also did not achieve better results.
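
    A generic sketch of the empirical approach described (a band-ratio regression against field Chl-a followed by a threshold-based trophic classification); the band combination, match-up data, fitted coefficients and class cut-offs below are illustrative assumptions, not the model or index used in the study.

      import numpy as np

      # Hypothetical match-ups: OLI band-5/band-4 (NIR/red) reflectance ratio vs field Chl-a
      ratio = np.array([0.35, 0.50, 0.80, 1.10, 1.60, 2.10])
      chla = np.array([4.0, 9.0, 22.0, 45.0, 110.0, 240.0])      # ug/L

      # Log-linear empirical model: log10(Chl-a) = a + b * ratio
      b, a = np.polyfit(ratio, np.log10(chla), 1)

      def estimate_chla(band5_over_band4):
          return 10 ** (a + b * band5_over_band4)

      def trophic_class(chl):                    # illustrative cut-offs only
          if chl < 10:
              return "oligo/mesotrophic"
          if chl < 30:
              return "eutrophic"
          return "hypereutrophic"

      for r in (0.4, 0.9, 1.8):
          c = estimate_chla(r)
          print(f"ratio {r:.1f} -> Chl-a ~ {c:.0f} ug/L ({trophic_class(c)})")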

  15. Soils Activity Mobility Study: Methodology and Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2014-09-29

    This report presents a three-level approach for estimation of sediment transport to provide an assessment of potential erosion risk for sites at the Nevada National Security Site (NNSS) that are posted for radiological purposes and where migration is suspected or known to occur due to storm runoff. Based on the assessed risk, the appropriate level of effort can be determined for analysis of radiological surveys, field experiments to quantify erosion and transport rates, and long-term monitoring. The method is demonstrated at contaminated sites, including Plutonium Valley, Shasta, Smoky, and T-1. The Pacific Southwest Interagency Committee (PSIAC) procedure is selected as the Level 1 analysis tool. The PSIAC method provides an estimation of the total annual sediment yield based on factors derived from the climatic and physical characteristics of a watershed. If the results indicate low risk, then further analysis is not warranted. If the Level 1 analysis indicates high risk or is deemed uncertain, a Level 2 analysis using the Modified Universal Soil Loss Equation (MUSLE) is proposed. In addition, if a sediment yield for a storm event rather than an annual sediment yield is needed, then the proposed Level 2 analysis should be performed. MUSLE only provides sheet and rill erosion estimates. The U.S. Army Corps of Engineers Hydrologic Engineering Center-Hydrologic Modeling System (HEC-HMS) provides storm peak runoff rate and storm volumes, the inputs necessary for MUSLE. Channel Sediment Transport (CHAN-SED) I and II models are proposed for estimating sediment deposition or erosion in a channel reach from a storm event. These models require storm hydrograph associated sediment concentration and bed load particle size distribution data. When the Level 2 analysis indicates high risk for sediment yield and associated contaminant migration or when there is high uncertainty in the Level 2 results, the sites can be further evaluated with a Level 3 analysis using more complex and labor- and data-intensive methods. For the watersheds analyzed in this report using the Level 1 PSIAC method, the risk of erosion is low. The field reconnaissance surveys of these watersheds confirm the conclusion that the sediment yield of undisturbed areas at the NNSS would be low. The climate, geology, soils, ground cover, land use, and runoff potential are similar among these watersheds. There are no well-defined ephemeral channels except at the Smoky and Plutonium Valley sites. Topography seems to have the strongest influence on sediment yields, as sediment yields are higher on the steeper hill slopes. Lack of measured sediment yield data at the NNSS does not allow for a direct evaluation of the yield estimates by the PSIAC method. Level 2 MUSLE estimates in all the analyzed watersheds except Shasta are a small percentage of the estimates from PSIAC because MUSLE is not inclusive of channel erosion. This indicates that channel erosion dominates the total sediment yield in these watersheds. Annual sediment yields for these watersheds are estimated using the CHAN-SEDI and CHAN-SEDII channel sediment transport models. Both transport models give similar results and exceed the estimates obtained from PSIAC and MUSLE. It is recommended that the total watershed sediment yield of watersheds at the NNSS with flow channels be obtained by adding the washload estimate (rill and inter-rill erosion) from MUSLE to that obtained from channel transport models (bed load and suspended sediment).
PSIAC will give comparable results if factor scores for channel erosion are revised towards the high erosion level. Application of the Level 3 process-based models to estimate sediment yields at the NNSS cannot be recommended at this time. Increased model complexity alone will not improve the certainty of the sediment yield estimates. Models must be calibrated against measured data before model results are accepted as certain. Because no measurements of sediment yields at the NNSS are available, model validation cannot be performed. This is also true for the models used in the Level 2 analyses presented in this study. The need to calibrate MUSLE to local conditions has been discussed. Likewise, the transport equations of CHAN-SEDI and CHAN-SEDII need to be calibrated against local data to assess their applicability under semi-arid conditions and for the ephemeral channels at the NNSS. Before these validation and calibration exercises can be undertaken, a long-term measured sediment yield data set must be developed. The importance of developing long-term measured sediment yield data cannot be overemphasized. Long-term monitoring is essential for accurate characterization of watershed processes. It is recommended that a long-term monitoring program be set up to measure watershed erosion rates and channel sediment transport rates.

  16. Automated Transition State Theory Calculations for High-Throughput Kinetics.

    PubMed

    Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H

    2017-09-21

    A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
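
    For reference, the canonical transition state theory expression that such a kinetics calculator ultimately evaluates can be written in Eyring form, k(T) = kappa * (kB*T/h) * exp(-dG_act/(R*T)); the barrier and transmission factor in the sketch below are illustrative assumptions, not AutoTST outputs.

      import math

      kB = 1.380649e-23      # J/K
      h = 6.62607015e-34     # J*s
      R = 8.314462618        # J/(mol*K)

      def tst_rate(T, dG_act_kJmol, kappa=1.0):
          """First-order TST rate constant in s^-1 for an activation free energy in kJ/mol."""
          return kappa * (kB * T / h) * math.exp(-dG_act_kJmol * 1e3 / (R * T))

      for T in (300.0, 600.0, 1000.0):
          print(f"T = {T:6.1f} K  k = {tst_rate(T, 100.0):.3e} s^-1")   # assumed 100 kJ/mol barrier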

  17. Socioeconomic Determinants of Antibiotic Consumption in the State of São Paulo, Brazil: The Effect of Restricting Over-The-Counter Sales.

    PubMed

    Kliemann, Breno S; Levin, Anna S; Moura, M Luísa; Boszczowski, Icaro; Lewis, James J

    2016-01-01

    Improper antibiotic use is one of the main drivers of bacterial resistance to antibiotics, increasing infectious disease morbidity and mortality and raising healthcare costs. The level of antibiotic consumption has been shown to vary according to socioeconomic determinants (SED) such as income and access to education. In many Latin American countries, antibiotics could be easily purchased without a medical prescription in private pharmacies before enforcement of restrictions on over-the-counter (OTC) sales in recent years. Brazil issued a law abolishing OTC sales in October 2010. This study seeks to identify SED of antibiotic consumption in the Brazilian state of São Paulo (SSP) and to estimate the impact of the 2010 law. Data on all oral antibiotic sales that occurred in the private sector in SSP from 2008 to 2012 were pooled into the 645 municipalities of SSP. Linear regression was performed to estimate the consumption levels that would have occurred in 2011 and 2012 if no law regulating OTC sales had been issued in 2010. These values were compared to the actual observed levels, estimating the effect of this law. Linear regression was also performed to find associations of antibiotic consumption levels, and of a greater effect of the law, with municipality-level data on SED obtained from a nationwide census. Oral antibiotic consumption in SSP rose from 8.44 defined daily doses per 1,000 inhabitants per day (DID) in 2008 to 9.95 in 2010, and fell to 8.06 DID in 2012. Determinants of higher consumption were a higher human development index, percentage of urban population, density of private health establishments, life expectancy and percentage of females, and lower illiteracy levels and a lower percentage of the population between 5 and 15 years old. A higher percentage of females was associated with a stronger effect of the law. SSP had antibiotic consumption levels similar to those of Brazil as a whole, and they were effectively reduced by the policy.
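
    A sketch of the counterfactual comparison described: fit a linear trend to pre-law annual consumption, extrapolate it to 2011-2012, and compare with the observed levels. Only the 2008, 2010 and 2012 values below are quoted in the abstract; the 2009 and 2011 values are illustrative placeholders, not study data.

      import numpy as np

      pre_years = np.array([2008, 2009, 2010])
      pre_did = np.array([8.44, 9.20, 9.95])          # DID; 2009 value is assumed
      post_years = np.array([2011, 2012])
      observed_post = np.array([8.90, 8.06])          # DID; 2011 value is assumed

      slope, intercept = np.polyfit(pre_years, pre_did, 1)
      expected_post = slope * post_years + intercept   # consumption had no law been issued

      for yr, obs, exp in zip(post_years, observed_post, expected_post):
          print(f"{yr}: observed {obs:.2f} DID, expected {exp:.2f} DID, "
                f"estimated law effect {obs - exp:+.2f} DID")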

  18. Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels

    NASA Technical Reports Server (NTRS)

    Moher, Michael L.; Lodge, John H.

    1990-01-01

    A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.

  19. Population pharmacokinetic characterization of BAY 81-8973, a full-length recombinant factor VIII: lessons learned - importance of including samples with factor VIII levels below the quantitation limit.

    PubMed

    Garmann, D; McLeay, S; Shah, A; Vis, P; Maas Enriquez, M; Ploeger, B A

    2017-07-01

    The pharmacokinetics (PK), safety and efficacy of BAY 81-8973, a full-length, unmodified, recombinant human factor VIII (FVIII), were evaluated in the LEOPOLD trials. The aim of this study was to develop a population PK model based on pooled data from the LEOPOLD trials and to investigate the importance of including samples with FVIII levels below the limit of quantitation (BLQ) to estimate half-life. The analysis included 1535 PK observations (measured by the chromogenic assay) from 183 male patients with haemophilia A aged 1-61 years from the 3 LEOPOLD trials. The limit of quantitation was 1.5 IU/dL for the majority of samples. Population PK models that included or excluded BLQ samples were used for FVIII half-life estimations, and simulations were performed using both estimates to explore the influence on the time below a determined FVIII threshold. In the data set used, approximately 16.5% of samples were BLQ, which is not uncommon for FVIII PK data sets. The structural model to describe the PK of BAY 81-8973 was a two-compartment model similar to that seen for other FVIII products. If BLQ samples were excluded from the model, FVIII half-life estimations were longer compared with a model that included BLQ samples. It is essential to assess the importance of BLQ samples when performing population PK estimates of half-life for any FVIII product. Exclusion of BLQ data from half-life estimations based on population PK models may result in an overestimation of half-life and underestimation of time under a predetermined FVIII threshold, resulting in potential underdosing of patients. © 2017 Bayer AG. Haemophilia Published by John Wiley & Sons Ltd.
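
    A toy illustration of the mechanism (a single selection step, not the study's population PK model): subjects with fast elimination are exactly those whose late samples fall below the limit of quantitation, so an analysis that simply discards BLQ data preferentially discards short half-lives and overestimates the population half-life. The dose, sampling time and variability below are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      n_subj = 5000
      loq = 1.5                                         # IU/dL, as in the trials
      t_late = 48.0                                     # late post-dose sample (hours)
      c0 = 40.0                                         # assumed recovery at t = 0

      half_life = 12.0 * np.exp(rng.normal(0.0, 0.3, n_subj))   # between-subject variability
      c_late = c0 * 2.0 ** (-t_late / half_life)

      blq = c_late < loq                                # these subjects' 48 h samples are BLQ
      print(f"fraction of subjects with a BLQ sample at 48 h : {blq.mean():.1%}")
      print(f"true population mean half-life                 : {half_life.mean():.1f} h")
      print(f"mean half-life if BLQ subjects are discarded   : {half_life[~blq].mean():.1f} h")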

  20. Maternal docosahexaenoic acid intake levels during pregnancy and infant performance on a novel object search task at 22 months.

    PubMed

    Rees, Alison; Sirois, Sylvain; Wearden, Alison

    2014-01-01

    This study investigated maternal prenatal docosahexaenoic acid (DHA) intake and infant cognitive development at 22 months. Estimates for second- and third-trimester maternal DHA intake levels were obtained using a comprehensive Food Frequency Questionnaire. Infants (n = 67) were assessed at 22 months on a novel object search task. Mothers' DHA intake levels were divided into high or low groups, with analyses revealing a significant positive effect of third-trimester DHA on object search task performance. The third trimester appears to be a critical time for ensuring adequate maternal DHA levels to facilitate optimum cognitive development in late infancy. © 2014 The Authors. Child Development published by Wiley Periodicals, Inc. on behalf of Society for Research in Child Development.

  1. Visual scanning behavior and pilot workload

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.

    1981-01-01

    An experimental paradigm and a set of results are presented that demonstrate a relationship between the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary as a function of the level of difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased as a function of the estimated skill level of the pilots, with novices being affected by the loading task much more than the experts. The results suggest that visual scanning of instruments in a controlled task may be an indicator of both workload and skill.

  2. A quantitative link between face discrimination deficits and neuronal selectivity for faces in autism

    PubMed Central

    Jiang, Xiong; Bollich, Angela; Cox, Patrick; Hyder, Eric; James, Joette; Gowani, Saqib Ali; Hadjikhani, Nouchine; Blanz, Volker; Manoach, Dara S.; Barton, Jason J.S.; Gaillard, William D.; Riesenhuber, Maximilian

    2013-01-01

    Individuals with Autism Spectrum Disorder (ASD) appear to show a general face discrimination deficit across a range of tasks including social–emotional judgments as well as identification and discrimination. However, functional magnetic resonance imaging (fMRI) studies probing the neural bases of these behavioral differences have produced conflicting results: while some studies have reported reduced or no activity to faces in ASD in the Fusiform Face Area (FFA), a key region in human face processing, others have suggested more typical activation levels, possibly reflecting limitations of conventional fMRI techniques to characterize neuron-level processing. Here, we test the hypotheses that face discrimination abilities are highly heterogeneous in ASD and are mediated by FFA neurons, with differences in face discrimination abilities being quantitatively linked to variations in the estimated selectivity of face neurons in the FFA. Behavioral results revealed a wide distribution of face discrimination performance in ASD, ranging from typical performance to chance level performance. Despite this heterogeneity in perceptual abilities, individual face discrimination performance was well predicted by neural selectivity to faces in the FFA, estimated via both a novel analysis of local voxel-wise correlations, and the more commonly used fMRI rapid adaptation technique. Thus, face processing in ASD appears to rely on the FFA as in typical individuals, differing quantitatively but not qualitatively. These results for the first time mechanistically link variations in the ASD phenotype to specific differences in the typical face processing circuit, identifying promising targets for interventions. PMID:24179786

  3. Simple prognostic model for patients with advanced cancer based on performance status.

    PubMed

    Jang, Raymond W; Caraiscos, Valerie B; Swami, Nadia; Banerjee, Subrata; Mak, Ernie; Kaya, Ebru; Rodin, Gary; Bryson, John; Ridley, Julia Z; Le, Lisa W; Zimmermann, Camilla

    2014-09-01

    Providing survival estimates is important for decision making in oncology care. The purpose of this study was to provide survival estimates for outpatients with advanced cancer, using the Eastern Cooperative Oncology Group (ECOG), Palliative Performance Scale (PPS), and Karnofsky Performance Status (KPS) scales, and to compare their ability to predict survival. ECOG, PPS, and KPS were completed by physicians for each new patient attending the Princess Margaret Cancer Centre outpatient Oncology Palliative Care Clinic (OPCC) from April 2007 to February 2010. Survival analysis was performed using the Kaplan-Meier method. The log-rank test for trend was employed to test for differences in survival curves for each level of performance status (PS), and the concordance index (C-statistic) was used to test the predictive discriminatory ability of each PS measure. Measures were completed for 1,655 patients. PS delineated survival well for all three scales according to the log-rank test for trend (P < .001). Survival was approximately halved for each worsening performance level. Median survival times, in days, for each ECOG level were: ECOG 0, 293; ECOG 1, 197; ECOG 2, 104; ECOG 3, 55; and ECOG 4, 25.5. Median survival times, in days, for PPS (and KPS) were: PPS/KPS 80-100, 221 (215); PPS/KPS 60-70, 115 (119); PPS/KPS 40-50, 51 (49); PPS/KPS 10-30, 22 (29). The C-statistic was similar for all three scales and ranged from 0.63 to 0.64. We present a simple tool that uses PS alone to prognosticate in advanced cancer and has discriminatory ability similar to that of more complex models. Copyright © 2014 by American Society of Clinical Oncology.
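    A minimal sketch of this kind of analysis using the lifelines package, on made-up records (the column values below are placeholders, not the Princess Margaret data):

      import pandas as pd
      from lifelines import KaplanMeierFitter
      from lifelines.statistics import multivariate_logrank_test
      from lifelines.utils import concordance_index

      # Hypothetical outpatient records: survival (days), death indicator, ECOG level.
      df = pd.DataFrame({
          "days":  [293, 197, 104, 55, 25, 310, 180, 90, 60, 20],
          "event": [1,   1,   1,   1,  1,  0,   1,   1,  1,  1],
          "ecog":  [0,   1,   2,   3,  4,  0,   1,   2,  3,  4],
      })

      # Kaplan-Meier median survival for each performance-status level.
      for level, grp in df.groupby("ecog"):
          kmf = KaplanMeierFitter().fit(grp["days"], event_observed=grp["event"])
          print(level, kmf.median_survival_time_)

      # Log-rank test across levels, and a simple concordance index using ECOG itself
      # (negated so that higher scores correspond to longer survival) as the predictor.
      print(multivariate_logrank_test(df["days"], df["ecog"], df["event"]).p_value)
      print(concordance_index(df["days"], -df["ecog"], df["event"]))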

  4. Computer-Aided TRIZ Ideality and Level of Invention Estimation Using Natural Language Processing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Adams, Christopher; Tate, Derrick

    Patent textual descriptions provide a wealth of information that can be used to understand the underlying design approaches that result in the generation of novel and innovative technology. This article will discuss a new approach for estimating Degree of Ideality and Level of Invention metrics from the theory of inventive problem solving (TRIZ) using patent textual information. Patent text includes information that can be used to model both the functions performed by a design and the associated costs and problems that affect a design’s value. The motivation of this research is to use patent data with calculation of TRIZ metrics to help designers understand which combinations of system components and functions result in creative and innovative design solutions. This article will discuss in detail methods to estimate these TRIZ metrics using natural language processing and machine learning with the use of neural networks.

  5. Using the Detectability Index to Predict P300 Speller Performance

    PubMed Central

    Mainsah, B.O.; Collins, L.M.; Throckmorton, C.S.

    2017-01-01

    Objective. The P300 speller is a popular brain-computer interface (BCI) system that has been investigated as a potential communication alternative for individuals with severe neuromuscular limitations. To achieve acceptable accuracy levels for communication, the system requires repeated data measurements in a given signal condition to enhance the signal-to-noise ratio of elicited brain responses. These elicited brain responses, which are used as control signals, are embedded in noisy electroencephalography (EEG) data. The discriminability between target and non-target EEG responses defines a user’s performance with the system. A previous P300 speller model has been proposed to estimate system accuracy given a certain amount of data collection. However, the approach was limited to a static stopping algorithm, i.e. averaging over a fixed number of measurements, and the row-column paradigm. A generalized method that is also applicable to dynamic stopping algorithms and other stimulus paradigms is desirable. Approach. We developed a new probabilistic model-based approach to predicting BCI performance, where performance functions can be derived analytically or via Monte Carlo methods. Within this framework, we introduce a new model for the P300 speller with the Bayesian dynamic stopping (DS) algorithm, by simplifying a multi-hypothesis to a binary hypothesis problem using the likelihood ratio test. Under a normality assumption, the performance functions for the Bayesian algorithm can be parameterized with the detectability index, a measure which quantifies the discriminability between target and non-target EEG responses. Main results. Simulations with synthetic and empirical data provided initial verification of the proposed method of estimating performance with Bayesian DS using the detectability index. Analysis of results from previous online studies validated the proposed method. Significance. The proposed method could serve as a useful tool to initially assess BCI performance without extensive online testing, in order to estimate the amount of data required to achieve a desired accuracy level. PMID:27705956
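    For orientation, the detectability index mentioned above is a d-prime-style separation between target and non-target classifier scores; a minimal numpy illustration with synthetic, equal-variance Gaussian scores (an assumption, not the authors' data):

      import numpy as np

      rng = np.random.default_rng(0)
      target = rng.normal(1.0, 1.0, 500)        # scores for target flashes (synthetic)
      nontarget = rng.normal(0.0, 1.0, 5000)    # scores for non-target flashes (synthetic)

      # Detectability index: mean separation scaled by the pooled standard deviation.
      d = (target.mean() - nontarget.mean()) / np.sqrt(
          0.5 * (target.var(ddof=1) + nontarget.var(ddof=1)))
      print(f"detectability index d = {d:.2f}")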

  6. Using the detectability index to predict P300 speller performance

    NASA Astrophysics Data System (ADS)

    Mainsah, B. O.; Collins, L. M.; Throckmorton, C. S.

    2016-12-01

    Objective. The P300 speller is a popular brain-computer interface (BCI) system that has been investigated as a potential communication alternative for individuals with severe neuromuscular limitations. To achieve acceptable accuracy levels for communication, the system requires repeated data measurements in a given signal condition to enhance the signal-to-noise ratio of elicited brain responses. These elicited brain responses, which are used as control signals, are embedded in noisy electroencephalography (EEG) data. The discriminability between target and non-target EEG responses defines a user’s performance with the system. A previous P300 speller model has been proposed to estimate system accuracy given a certain amount of data collection. However, the approach was limited to a static stopping algorithm, i.e. averaging over a fixed number of measurements, and the row-column paradigm. A generalized method that is also applicable to dynamic stopping (DS) algorithms and other stimulus paradigms is desirable. Approach. We developed a new probabilistic model-based approach to predicting BCI performance, where performance functions can be derived analytically or via Monte Carlo methods. Within this framework, we introduce a new model for the P300 speller with the Bayesian DS algorithm, by simplifying a multi-hypothesis to a binary hypothesis problem using the likelihood ratio test. Under a normality assumption, the performance functions for the Bayesian algorithm can be parameterized with the detectability index, a measure which quantifies the discriminability between target and non-target EEG responses. Main results. Simulations with synthetic and empirical data provided initial verification of the proposed method of estimating performance with Bayesian DS using the detectability index. Analysis of results from previous online studies validated the proposed method. Significance. The proposed method could serve as a useful tool to initially assess BCI performance without extensive online testing, in order to estimate the amount of data required to achieve a desired accuracy level.

  7. Development and Release of a GRACE-FO "Grand Simulation" Data Set by JPL

    NASA Astrophysics Data System (ADS)

    Fahnestock, E.; Yuan, D. N.; Wiese, D. N.; McCullough, C. M.; Harvey, N.; Sakumura, C.; Paik, M.; Bertiger, W. I.; Wen, H. Y.; Kruizinga, G. L. H.

    2017-12-01

    The GRACE-FO mission, to be launched early in 2018, will require several stages of data processing to be performed within its Science Data System (SDS). In an effort to demonstrate effective implementation and inter-operation of this level 1, 2, and 3 data processing, and to verify its combined ability to recover a truth Earth gravity field to within top-level requirements, the SDS team has performed a system test which it has termed the "Grand Simulation". This process starts with iteration to converge on a mutually consistent integrated truth orbit, non-gravitational acceleration time history, and spacecraft attitude time history, generated with the truth models for all elements of the integrated system (geopotential, both GRACE-FO spacecraft, constellation of GPS spacecraft, etc.). Level 1A data products are generated and then the GPS time to onboard receiver time clock error is introduced into those products according to a realistic truth clock offset model. The various data products are noised according to current best estimate noise models, and then some are used within a precision orbit determination and clock offset estimation/recovery process. Processing from level 1A to level 1B data products uses the recovered clock offset to correct back to GPS time, and performs gap-filling, compression, etc. This exercises nearly all software pathways intended for processing actual GRACE-FO science data. Finally, a monthly gravity field is recovered and compared against the truth background field. In this talk we briefly summarize the resulting performance vs. requirements, and lessons learned in the system test process. Finally, we provide information for use of the level 1B data set by the general community for gravity solution studies and software trials in anticipation of operational GRACE-FO data. ©2016 California Institute of Technology. Government sponsorship acknowledged.

  8. Global DNA hypomethylation in peripheral blood leukocytes as a biomarker for cancer risk: a meta-analysis.

    PubMed

    Woo, Hae Dong; Kim, Jeongseon

    2012-01-01

    Good biomarkers for early detection of cancer lead to better prognosis. However, harvesting tumor tissue is invasive and cannot be routinely performed. Global DNA methylation of peripheral blood leukocyte DNA was evaluated as a biomarker for cancer risk. We performed a meta-analysis to estimate overall cancer risk according to global DNA hypomethylation levels among studies with various cancer types and analytical methods used to measure DNA methylation. Studies were systematically searched via PubMed with no language limitation up to July 2011. Summary estimates were calculated using a fixed effects model. The subgroup analyses by experimental methods to determine DNA methylation level were performed due to heterogeneity within the selected studies (p < 0.001, I² = 80%). Heterogeneity was not found in the subgroup of %5-mC (p = 0.393, I² = 0%) and LINE-1 studies using the same target sequence (p = 0.097, I² = 49%), whereas considerable variance remained in LINE-1 (p < 0.001, I² = 80%) and bladder cancer studies (p = 0.016, I² = 76%). These results suggest that experimental methods used to quantify global DNA methylation levels are important factors in association studies between hypomethylation levels and cancer risk. Overall, cancer risks of the group with the lowest DNA methylation levels were significantly higher compared to the group with the highest methylation levels [OR (95% CI): 1.48 (1.28-1.70)]. Global DNA hypomethylation in peripheral blood leukocytes may be a suitable biomarker for cancer risk. However, the association between global DNA methylation and cancer risk may differ based on the experimental method, the region of DNA targeted for measuring global hypomethylation levels, and the cancer type. Therefore, it is important to select a precise and accurate surrogate marker for global DNA methylation levels in association studies between global DNA methylation levels in peripheral leukocytes and cancer risk.
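    The pooling step behind such a summary estimate can be sketched as inverse-variance fixed-effects averaging of log odds ratios with an I² heterogeneity check; the per-study values below are made up purely for illustration:

      import numpy as np

      log_or = np.array([0.45, 0.30, 0.55, 0.20, 0.60])   # per-study log ORs (illustrative)
      se     = np.array([0.20, 0.15, 0.25, 0.30, 0.22])   # their standard errors (illustrative)

      w = 1.0 / se**2                                      # inverse-variance weights
      pooled = np.sum(w * log_or) / np.sum(w)              # fixed-effects pooled log OR
      pooled_se = np.sqrt(1.0 / np.sum(w))

      q = np.sum(w * (log_or - pooled) ** 2)               # Cochran's Q
      i2 = max(0.0, (q - (len(log_or) - 1)) / q) * 100.0   # I-squared heterogeneity (%)

      lo, hi = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
      print(f"OR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")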

  9. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  10. Towards an Operational Definition of Clinical Competency in Pharmacy

    PubMed Central

    2015-01-01

    Objective. To estimate the inter-rater reliability and accuracy of ratings of competence in student pharmacist/patient clinical interactions as depicted in videotaped simulations and to compare expert panelist and typical preceptor ratings of those interactions. Methods. This study used a multifactorial experimental design to estimate inter-rater reliability and accuracy of preceptors’ assessment of student performance in clinical simulations. The study protocol used nine 5-10 minute video vignettes portraying different levels of competency in student performance in simulated clinical interactions. Intra-Class Correlation (ICC) was used to calculate inter-rater reliability and Fisher exact test was used to compare differences in distribution of scores between expert and nonexpert assessments. Results. Preceptors (n=42) across 5 states assessed the simulated performances. Intra-Class Correlation estimates were higher for 3 nonrandomized video simulations compared to the 6 randomized simulations. Preceptors more readily identified high and low student performances compared to satisfactory performances. In nearly two-thirds of the rating opportunities, a higher proportion of expert panelists than preceptors rated the student performance correctly (18 of 27 scenarios). Conclusion. Valid and reliable assessments are critically important because they affect student grades and formative student feedback. Study results indicate the need for pharmacy preceptor training in performance assessment. The process demonstrated in this study can be used to establish minimum preceptor benchmarks for future national training programs. PMID:26089563

  11. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and FA estimation biases, while a continuous decomposition of systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability on PERSIANN estimations", is introduced, while the changing behavior of existing categorical/statistical measures and error components is also seasonally analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed for PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN error characteristics vary by season due to the conditions and rainfall patterns of that season, which shows the necessity of a seasonally different approach for the calibration of this product. Overall, we believe that the error-component analyses performed in this study can substantially help further local studies for post-calibration and bias reduction of PERSIANN estimations.
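    A small sketch of the kind of contingency-table indexes and hit/miss/false-alarm bias decomposition used in this sort of evaluation; the threshold and arrays are illustrative placeholders, not the Urmia Lake data:

      import numpy as np

      def detection_scores(sat, gauge, wet=1.0):
          # sat, gauge: daily satellite estimates and reference observations (mm);
          # wet is an illustrative rain/no-rain threshold (mm/day).
          hit  = (sat >= wet) & (gauge >= wet)
          miss = (sat <  wet) & (gauge >= wet)
          fa   = (sat >= wet) & (gauge <  wet)

          pod = hit.sum() / max(hit.sum() + miss.sum(), 1)   # probability of detection
          far = fa.sum()  / max(hit.sum() + fa.sum(), 1)     # false-alarm ratio

          # Total residual split into hit, miss and false-alarm bias components.
          hit_bias  = np.sum(sat[hit] - gauge[hit])
          miss_bias = -np.sum(gauge[miss])
          fa_bias   = np.sum(sat[fa])
          return pod, far, hit_bias, miss_bias, fa_bias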

  12. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    PubMed

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depend upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  13. Estimating the intra-cluster correlation coefficient for evaluating an educational intervention program to improve rabies awareness and dog bite prevention among children in Sikkim, India: A pilot study.

    PubMed

    Auplish, Aashima; Clarke, Alison S; Van Zanten, Trent; Abel, Kate; Tham, Charmaine; Bhutia, Thinlay N; Wilks, Colin R; Stevenson, Mark A; Firestone, Simon M

    2017-05-01

    Educational initiatives targeting at-risk populations have long been recognized as a mainstay of ongoing rabies control efforts. Cluster-based studies are often utilized to assess levels of knowledge, attitudes and practices of a population in response to education campaigns. The design of cluster-based studies requires estimates of intra-cluster correlation coefficients obtained from previous studies. This study estimates the school-level intra-cluster correlation coefficient (ICC) for rabies knowledge change following an educational intervention program. A cross-sectional survey was conducted with 226 students from 7 schools in Sikkim, India, using cluster sampling. In order to assess knowledge uptake, rabies education sessions with pre- and post-session questionnaires were administered. Paired differences of proportions were estimated for questions answered correctly. A mixed effects logistic regression model was developed to estimate school-level and student-level ICCs and to test for associations between gender, age, school location and educational level. The school- and student-level ICCs for rabies knowledge and awareness were 0.04 (95% CI: 0.01, 0.19) and 0.05 (95% CI: 0.2, 0.09), respectively. These ICCs suggest design effect multipliers of 5.45 schools and 1.05 students per school will be required when estimating sample sizes and designing future cluster randomized trials. There was a good baseline level of rabies knowledge (mean pre-session score 71%); however, key knowledge gaps were identified in understanding appropriate behavior around scared dogs, potential sources of rabies and how to correctly order post-rabies-exposure precaution steps. After adjusting for the effect of gender, age, school location and education level, school and individual post-session test scores improved by 19%, with similar performance amongst boys and girls attending schools in urban and rural regions. The proportion of participants that were able to correctly order post-exposure precautionary steps following educational intervention increased by 87%. The ICC estimates presented in this study will aid in designing cluster-based studies evaluating educational interventions as part of disease control programs. This study demonstrates the likely benefits of educational intervention incorporating bite prevention and rabies education. Copyright © 2017 Elsevier B.V. All rights reserved.
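    The design-effect arithmetic implied by an ICC follows the standard formula DEFF = 1 + (m - 1) × ICC; a small sketch using the school-level ICC reported above and an assumed average cluster size (an assumption for illustration, not the authors' exact calculation):

      def design_effect(icc, cluster_size):
          # Variance inflation for cluster sampling: DEFF = 1 + (m - 1) * ICC.
          return 1.0 + (cluster_size - 1) * icc

      # Illustrative use: school-level ICC of 0.04 with roughly 226 / 7 = 32 students per school.
      deff = design_effect(0.04, 226 / 7)
      print(f"DEFF = {deff:.2f}  ->  effective sample size = {226 / deff:.0f}")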

  14. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; mean age = 20) and older (N = 20; mean age = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
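    As a rough illustration of keystroke-level prediction in this family of models, the sketch below sums per-operator times along a serial task sequence; the operator times are hypothetical placeholders, not the older-adult parameters estimated in the paper.

      # Hypothetical operator times in seconds (placeholders, not the estimated parameters).
      OPERATORS = {
          "K": 0.35,   # press a key on the phone keypad
          "P": 1.30,   # point / reposition the thumb
          "M": 1.80,   # mental preparation step
      }

      def predict_task_time(sequence):
          # Serial keystroke-level prediction: total time is the sum of operator times.
          return sum(OPERATORS[op] for op in sequence)

      # e.g. a mental step, one pointing move, then dialling three digits:
      print(predict_task_time("MPKKK"))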

  15. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; mean age = 20) and older (N = 20; mean age = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  16. Probalistic Criticality Consequence Evaluation (SCPB:N/A)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Gottlieb; J.W. Davis; J.R. Massari

    1996-09-04

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development (WPD) department with the objective of providing a comprehensive, conservative estimate of the consequences of the criticality which could possibly occur as the result of commercial spent nuclear fuel emplaced in the underground repository at Yucca Mountain. The consequences of criticality are measured principally in terms of the resulting changes in radionuclide inventory as a function of the power level and duration of the criticality. The purpose of this analysis is to extend the prior estimates of increased radionuclide inventory (Refs. 5.52 and 5.54), for both internal and external criticality. This analysis, and similar estimates and refinements to be completed before the end of fiscal year 1997, will be provided as input to Total System Performance Assessment-Viability Assessment (TSPA-VA) to demonstrate compliance with the repository performance objectives.

  17. Ridge Regression Signal Processing

    NASA Technical Reports Server (NTRS)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
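    A toy numerical sketch of the ridge idea referred to above: under poor geometry the normal-equations matrix is ill-conditioned, and a ridge term stabilizes the solution (the geometry matrix and regularization value are illustrative assumptions):

      import numpy as np

      rng = np.random.default_rng(1)
      H = np.array([[1.0, 0.99], [1.0, 1.01], [1.0, 1.00]])   # nearly collinear geometry
      x_true = np.array([2.0, -1.0])
      z = H @ x_true + rng.normal(0, 0.05, 3)                 # noisy measurements

      lam = 0.1                                               # illustrative ridge parameter
      ols   = np.linalg.solve(H.T @ H, H.T @ z)
      ridge = np.linalg.solve(H.T @ H + lam * np.eye(2), H.T @ z)
      print("condition number:", np.linalg.cond(H.T @ H))
      print("least squares:", ols, " ridge:", ridge)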

  18. Novel trace chemical detection algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros

    2017-05-01

    Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law-enforcement and the intelligence communities. Performance of these methods is impacted by the spectral signature variability due to presence of contaminants, surface roughness, nonlinear dependence on abundances as well as operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal as well as their built-in limitations and failure modes.
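    For reference, a minimal numpy sketch of the adaptive cosine estimator (ACE) statistic named above, evaluated for one spectrum against a background mean and covariance; the synthetic data and target signature are placeholders:

      import numpy as np

      def ace_statistic(x, s, bg_mean, bg_cov):
          # ACE statistic for spectrum x and target signature s against background statistics.
          xi, si = x - bg_mean, s - bg_mean
          cinv = np.linalg.inv(bg_cov)
          return (si @ cinv @ xi) ** 2 / ((si @ cinv @ si) * (xi @ cinv @ xi))

      rng = np.random.default_rng(2)
      bg = rng.normal(size=(500, 30))             # synthetic 30-band background spectra
      s = np.ones(30)                             # placeholder target signature
      x = 0.3 * s + rng.normal(size=30)           # pixel with a weak target component
      print(ace_statistic(x, s, bg.mean(axis=0), np.cov(bg, rowvar=False)))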

  19. Physical activity and motor decline in older persons.

    PubMed

    Buchman, A S; Boyle, P A; Wilson, R S; Bienias, Julia L; Bennett, D A

    2007-03-01

    We tested the hypothesis that physical activity modifies the course of age-related motor decline. More than 850 older participants of the Rush Memory and Aging Project underwent baseline assessment of physical activity and annual motor testing for up to 8 years. Nine strength measures and nine motor performance measures were summarized into composite measures of motor function. In generalized estimating equation models, global motor function declined during follow-up (estimate, -0.072; SE, 0.008; P < 0.001). Each additional hour of physical activity at baseline was associated with about a 5% decrease in the rate of global motor function decline (estimate, 0.004; SE, 0.001; P = 0.007). Secondary analyses suggested that the association of physical activity with motor decline was mostly due to the effect of physical activity on the rate of motor performance decline. Thus, higher levels of physical activity are associated with a slower rate of motor decline in older persons.

  20. Robust Low-dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization

    PubMed Central

    Zhang, Shaoting; Chen, Tsuhan; Sanelli, Pina C.

    2016-01-01

    Acute brain diseases such as acute strokes and transient ischemic attacks are the leading causes of mortality and morbidity worldwide, responsible for 9% of total deaths every year. ‘Time is brain’ is a widely accepted concept in acute cerebrovascular disease treatment. An efficient and accurate computational framework for hemodynamic parameter estimation can save critical time for thrombolytic therapy. Meanwhile, the high level of accumulated radiation dosage due to continuous image acquisition in CT perfusion (CTP) has raised concerns about patient safety and public health. However, low radiation dose leads to increased noise and artifacts, which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate the perfusion parameters at low radiation dosage. Specifically, we present a tensor total-variation (TTV) technique which fuses the spatial correlation of the vascular structure and the temporal continuation of the blood signal flow. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out in terms of sensitivity to noise levels, estimation accuracy, and contrast preservation, on a digital perfusion phantom as well as in-vivo clinical subjects. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms the state-of-the-art algorithms with peak signal-to-noise ratio improved by 32%. It reduces the oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between the deficit and normal regions. PMID:25706579

  1. Optimizing focal plane electric field estimation for detecting exoplanets

    NASA Astrophysics Data System (ADS)

    Groff, T.; Kasdin, N. J.; Riggs, A. J. E.

    Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission we demonstrate an estimation scheme using a discrete time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress in including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent with the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise between the planets and speckles improves. Having established a purely focal plane based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
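    A bare-bones discrete-time Kalman filter cycle of the kind referred to above, written generically; the state, measurement model and noise matrices are placeholders rather than the focal-plane electric-field formulation used in the paper.

      import numpy as np

      def kalman_step(x, P, z, A, H, Q, R):
          # One predict/update cycle of a discrete-time Kalman filter.
          x_pred = A @ x                       # propagate the state estimate
          P_pred = A @ P @ A.T + Q             # propagate its covariance
          S = H @ P_pred @ H.T + R             # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Each new exposure refines the running estimate instead of starting over,
      # which is the mechanism for reducing the number of images needed.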

  2. Quantification of gait changes in subjects with visual height intolerance when exposed to heights

    PubMed Central

    Schniepp, Roman; Kugler, Günter; Wuehr, Max; Eckl, Maria; Huppert, Doreen; Huth, Sabrina; Pradhan, Cauchy; Jahn, Klaus; Brandt, Thomas

    2014-01-01

    Introduction: Visual height intolerance (vHI) manifests as instability at heights with apprehension of losing balance or falling. We investigated contributions of visual feedback and attention to gait performance of subjects with vHI. Materials and Methods: Sixteen subjects with vHI walked over a gait mat (GAITRite®) on a 15-m-high balcony and at ground level. Subjects walked at different speeds (slow, preferred, fast), during changes of the visual input (gaze straight/up/down; eyes open/closed), and while doing a cognitive task. An rmANOVA with the factors “height situation” and “gait condition” was performed. Subjects were also asked to estimate the height of the balcony over ground level. The individual estimates were used for correlations with the gait parameters. Results: Study participants walked slower at heights, with reduced cadence and stride length. The double support phases were increased (all p < 0.01), which correlated with the estimated height of the balcony (R² = 0.453, p < 0.05). These changes were still present when walking with upward gaze or closure of the eyes. Under the conditions of walking while looking down at the floor of the balcony, dual-task walking, and fast walking, there were no differences between the gait performance on the balcony and at ground level. Discussion: The observed gait changes are features of cautious gait control. Internal cognitive models involving anxiety play an important role in vHI; gait was similarly affected when visual perception of depth was prevented. Improvement during dual-task conditions at heights may be associated with a reduction of the anxiety level. Conclusion: It is conceivable that mental distraction by a dual task or increasing the walking speed might be useful recommendations to reduce the imbalance during locomotion in subjects susceptible to vHI. PMID:25538595

  3. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.

  4. ESTIMATION OF ADULT PATIENT DOSES FOR CHEST X-RAY EXAMINATIONS AND COMPARISON WITH DIAGNOSTIC REFERENCE LEVELS (DRLs).

    PubMed

    Bas Mor, H; Altinsoy, N; Söyler, I

    2018-05-08

    The aim of this study was to evaluate the radiation doses to patients during chest (posterior-anterior and lateral) examinations. The study was performed in three public hospitals of İstanbul province with a total of 300 adult patients. Entrance surface dose (ESD) measurements were conducted on computed radiography, digital radiography and screen-film systems. ESD was estimated using the International Atomic Energy Agency (IAEA) model and the Davies model, which are common indirect models. Results were compared with diagnostic reference levels from the European Commission, IAEA and National Radiological Protection Board. Although the results are compatible with the international diagnostic reference levels, they present variations between the hospitals. Dose variations for the same type of X-ray examination support the idea that further optimization is possible.

  5. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.

  6. A Portuguese value set for the SF-6D.

    PubMed

    Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Brazier, John; Rowen, Donna

    2010-08-01

    The SF-6D is a preference-based measure of health derived from the SF-36 that can be used for cost-effectiveness analysis using cost-per-quality adjusted life-year analysis. This study seeks to estimate system weights for the SF-6D for Portugal and to compare the results with the UK system weights. A sample of 55 health states defined by the SF-6D has been valued by a representative random sample of the Portuguese population, stratified by sex and age (n = 140), using the Standard Gamble (SG). Several models are estimated at both the individual and aggregate levels for predicting health-state valuations. Models with main effects, with interaction effects and with the constant forced to unity are presented. Random effects (RE) models are estimated using generalized least squares (GLS) regressions. Generalized estimating equations (GEE) are used to estimate RE models with the constant forced to unity. Estimations at the individual level were performed using 630 health-state valuations. Alternative functional forms are considered to account for the skewed distribution of health-state valuations. The models are analyzed in terms of their coefficients, overall fit, and their ability to predict the SG values. The RE models estimated using GLS and through GEE produce significant coefficients, which are robust across model specification. However, there are concerns regarding some inconsistent estimates, and so parsimonious consistent models were estimated. There is evidence of underprediction in some states associated with poor health. The results are consistent with the UK results. The models estimated provide preference-based quality of life weights for the Portuguese population when health status data have been collected using the SF-36. Although the sample was randomly drawn, findings should be treated with caution given the small sample size, even though the models have been estimated at the individual level.

  7. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background. Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445

  8. The psychophysics of workload - A second look at the relationship between subjective measures and performance

    NASA Technical Reports Server (NTRS)

    Gopher, D.; Chillag, N.; Arzi, N.

    1985-01-01

    Load estimates based upon subjective and performance indices were compared for subjects performing size matching and letter typing tasks under 6 levels of priorities, in single and dual task conditions. Each half of the group used a different task as reference in their subjective judgement. The results are interpreted to indicate that subjective measures are especially sensitive to voluntary allocation of attention and to the load on working memory. Association with performance is expected whenever these two factors are main determinants of performance efficiency, otherwise the two are likely to dissociate.

  9. Considerations for Estimating Electrode Performance in Li-Ion Cells

    NASA Technical Reports Server (NTRS)

    Bennett, William R.

    2012-01-01

    Advanced electrode materials with increased specific capacity and voltage performance are critical to the development of Li-ion batteries with increased specific energy and energy density. Although performance metrics for individual electrodes are critically important, a fundamental understanding of the interactions of electrodes in a full cell is essential to achieving the desired performance, and for establishing meaningful goals for electrode performance. This paper presents practical design considerations for matching positive and negative electrodes in a viable design. Methods for predicting cell-level discharge voltage, based on laboratory data for individual electrodes, are presented and discussed.
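    One simple way to see the cell-level voltage prediction described here is to put both half-cell potential curves on a common capacity axis and subtract; the curves below are crude illustrative shapes, not measured electrode data.

      import numpy as np

      q = np.linspace(0.0, 1.0, 101)           # shared extracted-capacity axis (Ah, illustrative)
      cathode_v = 4.1 - 0.6 * q                # positive electrode potential vs. Li (illustrative)
      anode_v   = 0.10 + 0.15 * q              # negative electrode potential vs. Li (illustrative)

      # Predicted full-cell discharge curve: difference of the half cells at the same capacity.
      cell_v = cathode_v - anode_v
      print(f"predicted average cell voltage: {cell_v.mean():.2f} V")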

  10. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach.

    PubMed

    Xu, Nan; Spreng, R Nathan; Doerschuk, Peter C

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the time-series correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain.
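    A toy sketch of the prediction-correlation idea (not the authors' implementation): predict one region's BOLD series from lagged samples of another by least squares, then correlate the prediction with the observed series; the lag order and synthetic signals are assumptions.

      import numpy as np

      def prediction_correlation(driver, target, order=3):
          # Correlate `target` with its prediction from lagged `driver` samples (simple FIR model).
          n = len(target)
          X = np.column_stack([driver[order - k - 1: n - k - 1] for k in range(order)])
          y = target[order:]
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return np.corrcoef(y, X @ coeffs)[0, 1]

      rng = np.random.default_rng(3)
      a = rng.normal(size=300)
      b = 0.7 * np.roll(a, 2) + 0.3 * rng.normal(size=300)    # b lags a by two samples
      print(prediction_correlation(a, b), prediction_correlation(b, a))   # asymmetric by design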

  11. Assessing the Reliability of Regional Depth-Duration-Frequency Equations for Gauged and Ungauged Sites

    NASA Astrophysics Data System (ADS)

    Castellarin, A.; Montanari, A.; Brath, A.

    2002-12-01

    The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km²) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations represents the main concern of the study and it is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.

  12. New data-driven estimation of terrestrial CO2 fluxes in Asia using a standardized database of eddy covariance measurements, remote sensing data, and support vector regression

    NASA Astrophysics Data System (ADS)

    Ichii, Kazuhito; Ueyama, Masahito; Kondo, Masayuki; Saigusa, Nobuko; Kim, Joon; Alberto, Ma. Carmelita; Ardö, Jonas; Euskirchen, Eugénie S.; Kang, Minseok; Hirano, Takashi; Joiner, Joanna; Kobayashi, Hideki; Marchesini, Luca Belelli; Merbold, Lutz; Miyata, Akira; Saitoh, Taku M.; Takagi, Kentaro; Varlagin, Andrej; Bret-Harte, M. Syndonia; Kitamura, Kenzo; Kosugi, Yoshiko; Kotani, Ayumi; Kumar, Kireet; Li, Sheng-Gong; Machimura, Takashi; Matsuura, Yojiro; Mizoguchi, Yasuko; Ohta, Takeshi; Mukherjee, Sandipan; Yanagi, Yuji; Yasuda, Yukio; Zhang, Yiping; Zhao, Fenghua

    2017-04-01

    The lack of a standardized database of eddy covariance observations has been an obstacle for data-driven estimation of terrestrial CO2 fluxes in Asia. In this study, we developed such a standardized database using 54 sites from various databases by applying consistent postprocessing for data-driven estimation of gross primary productivity (GPP) and net ecosystem CO2 exchange (NEE). Data-driven estimation was conducted by using a machine learning algorithm: support vector regression (SVR), with remote sensing data for 2000 to 2015 period. Site-level evaluation of the estimated CO2 fluxes shows that although performance varies in different vegetation and climate classifications, GPP and NEE at 8 days are reproduced (e.g., r2 = 0.73 and 0.42 for 8 day GPP and NEE). Evaluation of spatially estimated GPP with Global Ozone Monitoring Experiment 2 sensor-based Sun-induced chlorophyll fluorescence shows that monthly GPP variations at subcontinental scale were reproduced by SVR (r2 = 1.00, 0.94, 0.91, and 0.89 for Siberia, East Asia, South Asia, and Southeast Asia, respectively). Evaluation of spatially estimated NEE with net atmosphere-land CO2 fluxes of Greenhouse Gases Observing Satellite (GOSAT) Level 4A product shows that monthly variations of these data were consistent in Siberia and East Asia; meanwhile, inconsistency was found in South Asia and Southeast Asia. Furthermore, differences in the land CO2 fluxes from SVR-NEE and GOSAT Level 4A were partially explained by accounting for the differences in the definition of land CO2 fluxes. These data-driven estimates can provide a new opportunity to assess CO2 fluxes in Asia and evaluate and constrain terrestrial ecosystem models.
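    A minimal sketch of the support vector regression step described above, using scikit-learn on placeholder predictors and fluxes (the feature choices and values are assumptions, not the actual site data):

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(4)
      X = rng.normal(size=(200, 3))          # stand-ins for remote sensing predictors (e.g. NDVI, LST, radiation)
      gpp = 3.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 200)   # synthetic flux target

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      model.fit(X[:150], gpp[:150])
      print(f"held-out R^2 = {model.score(X[150:], gpp[150:]):.2f}")   # analogous to site-level evaluation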

  13. Lithostratigraphic, borehole-geophysical, hydrogeologic, and hydrochemical data from the East Bay Plain, Alameda County, California

    USGS Publications Warehouse

    Sneed, Michelle; Orlando, Patricia v.P.; Borchers, James W.; Everett, Rhett; Solt, Michael; McGann, Mary; Lowers, Heather; Mahan, Shannon

    2015-01-01

    Water-level and aquifer-system-compaction measurements, which indicated diurnal and seasonal fluctuations, were made at the Bayside Groundwater Project site. Slug tests were performed at the Bayside piezometers and nine pre-existing wells to estimate hydraulic conductivity.

  14. Percutaneous Trigger Finger Release: A Cost-effectiveness Analysis.

    PubMed

    Gancarczyk, Stephanie M; Jang, Eugene S; Swart, Eric P; Makhni, Eric C; Kadiyala, Rajendra Kumar

    2016-07-01

    Percutaneous trigger finger releases (TFRs) performed in the office setting are becoming more prevalent. This study compares the costs of in-hospital open TFRs, open TFRs performed in ambulatory surgical centers (ASCs), and in-office percutaneous releases. An expected-value decision-analysis model was constructed from the payer perspective to estimate total costs of the three competing treatment strategies for TFR. Model parameters were estimated based on the best available literature and were tested using multiway sensitivity analysis. Percutaneous TFR performed in the office and then, if needed, revised open TFR performed in the ASC, was the most cost-effective strategy, with an attributed cost of $603. The cost associated with an initial open TFR performed in the ASC was approximately 7% higher. Initial open TFR performed in the hospital was the least cost-effective, with an attributed cost nearly twice that of primary percutaneous TFR. An initial attempt at percutaneous TFR is more cost-effective than an open TFR. Currently, only about 5% of TFRs are performed in the office; therefore, a substantial opportunity exists for cost savings in the future. Decision model level II.
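    The expected-value arithmetic behind such a decision model is easy to sketch; the probabilities and costs below are placeholders, not the study's actual inputs.

      # Hypothetical inputs: per-setting costs and the probability that a percutaneous
      # release fails and must be revised with an open release in the ASC.
      COST_PERC_OFFICE = 300.0
      COST_OPEN_ASC    = 650.0
      COST_OPEN_HOSP   = 1200.0
      P_REVISION       = 0.10

      # Strategy 1: percutaneous release in the office, revised in the ASC if needed.
      expected_office_first = COST_PERC_OFFICE + P_REVISION * COST_OPEN_ASC

      # Strategies 2 and 3: initial open release in the ASC or the hospital.
      print(expected_office_first, COST_OPEN_ASC, COST_OPEN_HOSP)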

  15. Using exploratory data analysis to identify and predict patterns of human Lyme disease case clustering within a multistate region, 2010-2014.

    PubMed

    Hendricks, Brian; Mark-Carew, Miguella

    2017-02-01

    Lyme disease is the most commonly reported vectorborne disease in the United States. The objective of our study was to identify patterns of Lyme disease reporting after multistate inclusion to mitigate potential border effects. County-level human Lyme disease surveillance data were obtained from Kentucky, Maryland, Ohio, Pennsylvania, Virginia, and West Virginia state health departments. Rate smoothing and Local Moran's I were performed to identify clusters of reporting activity and identify spatial outliers. A logistic generalized estimating equation model was fitted to identify significant associations in disease clustering over time. Resulting analyses identified statistically significant (P=0.05) clusters of high reporting activity and trends over time. High reporting activity aggregated near border counties in high incidence states, while low reporting aggregated near shared county borders in non-high incidence states. Findings highlight the need for exploratory surveillance approaches to describe the extent to which state-level reporting affects accurate estimation of Lyme disease progression. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Genetic Diversity Analysis of Highly Incomplete SNP Genotype Data with Imputations: An Empirical Assessment

    PubMed Central

    Fu, Yong-Bi

    2014-01-01

    Genotyping by sequencing (GBS) recently has emerged as a promising genomic approach for assessing genetic diversity on a genome-wide scale. However, concerns remain about the unusually large proportion of missing observations in GBS genotype data. Although genotype imputation methods have been proposed to infer missing observations, little is known about the reliability of a genetic diversity analysis of GBS data with up to 90% of observations missing. Here we performed an empirical assessment of accuracy in genetic diversity analysis of highly incomplete single nucleotide polymorphism genotypes with imputations. Three large single-nucleotide polymorphism genotype data sets for corn, wheat, and rice were acquired; missing data with up to 90% of observations missing were randomly generated and then imputed with three map-independent imputation methods. Estimating heterozygosity and the inbreeding coefficient from original, missing, and imputed data revealed variable patterns of bias across the assessed levels of missingness and genotype imputation, but the estimation biases were smaller for missing data without genotype imputation. The estimates of genetic differentiation were rather robust up to 90% of missing observations but became substantially biased when missing genotypes were imputed. The estimates of topology accuracy for four representative samples of interested groups generally were reduced with increased levels of missing genotypes. Probabilistic principal component analysis-based imputation performed better in terms of topology accuracy than analyses of missing data without genotype imputation. These findings are not only significant for understanding the reliability of genetic diversity analysis with respect to large amounts of missing data and genotype imputation but also are instructive for performing a proper genetic diversity analysis of highly incomplete GBS or other genotype data. PMID:24626289
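
    The kind of missing-data experiment described above can be sketched in a few lines: simulate diploid SNP genotypes, mask 90% of calls at random, and compare a simple diversity estimate (expected heterozygosity) computed from the complete versus the incomplete matrix. This is an illustrative toy, not the paper's pipeline, and it omits the imputation step entirely.

```python
# Hedged sketch: how masking 90% of SNP genotype calls shifts a simple
# diversity estimate. Genotypes are coded 0/1/2 (count of alternate
# alleles); expected heterozygosity is 2p(1-p) averaged over loci,
# computed from non-missing calls only.
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_loci = 100, 2000
p_true = rng.uniform(0.05, 0.5, n_loci)                  # true allele frequencies
geno = rng.binomial(2, p_true, size=(n_ind, n_loci)).astype(float)

def expected_het(g):
    p_hat = np.nanmean(g, axis=0) / 2.0                  # per-locus allele frequency
    return np.nanmean(2 * p_hat * (1 - p_hat))

full = expected_het(geno)
mask = rng.uniform(size=geno.shape) < 0.9                # 90% missing completely at random
geno_missing = geno.copy()
geno_missing[mask] = np.nan
print("He (complete):", round(float(full), 4))
print("He (90% missing, no imputation):", round(float(expected_het(geno_missing)), 4))
```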

  17. Body Estimation and Physical Performance: Estimation of Lifting and Carrying from Fat-Free Mass.

    DTIC Science & Technology

    1998-10-30

    demanding Navy jobs is associated with greater rates of low back injuries (Vickers, Hervig and White, 1997). Vickers (personal communication) unpublished...adequate strength to reduce the risk of injury on the job to levels of less demanding jobs. The rate of injury on the job might be reduced if strength...of fatness. Individuals for whom body weight is elevated due to the presence of a large muscle mass (e.g., weightlifters) do not have the same health

  18. Initial retrieval sequence and blending strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pemwell, D.L.; Grenard, C.E.

    1996-09-01

    This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk, and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.

  19. Optimization of Skill Retention in the U.S. Army through Initial Training Analysis and Design. Volume 1.

    DTIC Science & Technology

    1983-05-01

    observed end-of-course scores for tasks trained to criterion. The MGA software was calibrated to provide retention estimates at two levels of...exceed the MGA estimates. Thirty-five out of forty, or 87.5%, of the tasks met this expectation. For these first trial data, MGA software predicts...Objective: The objective of this effort was to perform an operational test of the capability of MGA Skill Training and Retention (STAR©) software to

  20. Operative and consultative proportions of neurosurgical disease worldwide: estimation from the surgeon perspective.

    PubMed

    Dewan, Michael C; Rattani, Abbas; Baticulon, Ronnie E; Faruque, Serena; Johnson, Walter D; Dempsey, Robert J; Haglund, Michael M; Alkire, Blake C; Park, Kee B; Warf, Benjamin C; Shrime, Mark G

    2018-05-11

    OBJECTIVE The global magnitude of neurosurgical disease is unknown. The authors sought to estimate the surgical and consultative proportion of diseases commonly encountered by neurosurgeons, as well as surgeon case volume and perceived workload. METHODS An electronic survey was sent to 193 neurosurgeons previously identified via a global surgeon mapping initiative. The survey consisted of three sections aimed at quantifying surgical incidence of neurological disease, consultation incidence, and surgeon demographic data. Surgeons were asked to estimate the proportion of 11 neurological disorders that, in an ideal world, would indicate either neurosurgical operation or neurosurgical consultation. Respondent surgeons indicated their confidence level in each estimate. Demographic and surgical practice characteristics, including case volume and perceived workload, were also captured. RESULTS Eighty-five neurosurgeons from 57 countries, representing all WHO regions and World Bank income levels, completed the survey. Neurological conditions estimated to warrant neurosurgical consultation with the highest frequency were brain tumors (96%), spinal tumors (95%), hydrocephalus (94%), and neural tube defects (92%), whereas stroke (54%), central nervous system infection (58%), and epilepsy (40%) carried the lowest frequency. Similarly, surgery was deemed necessary for an average of 88% of cases of hydrocephalus, 82% of spinal tumors and neural tube defects, and 78% of brain tumors. Degenerative spine disease (42%), stroke (31%), and epilepsy (24%) were found to warrant surgical intervention less frequently. Confidence levels were consistently high among respondents (lower quartile > 70/100 for 90% of questions), and estimates did not vary significantly across WHO regions or among income levels. Surgeons reported performing a mean of 245 cases annually (median 190). On a 100-point scale indicating a surgeon's perceived workload (0 = not busy, 100 = overworked), respondents selected a mean workload of 75 (median 79). CONCLUSIONS With a high level of confidence and strong concordance, neurosurgeons estimated that the vast majority of patients with central nervous system tumors, hydrocephalus, or neural tube defects mandate neurosurgical involvement. A significant proportion of other common neurological diseases, such as traumatic brain and spinal injury, vascular anomalies, and degenerative spine disease, demand the attention of a neurosurgeon, whether via operative intervention or expert counsel. These estimates facilitate measurement of the expected annual volume of neurosurgical disease globally.

  1. Use of image analysis to estimate anthocyanin and UV-excited fluorescent phenolic compound levels in strawberry fruit

    PubMed Central

    Yoshioka, Yosuke; Nakayama, Masayoshi; Noguchi, Yuji; Horie, Hideki

    2013-01-01

    Strawberry is rich in anthocyanins, which are responsible for the red color, and contains several colorless phenolic compounds. Among the colorless phenolic compounds, some, such as hydroxycinnamic acid derivatives, emit blue-green fluorescence when excited with ultraviolet (UV) light. Here, we investigated the effectiveness of image analyses for estimating the levels of anthocyanins and UV-excited fluorescent phenolic compounds in fruit. The fruit skin and cut surface of 12 cultivars were photographed under visible and UV light conditions; colors were evaluated based on the color components of images. The levels of anthocyanins and UV-excited fluorescent compounds in each fruit were also evaluated by spectrophotometric and high performance liquid chromatography (HPLC) analyses, respectively, and relationships between these levels and the image data were investigated. Red depth of the fruits differed greatly among the cultivars, and anthocyanin content was well estimated based on the color values of the cut surface images. Strong UV-excited fluorescence was observed on the cut surfaces of several cultivars, and the grayscale values of the UV-excited fluorescence images were markedly correlated with the levels of those fluorescent compounds as evaluated by HPLC analysis. These results indicate that image analyses can select promising genotypes rich in anthocyanins and fluorescent phenolic compounds. PMID:23853516

  2. Does Food Insecurity at Home Affect Non-Cognitive Performance at School? A Longitudinal Analysis of Elementary Student Classroom Behavior

    ERIC Educational Resources Information Center

    Howard, Larry L.

    2011-01-01

    This paper estimates models of the transitional effects of food insecurity experiences on children's non-cognitive performance in school classrooms using a panel of 4710 elementary students enrolled in 1st, 3rd, and 5th grade (1999-2003). In addition to an extensive set of child and household-level characteristics, we use information on U.S.…

  3. Microarray image analysis: background estimation using quantile and morphological filters.

    PubMed

    Bengtsson, Anders; Bengtsson, Henrik

    2006-02-28

    In a microarray experiment the difference in expression between genes on the same slide is up to 10³-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity, the scanned signal consists of different kinds of noise referred to as background. In order to assess the true spot intensity, background must be subtracted. The standard approach to estimate background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248, compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate, with slightly more bias. The large variability is because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to those of the rank filter. The performance of more advanced rank filters is equal to that of the best region-based methods. However, in order to get unbiased estimates, these filters have to be implemented with great care. The performance of morphological opening is in general poor, with a substantial spatially dependent bias.
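
    The two families of background estimators compared above can be sketched with generic image filters. The window sizes, image, and "spots" below are arbitrary illustrative choices, not the paper's implementation or data.

```python
# Illustrative sketch: estimating a smooth microarray background with a
# morphological opening versus a low-quantile (rank) filter from scipy.ndimage.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
background = 200 + 50 * np.sin(np.linspace(0, 3 * np.pi, 256))[None, :]  # slowly varying truth
image = background + rng.normal(0, 10, (256, 256))
image[::16, ::16] += 5000                                  # bright "spots" on a regular grid

bg_opening = ndimage.grey_opening(image, size=(31, 31))                        # morphological opening
bg_quantile = ndimage.percentile_filter(image, percentile=10, size=(31, 31))   # rank (quantile) filter

for name, est in [("opening", bg_opening), ("10% quantile", bg_quantile)]:
    print(name, "mean bias:", round(float((est - background).mean()), 1))
```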

  4. Plume Tracker: Interactive mapping of volcanic sulfur dioxide emissions with high-performance radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Realmuto, Vincent J.; Berk, Alexander

    2016-11-01

    We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300 × improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ± 20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ± 50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.

  5. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
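
    A stripped-down version of the integrated Gaussian process idea can be written directly with the linear-Gaussian conditioning formulas: place a GP prior on the rate, model each observation as the (discretized) integral of that rate plus noise, and read off the posterior mean rate. The sketch below is only that skeleton; it ignores the errors-in-variables age uncertainties and the GIA correction described above, and all numbers are synthetic.

```python
# Minimal sketch of an integrated Gaussian process (not the authors' EIV model).
# Rate r ~ GP(0, k); sea level y(t) = integral of r plus noise, discretized as y = A r + e.
import numpy as np

rng = np.random.default_rng(3)
grid = np.arange(0, 2001, 20.0)                 # rate grid (years)
dt = 20.0
t_obs = np.sort(rng.uniform(0, 2000, 60))       # ages of sea-level observations

# integration matrix: y(t_i) ~= dt * sum of rate values at grid points below t_i
A = (grid[None, :] <= t_obs[:, None]).astype(float) * dt

def sq_exp(a, b, amp=2.0, length=400.0):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

rate_true = 1.0 + 0.8 * np.sin(grid / 300.0)          # synthetic "true" rate (mm/yr)
y = A @ rate_true + rng.normal(0, 20.0, t_obs.size)   # noisy sea-level observations (mm)

K = sq_exp(grid, grid)                          # GP prior covariance of the rate
S = A @ K @ A.T + 20.0**2 * np.eye(t_obs.size)  # covariance of the observations
rate_post = K @ A.T @ np.linalg.solve(S, y)     # posterior mean rate on the grid
print("posterior mean rate at t = 1000 yr:", round(float(np.interp(1000.0, grid, rate_post)), 2))
```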

  6. Artifact removal in the context of group ICA: a comparison of single-subject and group approaches

    PubMed Central

    Du, Yuhui; Allen, Elena A.; He, Hao; Sui, Jing; Wu, Lei; Calhoun, Vince D.

    2018-01-01

    Independent component analysis (ICA) has been widely applied to identify intrinsic brain networks from fMRI data. Group ICA computes group-level components from all data and subsequently estimates individual-level components to recapture inter-subject variability. However, the best approach to handle artifacts, which may vary widely among subjects, is not yet clear. In this work, we study and compare two ICA approaches for artifacts removal. One approach, recommended in recent work by the Human Connectome Project, first performs ICA on individual subject data to remove artifacts, and then applies a group ICA on the cleaned data from all subjects. We refer to this approach as Individual ICA based artifacts Removal Plus Group ICA (IRPG). A second proposed approach, called Group Information Guided ICA (GIG-ICA), performs ICA on group data, then removes the group-level artifact components, and finally performs subject-specific ICAs using the group-level non-artifact components as spatial references. We used simulations to evaluate the two approaches with respect to the effects of data quality, data quantity, variable number of sources among subjects, and spatially unique artifacts. Resting-state test-retest datasets were also employed to investigate the reliability of functional networks. Results from simulations demonstrate GIG-ICA has greater performance compared to IRPG, even in the case when single-subject artifacts removal is perfect and when individual subjects have spatially unique artifacts. Experiments using test-retest data suggest that GIG-ICA provides more reliable functional networks. Based on high estimation accuracy, ease of implementation, and high reliability of functional networks, we find GIG-ICA to be a promising approach. PMID:26859308

  7. Comparing the Advanced REACH Tool's (ART) Estimates With Switzerland's Occupational Exposure Data.

    PubMed

    Savic, Nenad; Gasic, Bojan; Schinkel, Jody; Vernez, David

    2017-10-01

    The Advanced REACH Tool (ART) is the most sophisticated tool used for evaluating exposure levels under the European Union's Registration, Evaluation, Authorisation and restriction of CHemicals (REACH) regulations. ART provides estimates at different percentiles of exposure and within different confidence intervals (CIs). However, its performance has been tested on only a limited amount of exposure data. The present study compares ART's estimates with exposure measurements collected over many years in Switzerland. Measurements from 584 cases of exposure to vapours, mists, powders, and abrasive dusts (wood/stone and metal) were extracted from a Swiss database. The corresponding exposures at the 50th and 90th percentiles were calculated in ART. To characterize the model's performance, the 90% CI of the estimates was considered. ART's performance at the 50th percentile was found to be insufficiently conservative only for exposure to wood/stone dusts, whereas the 90th percentile showed sufficient conservatism for all the types of exposure processed. However, a trend was observed in the residuals, with ART overestimating lower exposures and underestimating higher ones. The median estimate was more precise, however, and the majority (≥60%) of real-world measurements were within a factor of 10 of ART's estimates. We provide recommendations based on the results and suggest further, more comprehensive, investigations. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  8. Phenological features for winter rapeseed identification in Ukraine using satellite data

    NASA Astrophysics Data System (ADS)

    Kravchenko, Oleksiy

    2014-05-01

    Winter rapeseed is one of the major oilseed crops in Ukraine; it is characterized by high profitability and is often grown in violation of crop rotation requirements, leading to soil degradation. Therefore, rapeseed identification using satellite data is a promising direction for operational estimation of crop acreage and rotation control. The crop acreage of rapeseed is about 0.5-3% of the total area of Ukraine, which poses a major problem for identification using satellite data [1]. While winter rapeseed could be classified using biomass features observed during autumn vegetation, these features are quite unstable due to field-to-field differences in planting dates as well as spatial and temporal heterogeneity in soil moisture availability. For this reason, autumn biomass features can be used only locally (at the NUTS-3 level) and are not suitable for large-scale, countrywide crop identification. We propose to use crop parameters at the flowering phenological stage for crop identification and present a method for parameter estimation using time series of moderate-resolution data. Rapeseed flowering can be observed as a bell-shaped peak in the red reflectance time series. However, the flowering period observable by satellite is only about two weeks, which is quite short given inevitable cloud coverage. Thus we need daily time series to resolve the flowering peak, which limits us to moderate-resolution data. We used daily atmospherically corrected MODIS data from the Terra and Aqua satellites within the 90-160 DOY period to calculate the features. An empirical BRDF correction is used to minimize angular effects. We used Gaussian process regression (GPR) for temporal interpolation to minimize errors due to residual cloud coverage, atmospheric correction, and mixed-pixel problems. We estimate 12 parameters for each time series: red and near-infrared (NIR) reflectance and the timing at four stages (before and after flowering, at peak flowering, and at the maximum NIR level). We used a support vector machine for classification. The most relevant feature for classification is flowering-peak timing, followed by flowering-peak magnitude. The dependence of the peak time on latitude, used as a sole feature, can reject 90% of non-rapeseed pixels, which greatly reduces the imbalance of the classification problem. To assess the accuracy of our approach, we performed a stratified area-frame sampling survey in Odessa region (NUTS-2 level) in 2013. The omission error is about 12.6%, while the commission error is higher, at about 22%. This is explained by the high-viewing-angle composition criterion used in our approach to mitigate the cloud coverage problem. However, the errors are spatially stable and can be corrected by a regression technique. Doing so, we performed area estimation for Odessa region using a regression estimator and obtained good accuracy, with a 4.6% error (1σ). [1] Gallego, F.J., et al., Efficiency assessment of using satellite data for crop area estimation in Ukraine. Int. J. Appl. Earth Observ. Geoinf. (2014), http://dx.doi.org/10.1016/j.jag.2013.12.013
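
    The temporal-interpolation step lends itself to a compact sketch: fit a Gaussian process regression to a cloud-gapped red-reflectance series and extract the flowering-peak timing and magnitude as classification features. This is not the authors' processing chain (which works on BRDF-corrected MODIS composites); the dates, kernel settings, and synthetic reflectances below are placeholders.

```python
# Hedged sketch: GPR smoothing of a gappy red-reflectance time series and
# extraction of the flowering-peak timing and magnitude.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
doy = np.arange(90, 161)                                            # day of year 90-160
red_true = 0.05 + 0.06 * np.exp(-0.5 * ((doy - 130) / 6.0) ** 2)    # bell-shaped flowering peak
keep = rng.uniform(size=doy.size) > 0.4                             # ~40% of days lost to clouds
y_obs = red_true[keep] + rng.normal(0, 0.005, keep.sum())

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.005 ** 2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(doy[keep].reshape(-1, 1), y_obs)
red_smooth = gpr.predict(doy.reshape(-1, 1))                        # daily interpolated series

peak_doy = int(doy[np.argmax(red_smooth)])
print("flowering-peak timing (DOY):", peak_doy, "magnitude:", round(float(red_smooth.max()), 3))
```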

  9. Study of biological communities subject to imperfect detection: Bias and precision of community N-mixture abundance models in small-sample situations

    USGS Publications Warehouse

    Yamaura, Yuichi; Kery, Marc; Royle, Andy

    2016-01-01

    Community N-mixture abundance models for replicated counts provide a powerful and novel framework for drawing inferences related to species abundance within communities subject to imperfect detection. To assess the performance of these models, and to compare them to related community occupancy models in situations with marginal information, we used simulation to examine the effects of mean abundance (λ: 0.1, 0.5, 1, 5), detection probability (p: 0.1, 0.2, 0.5), and number of sampling sites (n_site: 10, 20, 40) and visits (n_visit: 2, 3, 4) on the bias and precision of species-level parameters (mean abundance and covariate effect) and a community-level parameter (species richness). Bias and imprecision of estimates decreased when any of the four variables (λ, p, n_site, n_visit) increased. Detection probability p was most important for the estimates of mean abundance, while λ was most influential for covariate effect and species richness estimates. For all parameters, increasing n_site was more beneficial than increasing n_visit. Minimal conditions for obtaining adequate performance of community abundance models were n_site ≥ 20, p ≥ 0.2, and λ ≥ 0.5. At lower abundance, the performance of community abundance and community occupancy models as species richness estimators was comparable. We then used additive partitioning analysis to reveal that raw species counts can overestimate β diversity both of species richness and the Shannon index, while community abundance models yielded better estimates. Community N-mixture abundance models thus have great potential for use with community ecology or conservation applications provided that replicated counts are available.
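
    The simulation design above rests on a simple data-generating model: latent site abundance is Poisson with mean λ, and each visit produces a binomial count with detection probability p. The toy sketch below simulates one species under one of the study's scenarios; it does not fit the hierarchical model itself, which is typically done in dedicated software.

```python
# Minimal sketch of the data-generating model behind an N-mixture analysis
# (not the authors' code): Poisson latent abundance, Binomial(N, p) counts.
import numpy as np

rng = np.random.default_rng(5)
lam, p, n_site, n_visit = 0.5, 0.2, 20, 3       # one of the scenarios examined in the study
N = rng.poisson(lam, n_site)                    # latent true abundance at each site
counts = rng.binomial(N[:, None], p, size=(n_site, n_visit))  # replicated counts per site

print("true total abundance:", int(N.sum()))
print("naive index (sum of max counts):", int(counts.max(axis=1).sum()))
```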

  10. Estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin in a group of Swedish diabetics.

    PubMed

    Ilbäck, N-G; Alzin, M; Jahrl, S; Enghardt-Barbieri, H; Busk, L

    2003-02-01

    Few sweetener intake studies have been performed on the general population and only one study has been specifically designed to investigate diabetics and children. This report describes a Swedish study on the estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin by children (0-15 years) and adult male and female diabetics (types I and II) of various ages (16-90 years). Altogether, 1120 participants were asked to complete a questionnaire about their sweetener intake. The response rate (71%, range 59-78%) was comparable across age and gender groups. The most consumed 'light' foodstuffs were diet soda, cider, fruit syrup, table powder, table tablets, table drops, ice cream, chewing gum, throat lozenges, sweets, yoghurt and vitamin C. The major sources of sweetener intake were beverages and table powder. About 70% of the participants, equally distributed across all age groups, read the manufacturer's specifications of the food products' content. The estimated intakes showed that neither men nor women exceeded the ADI for acesulfame-K; however, using worst-case calculations, high intakes were found in young children (169% of ADI). In general, the aspartame intake was low. Children had the highest estimated (worst case) intake of cyclamate (317% of ADI). Children's estimated intake of saccharin only slightly exceeded the ADI at the 5% level for fruit syrup. Children had an unexpectedly high intake of tabletop sweeteners, which, in Sweden, are normally based on cyclamate. The study was performed during two winter months, when it can be assumed that the intake of sweeteners was lower than during warm summer months. Thus, the present study probably underestimates the average intake on a yearly basis. However, our worst-case calculations based on maximum permitted levels were performed on each individual sweetener, although exposure is probably relatively evenly distributed among all sweeteners, except for cyclamate-containing table sweeteners.

  11. Comparison of Satellite Data with Ground-Based Measurements for Assessing Local Distributions of PM2.5 in Northeast Mexico.

    NASA Astrophysics Data System (ADS)

    Carmona, J.; Mendoza, A.; Lozano, D.; Gupta, P.; Mejia, G.; Rios, J.; Hernández, I.

    2017-12-01

    Estimating ground-level PM2.5 from satellite-derived Aerosol Optical Depth (AOD) through statistical models is a promising method to evaluate the spatial and temporal distribution of PM2.5 in regions where there are no or few ground-based observations, such as Latin America. Although PM concentrations are most accurately measured using ground-based instrumentation, the spatial coverage is too sparse to determine local and regional variations in PM. AOD satellite data offer the opportunity to overcome the spatial limitation of ground-based measurements. However, estimating PM surface concentrations from AOD satellite data is challenging, since multiple factors can affect the relationship between total-column AOD and the surface concentration of PM. In this study, an assembled multiple linear regression model (MLR) and a neural network model (NN) were developed to estimate the relationship between AOD and ground concentrations of PM2.5 within the Monterrey Metropolitan Area (MMA). The MMA is located in northeast Mexico and is the third most populated urban area in the country. Episodes of high PM pollution levels are frequent throughout the year at the MMA. Daily averages of meteorological and air quality parameters were determined from data recorded at 5 monitoring sites of the MMA air quality monitoring network. Daily AOD data were retrieved from the MODIS sensor onboard the Aqua satellite. Overall, the best performance of the models was obtained using AOD at 550 nm from the MYD04_3k product in combination with ground-based temperature, relative humidity, wind speed, and wind direction data. The MLR achieved a correlation coefficient of R = 0.6 and a bias of -6%. The NN showed better performance than the MLR, with R = 0.75 and a bias of -4%. The results confirmed that satellite-derived AOD in combination with meteorological fields may allow estimation of local PM2.5 distributions.
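
    A minimal version of the multiple-linear-regression step can be sketched as follows; the predictor values are synthetic stand-ins (not the MMA monitoring data), and the two skill scores printed at the end mirror the correlation coefficient and percent bias reported above.

```python
# Hedged sketch: regressing ground PM2.5 on satellite AOD plus surface
# meteorology, then scoring with correlation and percent bias.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 400
aod = rng.gamma(2.0, 0.15, n)                    # satellite AOD at 550 nm (unitless)
temp = rng.normal(25, 5, n)                      # air temperature, deg C
rh = rng.uniform(20, 90, n)                      # relative humidity, %
ws = rng.gamma(2.0, 1.5, n)                      # wind speed, m/s
pm25 = 60 * aod + 0.3 * rh - 2.0 * ws + rng.normal(0, 8, n)   # synthetic PM2.5, ug/m^3

X = np.column_stack([aod, temp, rh, ws])
X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]                 # correlation coefficient
pbias = 100 * (pred - y_te).sum() / y_te.sum()    # percent bias
print(f"R = {r:.2f}, percent bias = {pbias:.1f}%")
```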

  12. Rate variation and estimation of divergence times using strict and relaxed clocks.

    PubMed

    Brown, Richard P; Yang, Ziheng

    2011-09-26

    Understanding causes of biological diversity may be greatly enhanced by knowledge of divergence times. Strict and relaxed clock models are used in Bayesian estimation of divergence times. We examined whether: i) strict clock models are generally more appropriate in shallow phylogenies where rate variation is expected to be low, ii) the likelihood ratio test of the clock (LRT) reliably informs which model is appropriate for dating divergence times. Strict and relaxed models were used to analyse sequences simulated under different levels of rate variation. Published shallow phylogenies (Black bass, Primate-sucking lice, Podarcis lizards, Gallotiinae lizards, and Caprinae mammals) were also analysed to determine natural levels of rate variation relative to the performance of the different models. Strict clock analyses performed well on data simulated under the independent rates model when the standard deviation of log rate on branches, σ, was low (≤ 0.1), but were inappropriate when σ>0.1 (95% of rates fall within 0.0082-0.0121 subs/site/Ma when σ = 0.1, for a mean rate of 0.01). The independent rates relaxed clock model performed well at all levels of rate variation, although posterior intervals on times were significantly wider than for the strict clock. The strict clock is therefore superior when rate variation is low. The performance of a correlated rates relaxed clock model was similar to the strict clock. Increased numbers of independent loci led to slightly narrower posteriors under the relaxed clock while older root ages provided proportionately narrower posteriors. The LRT had low power for σ = 0.01-0.1, but high power for σ = 0.5-2.0. Posterior means of σ2 were useful for assessing rate variation in published datasets. Estimates of natural levels of rate variation ranged from 0.05-3.38 for different partitions. Differences in divergence times between relaxed and strict clock analyses were greater in two datasets with higher σ2 for one or more partitions, supporting the simulation results. The strict clock can be superior for trees with shallow roots because of low levels of rate variation between branches. The LRT allows robust assessment of suitability of the clock model as does examination of posteriors on σ2.

  13. Performance of Chronic Kidney Disease Epidemiology Collaboration Creatinine-Cystatin C Equation for Estimating Kidney Function in Cirrhosis

    PubMed Central

    Mindikoglu, Ayse L.; Dowling, Thomas C.; Weir, Matthew R.; Seliger, Stephen L.; Christenson, Robert H.; Magder, Laurence S.

    2013-01-01

    Conventional creatinine-based glomerular filtration rate (GFR) equations are insufficiently accurate for estimating GFR in cirrhosis. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) recently proposed an equation to estimate GFR in subjects without cirrhosis using both serum creatinine and cystatin C levels. Performance of the new CKD-EPI creatinine-cystatin C equation (2012) was superior to previous creatinine- or cystatin C-based GFR equations. To evaluate the performance of the CKD-EPI creatinine-cystatin C equation in subjects with cirrhosis, we compared it to GFR measured by non-radiolabeled iothalamate plasma clearance (mGFR) in 72 subjects with cirrhosis. We compared the “bias”, “precision” and “accuracy” of the new CKD-EPI creatinine-cystatin C equation to that of 24-hour urinary creatinine clearance (CrCl), Cockcroft-Gault (CG) and previously reported creatinine- and/or cystatin C-based GFR-estimating equations. Accuracy of CKD-EPI creatinine-cystatin C equation as quantified by root mean squared error of difference scores [differences between mGFR and estimated GFR (eGFR) or between mGFR and CrCl, or between mGFR and CG equation for each subject] (RMSE=23.56) was significantly better than that of CrCl (37.69, P=0.001), CG (RMSE=36.12, P=0.002) and GFR-estimating equations based on cystatin C only. Its accuracy as quantified by percentage of eGFRs that differed by greater than 30% with respect to mGFR was significantly better compared to CrCl (P=0.024), CG (P=0.0001), 4-variable MDRD (P=0.027) and CKD-EPI creatinine 2009 (P=0.012) equations. However, for 23.61% of the subjects, GFR estimated by CKD-EPI creatinine-cystatin C equation differed from the mGFR by more than 30%. CONCLUSIONS The diagnostic performance of CKD-EPI creatinine-cystatin C equation (2012) in patients with cirrhosis was superior to conventional equations in clinical practice for estimating GFR. However, its diagnostic performance was substantially worse than reported in subjects without cirrhosis. PMID:23744636
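
    The two headline metrics in this abstract are easy to make concrete. The sketch below computes the root mean squared error of the difference scores and the percentage of estimates deviating from measured GFR by more than 30%, on synthetic numbers (not the study's 72 subjects).

```python
# Simple sketch of the accuracy metrics used above: RMSE of (mGFR - eGFR)
# and the percentage of eGFR values more than 30% away from measured GFR.
import numpy as np

rng = np.random.default_rng(7)
mGFR = rng.uniform(20, 120, 72)                          # measured GFR, mL/min/1.73 m^2
eGFR = mGFR * rng.normal(1.0, 0.18, 72)                  # a hypothetical estimating equation

rmse = np.sqrt(np.mean((mGFR - eGFR) ** 2))
pct_outside_30 = 100 * np.mean(np.abs(eGFR - mGFR) / mGFR > 0.30)
print("RMSE:", round(float(rmse), 1), "| % of eGFR >30% off mGFR:", round(float(pct_outside_30), 1))
```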

  14. Time synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.; Huth, G. K.

    1981-01-01

    In a frequency-hopped (FH) multiple-frequency-shift-keyed (MFSK) communication system, frequency hopping causes the necessary frequency transitions for time synchronization estimation rather than the data sequence as in the conventional (nonfrequency-hopped) system. Making use of this observation, this paper presents a fine synchronization (i.e., time errors of less than a hop duration) technique for estimation of FH timing. The performance degradation due to imperfect FH time synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of hops used in the FH timing estimate.

  15. Bio-inspired sensing and control for disturbance rejection and stabilization

    NASA Astrophysics Data System (ADS)

    Gremillion, Gregory; Humbert, James S.

    2015-05-01

    The successful operation of small unmanned aircraft systems (sUAS) in dynamic environments demands robust stability in the presence of exogenous disturbances. Flying insects are sensor-rich platforms, with highly redundant arrays of sensors distributed across the insect body that are integrated to extract rich information with diminished noise. This work presents a novel sensing framework in which measurements from an array of accelerometers distributed across a simulated flight vehicle are linearly combined to directly estimate the applied forces and torques with improvements in SNR. In simulation, the estimation performance is quantified as a function of sensor noise level, position estimate error, and sensor quantity.
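
    The core idea, linearly combining distributed accelerometer readings to recover net force and torque, can be sketched for a planar rigid body with known mass and inertia, neglecting the centripetal (omega-squared) terms. This is only an illustration of the least-squares combination, not the paper's sensor model or vehicle; positions, noise levels, and loads are made up.

```python
# Hedged sketch: each accelerometer on a planar rigid body reads a linear
# function of the net force (Fx, Fy) and torque tau; stacking sensors gives
# an overdetermined system whose least-squares solution estimates the load.
import numpy as np

rng = np.random.default_rng(8)
m, I = 1.5, 0.02                                  # mass (kg) and inertia (kg m^2), hypothetical
r = rng.uniform(-0.15, 0.15, size=(12, 2))        # sensor positions on the body (m)

def design_matrix(r, m, I):
    rows = []
    for rx, ry in r:
        rows.append([1.0 / m, 0.0, -ry / I])      # x-axis acceleration equation
        rows.append([0.0, 1.0 / m,  rx / I])      # y-axis acceleration equation
    return np.array(rows)

A = design_matrix(r, m, I)
true_x = np.array([2.0, -1.0, 0.05])              # Fx (N), Fy (N), tau (N m)
meas = A @ true_x + rng.normal(0, 0.2, A.shape[0])  # noisy accelerometer readings

est, *_ = np.linalg.lstsq(A, meas, rcond=None)    # redundant sensors average out the noise
print("estimated [Fx, Fy, tau]:", np.round(est, 3))
```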

  16. Robust Maneuvering Envelope Estimation Based on Reachability Analysis in an Optimal Control Formulation

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John

    2013-01-01

    This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied to an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.

  17. Regional specific groundwater arsenic levels and neuropsychological functioning: a cross-sectional study.

    PubMed

    Edwards, Melissa; Johnson, Leigh; Mauer, Cortney; Barber, Robert; Hall, James; O'Bryant, Sid

    2014-01-01

    The purpose of the study was to examine the link between geographic information system (GIS)-estimated regional specific groundwater arsenic levels and neuropsychological functioning in a sample of individuals with and without cognitive impairment. This cross-sectional study analyzed data from 1390 participants (733 Alzheimer's disease, 127 Mild Cognitive Impairment, and 530 with normal cognition) enrolled in the Texas Alzheimer's Research and Care Consortium. GIS analyses were used to estimate regional specific groundwater arsenic concentrations using the Environmental Systems Research Institute and arsenic concentrations from the Texas Water Development Board. In the full cohort, regional specific arsenic concentrations were positively associated with language abilities (p = 0.008), but associated with poorer verbal memory, immediate (p = 0.008), and delayed (p < 0.001), as well as poorer visual memory, immediate (p = 0.02), and delayed (p < 0.001). The findings varied by diagnostic category, with arsenic being related to cognition most prominently among mild cognitive impairment cases. Overall, estimated regional specific groundwater arsenic levels were negatively associated with neuropsychological performance.

  18. Regional specific groundwater arsenic levels and neuropsychological functioning: a cross-sectional study

    PubMed Central

    Edwards, Melissa; Johnson, Leigh; Mauer, Cortney; Barber, Robert; Hall, James; O'Bryant, Sid

    2014-01-01

    Background The purpose of the study was to examine the link between GIS-estimated regional specific groundwater levels and neuropsychological functioning in a sample of individuals with and without cognitive impairment. Methods This cross-sectional study design analyzed data from 1390 participants (733 Alzheimer's disease, 127 Mild Cognitive Impairment, and 530 with normal cognition) enrolled in the Texas Alzheimer's Research and Care Consortium. Geographic information systems analyses were used to estimate regional specific groundwater arsenic concentrations using the Environmental Systems Research Institute and arsenic concentrations from the Texas Water Development Board. Results In the full cohort, regional specific arsenic concentrations were positively associated with language abilities (p=0.008), but associated with poorer verbal memory, immediate (p=0.008) and delayed (p<0.001) as well as poorer visual memory, immediate (p=0.02) and delayed (p<0.001). The findings varied by diagnostic category with arsenic being related with cognition most prominently among MCI cases. Conclusions Overall, estimated regional specific groundwater arsenic levels were negatively associated with neuropsychological performance. PMID:24506178

  19. Secondary task for full flight simulation incorporating tasks that commonly cause pilot error: Time estimation

    NASA Technical Reports Server (NTRS)

    Rosch, E.

    1975-01-01

    The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates is associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.

  20. Visual scanning behavior and pilot workload

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.

    1982-01-01

    This paper describes an experimental paradigm and a set of results which demonstrate a relationship between the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary with the difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased with the estimated skill level of the pilots, with novices being affected by the loading task much more than experts. The results suggest that visual scanning of instruments in a controlled task may be an indicator of both workload and skill.

  1. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  2. The Impact of Partial Measurement Invariance on Testing Moderation for Single and Multi-Level Data

    PubMed Central

    Hsiao, Yu-Yu; Lai, Mark H. C.

    2018-01-01

    The moderation effect is a commonly used concept in the field of social and behavioral science. Several studies regarding the implications of moderation effects have been conducted; however, little is known about how partial measurement invariance influences the properties of tests for moderation effects when categorical moderators are used. Additionally, whether the impact is the same across single and multilevel data is still unknown. Hence, the purpose of the present study is twofold: (a) To investigate the performance of the moderation test in single-level studies when measurement invariance does not hold; (b) To examine whether unique features of multilevel data, such as intraclass correlation (ICC) and number of clusters, influence the effect of measurement non-invariance on the performance of tests for moderation. Simulation results indicated that falsely assuming measurement invariance led to biased estimates, inflated Type I error rates, and either gains or losses in power (depending on simulation conditions) for the test of moderation effects. Such patterns were more salient as sample size and the number of non-invariant items increased, for both single- and multi-level data. With multilevel data, the cluster size seemed to have a larger impact than the number of clusters when falsely assuming measurement invariance in the moderation estimation. ICC was trivially related to the moderation estimates. Overall, when testing moderation effects with categorical moderators, employing a model that accounts for the measurement (non)invariance structure of the predictor and/or the outcome is recommended. PMID:29867692

  3. The Impact of Partial Measurement Invariance on Testing Moderation for Single and Multi-Level Data.

    PubMed

    Hsiao, Yu-Yu; Lai, Mark H C

    2018-01-01

    The moderation effect is a commonly used concept in the field of social and behavioral science. Several studies regarding the implications of moderation effects have been conducted; however, little is known about how partial measurement invariance influences the properties of tests for moderation effects when categorical moderators are used. Additionally, whether the impact is the same across single and multilevel data is still unknown. Hence, the purpose of the present study is twofold: (a) To investigate the performance of the moderation test in single-level studies when measurement invariance does not hold; (b) To examine whether unique features of multilevel data, such as intraclass correlation (ICC) and number of clusters, influence the effect of measurement non-invariance on the performance of tests for moderation. Simulation results indicated that falsely assuming measurement invariance led to biased estimates, inflated Type I error rates, and either gains or losses in power (depending on simulation conditions) for the test of moderation effects. Such patterns were more salient as sample size and the number of non-invariant items increased, for both single- and multi-level data. With multilevel data, the cluster size seemed to have a larger impact than the number of clusters when falsely assuming measurement invariance in the moderation estimation. ICC was trivially related to the moderation estimates. Overall, when testing moderation effects with categorical moderators, employing a model that accounts for the measurement (non)invariance structure of the predictor and/or the outcome is recommended.

  4. A Late Pleistocene sea level stack

    NASA Astrophysics Data System (ADS)

    Spratt, Rachel M.; Lisiecki, Lorraine E.

    2016-04-01

    Late Pleistocene sea level has been reconstructed from ocean sediment core data using a wide variety of proxies and models. However, the accuracy of individual reconstructions is limited by measurement error, local variations in salinity and temperature, and assumptions particular to each technique. Here we present a sea level stack (average) which increases the signal-to-noise ratio of individual reconstructions. Specifically, we perform principal component analysis (PCA) on seven records from 0 to 430 ka and five records from 0 to 798 ka. The first principal component, which we use as the stack, describes ~80% of the variance in the data and is similar using either five or seven records. After scaling the stack based on Holocene and Last Glacial Maximum (LGM) sea level estimates, the stack agrees to within 5 m with isostatically adjusted coral sea level estimates for Marine Isotope Stages 5e and 11 (125 and 400 ka, respectively). Bootstrapping and random sampling yield mean uncertainty estimates of 9-12 m (1σ) for the scaled stack. Sea level change accounts for about 45% of the total orbital-band variance in benthic δ18O, compared to a 65% contribution during the LGM-to-Holocene transition. Additionally, the second and third principal components of our analyses reflect differences between proxy records associated with spatial variations in the δ18O of seawater.
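
    A toy version of the stacking procedure, seven synthetic noisy records reduced to their first principal component and then rescaled against assumed modern and LGM sea levels, is sketched below. The records, anchor values, and scaling choice are placeholders, not the paper's data or exact method.

```python
# Illustrative sketch: stack noisy sea-level reconstructions via PCA and
# rescale the first component to metres using two assumed anchor points.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
ages = np.arange(0, 430, 2.0)                            # ka
true_sl = -65 + 65 * np.cos(2 * np.pi * ages / 100.0)    # toy 100-kyr cycle, 0 to -130 m
records = np.stack([true_sl + rng.normal(0, 12, ages.size) for _ in range(7)], axis=1)

pca = PCA(n_components=1)
stack = pca.fit_transform((records - records.mean(0)) / records.std(0)).ravel()

# scale so the stack matches assumed modern (0 m) and LGM (~21 ka, about -130 m) values
i_mod, i_lgm = 0, int(np.argmin(np.abs(ages - 21)))
scale = (0.0 - (-130.0)) / (stack[i_mod] - stack[i_lgm])
stack_m = (stack - stack[i_mod]) * scale                  # metres relative to present
print("scaled stack range (m):", round(float(stack_m.min()), 1), "to", round(float(stack_m.max()), 1))
```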

  5. Estimation of past seepage volumes from calcite distribution in the Topopah Spring Tuff, Yucca Mountain, Nevada

    USGS Publications Warehouse

    Marshall, B.D.; Neymark, L.A.; Peterman, Z.E.

    2003-01-01

    Low-temperature calcite and opal record the past seepage of water into open fractures and lithophysal cavities in the unsaturated zone at Yucca Mountain, Nevada, site of a proposed high-level radioactive waste repository. Systematic measurements of calcite and opal coatings in the Exploratory Studies Facility (ESF) tunnel at the proposed repository horizon are used to estimate the volume of calcite at each site of calcite and/or opal deposition. By estimating the volume of water required to precipitate the measured volumes of calcite in the unsaturated zone, seepage rates of 0.005 to 5 liters/year (l/year) are calculated at the median and 95th percentile of the measured volumes, respectively. These seepage rates are at the low end of the range of seepage rates from recent performance assessment (PA) calculations, confirming the conservative nature of the performance assessment. However, the distribution of the calcite and opal coatings indicate that a much larger fraction of the potential waste packages would be contacted by this seepage than is calculated in the performance assessment.

  6. Stage-discharge relationship in tidal channels

    NASA Astrophysics Data System (ADS)

    Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.

    2016-12-01

    Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models which relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, nonlinearity and nonstationarity of the relationship complicate the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month-long time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models which account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.
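
    For contrast with the nonlinear, time-varying models favored above, the simplest kind of stage-discharge model is a stationary power-law rating curve Q = a(h - h0)^b fitted by linear regression in log space. The sketch below shows only that baseline on synthetic data (it ignores, for example, the bidirectional flows of a tidal channel); the datum offset and coefficients are assumed values.

```python
# Hedged sketch: fit a stationary power-law rating curve Q = a*(h - h0)^b
# by ordinary least squares on log-transformed stage and discharge.
import numpy as np

rng = np.random.default_rng(10)
h0 = 0.2                                             # datum offset (m), assumed known here
stage = rng.uniform(0.4, 2.0, 200)                   # water level (m)
discharge = 3.0 * (stage - h0) ** 1.6 * rng.lognormal(0, 0.1, 200)   # m^3/s, synthetic

X = np.column_stack([np.ones(stage.size), np.log(stage - h0)])
coef, *_ = np.linalg.lstsq(X, np.log(discharge), rcond=None)
a, b = np.exp(coef[0]), coef[1]
print(f"fitted rating curve: Q = {a:.2f} * (h - h0)^{b:.2f}")
```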

  7. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  8. Spectral performance of Square Kilometre Array Antennas - II. Calibration performance

    NASA Astrophysics Data System (ADS)

    Trott, Cathryn M.; de Lera Acedo, Eloy; Wayth, Randall B.; Fagnoni, Nicolas; Sutinjo, Adrian T.; Wakley, Brett; Punzalan, Chris Ivan B.

    2017-09-01

    We test the bandpass smoothness performance of two prototype Square Kilometre Array (SKA) SKA1-Low log-periodic dipole antennas, SKALA2 and SKALA3 ('SKA Log-periodic Antenna'), and the current dipole from the Murchison Widefield Array (MWA) precursor telescope. Throughout this paper, we refer to the output complex-valued voltage response of an antenna when connected to a low-noise amplifier, as the dipole bandpass. In Paper I, the bandpass spectral response of the log-periodic antenna being developed for the SKA1-Low was estimated using numerical electromagnetic simulations and analysed using low-order polynomial fittings, and it was compared with the HERA antenna against the delay spectrum metric. In this work, realistic simulations of the SKA1-Low instrument, including frequency-dependent primary beam shapes and array configuration, are used with a weighted least-squares polynomial estimator to assess the ability of a given prototype antenna to perform the SKA Epoch of Reionisation (EoR) statistical experiments. This work complements the ideal estimator tolerances computed for the proposed EoR science experiments in Trott & Wayth, with the realized performance of an optimal and standard estimation (calibration) procedure. With a sufficient sky calibration model at higher frequencies, all antennas have bandpasses that are sufficiently smooth to meet the tolerances described in Trott & Wayth to perform the EoR statistical experiments, and these are primarily limited by an adequate sky calibration model and the thermal noise level in the calibration data. At frequencies of the Cosmic Dawn, which is of principal interest to SKA as one of the first next-generation telescopes capable of accessing higher redshifts, the MWA dipole and SKALA3 antenna have adequate performance, while the SKALA2 design will impede the ability to explore this era.

  9. Wikipedia usage estimates prevalence of influenza-like illness in the United States in near real-time.

    PubMed

    McIver, David J; Brownstein, John S

    2014-04-01

    Circulating levels of both seasonal and pandemic influenza require constant surveillance to ensure the health and safety of the population. While up-to-date information is critical, traditional surveillance systems can have data availability lags of up to two weeks. We introduce a novel method of estimating, in near-real time, the level of influenza-like illness (ILI) in the United States (US) by monitoring the rate of particular Wikipedia article views on a daily basis. We calculated the number of times certain influenza- or health-related Wikipedia articles were accessed each day between December 2007 and August 2013 and compared these data to official ILI activity levels provided by the Centers for Disease Control and Prevention (CDC). We developed a Poisson model that accurately estimates the level of ILI activity in the American population, up to two weeks ahead of the CDC, with an absolute average difference between the two estimates of just 0.27% over 294 weeks of data. Wikipedia-derived ILI models performed well through both abnormally high media coverage events (such as during the 2009 H1N1 pandemic) as well as unusually severe influenza seasons (such as the 2012-2013 influenza season). Wikipedia usage accurately estimated the week of peak ILI activity 17% more often than Google Flu Trends data and was often more accurate in its measure of ILI intensity. With further study, this method could potentially be implemented for continuous monitoring of ILI activity in the US and to provide support for traditional influenza surveillance tools.
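
    A bare-bones version of such a view-count model, a Poisson regression of weekly ILI case counts on log-transformed article views, is sketched below. The articles, counts, and coefficients are synthetic; the study's actual model, article list, and CDC target series are not reproduced here.

```python
# Hedged sketch: Poisson regression of weekly ILI counts on log-transformed
# Wikipedia article view counts, using statsmodels GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
weeks = 294
views = rng.lognormal(mean=10, sigma=0.5, size=(weeks, 3))    # weekly views of 3 articles
eta = -3.0 + 0.4 * np.log(views[:, 0]) - 0.1 * np.log(views[:, 1]) + 0.05 * np.log(views[:, 2])
ili_counts = rng.poisson(np.exp(eta))                         # synthetic weekly ILI counts

X = sm.add_constant(np.log(views))
model = sm.GLM(ili_counts, X, family=sm.families.Poisson()).fit()
print(model.params)                                           # fitted coefficients
print("in-sample correlation with observed counts:",
      round(float(np.corrcoef(ili_counts, model.fittedvalues)[0, 1]), 2))
```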

  10. Wikipedia Usage Estimates Prevalence of Influenza-Like Illness in the United States in Near Real-Time

    PubMed Central

    McIver, David J.; Brownstein, John S.

    2014-01-01

    Circulating levels of both seasonal and pandemic influenza require constant surveillance to ensure the health and safety of the population. While up-to-date information is critical, traditional surveillance systems can have data availability lags of up to two weeks. We introduce a novel method of estimating, in near-real time, the level of influenza-like illness (ILI) in the United States (US) by monitoring the rate of particular Wikipedia article views on a daily basis. We calculated the number of times certain influenza- or health-related Wikipedia articles were accessed each day between December 2007 and August 2013 and compared these data to official ILI activity levels provided by the Centers for Disease Control and Prevention (CDC). We developed a Poisson model that accurately estimates the level of ILI activity in the American population, up to two weeks ahead of the CDC, with an absolute average difference between the two estimates of just 0.27% over 294 weeks of data. Wikipedia-derived ILI models performed well through both abnormally high media coverage events (such as during the 2009 H1N1 pandemic) as well as unusually severe influenza seasons (such as the 2012–2013 influenza season). Wikipedia usage accurately estimated the week of peak ILI activity 17% more often than Google Flu Trends data and was often more accurate in its measure of ILI intensity. With further study, this method could potentially be implemented for continuous monitoring of ILI activity in the US and to provide support for traditional influenza surveillance tools. PMID:24743682

  11. Multi-factorial analysis of class prediction error: estimating optimal number of biomarkers for various classification rules.

    PubMed

    Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter

    2010-12-01

    Machine learning and statistical model-based classifiers have increasingly been used with more complex and high-dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive, under-investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray-based data characteristics on the predictive performance of various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. The optimal number of biomarkers for a classification problem should therefore be estimated taking into account the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble those of simulated data with corresponding levels of data characteristics. An R package optBiomarker implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
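
    A minimal Python sketch of the kind of analysis the abstract describes (not the optBiomarker simulation itself): cross-validated error is traced as a function of the number of top-ranked biomarkers, with feature selection refitted inside each fold to avoid selection bias. The synthetic data and classifier settings are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      # Synthetic "microarray-like" data: few samples, many candidate biomarkers
      X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                                 random_state=0)

      # Cross-validated error as a function of the number of selected biomarkers;
      # selection sits inside the pipeline so it is refitted within each fold.
      for k in (5, 10, 20, 50, 100):
          clf = make_pipeline(SelectKBest(f_classif, k=k),
                              RandomForestClassifier(n_estimators=200, random_state=0))
          error = 1.0 - cross_val_score(clf, X, y, cv=5).mean()
          print(f"{k:4d} biomarkers: CV error = {error:.3f}")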

  12. Processing EOS MLS Level-2 Data

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van; Wu, Dong; Read, William; Jiang, Jonathan; Wagner, Paul; Livesey, Nathaniel; Schwartz, Michael; Filipiak, Mark; Pumphrey, Hugh; Shippony, Zvi

    2006-01-01

    A computer program performs level-2 processing of thermal-microwave-radiance data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS). The purpose of the processing is to estimate the composition and temperature of the atmosphere versus altitude from approximately 8 to 90 km. "Level-2" as used here is a specialists' term signifying both vertical profiles of geophysical parameters along the measurement track of the instrument and processing performed by this or other software to generate such profiles. Designed to be flexible, the program is controlled via a configuration file that defines all aspects of processing, including contents of state and measurement vectors, configurations of forward models, measurement and calibration data to be read, and the manner of inverting the models to obtain the desired estimates. The program can operate in a parallel form in which one instance of the program acts as a master, coordinating the work of multiple slave instances on a cluster of computers, each slave operating on a portion of the data. Optionally, the configuration file can be made to instruct the software to produce files of simulated radiances based on state vectors formed from sets of geophysical data-product files taken as input.

  13. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    PubMed

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
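
    A minimal Python sketch of one cell of such a Monte Carlo experiment, under assumed parameter values: output is generated from a Cobb-Douglas frontier with normal noise and a half-normal inefficiency term, and quantile regressions at upper quantiles approximate the frontier.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 500

      # Cobb-Douglas frontier in logs: ln y = b0 + b1*ln x + v - u, with
      # v ~ Normal (noise) and u >= 0 half-normal (technical inefficiency)
      ln_x = rng.normal(2.0, 0.5, n)
      v = rng.normal(0.0, 0.1, n)
      u = np.abs(rng.normal(0.0, 0.3, n))
      df = pd.DataFrame({"ln_y": 1.0 + 0.6 * ln_x + v - u, "ln_x": ln_x})

      # Quantile regressions near the upper envelope approximate the frontier
      for q in (0.5, 0.9, 0.95):
          fit = smf.quantreg("ln_y ~ ln_x", df).fit(q=q)
          print(f"q={q}: intercept={fit.params['Intercept']:.3f}, "
                f"slope={fit.params['ln_x']:.3f}")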

  14. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model for which the effective sample size is much smaller. Also when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches are used to assess the model fit in multilevel structural equation model. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  15. Relationships of anxiety scores to academy and field training performance of air traffic control specialists.

    DOT National Transportation Integrated Search

    1989-05-01

    State-trait anxiety scores were used prior to the 1981 strike of air traffic control specialists (ATCSs) to estimate perceived levels of job stress in field studies of this occupational group. The present study assessed the relationship between anxie...

  16. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.

  17. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  18. Congenital heart disease: interrelation between German diagnosis-related groups system and Aristotle complexity score.

    PubMed

    Sinzobahamvya, Nicodème; Photiadis, Joachim; Arenz, Claudia; Kopp, Thorsten; Hraska, Viktor; Asfour, Boulos

    2010-06-01

    The Diagnosis-Related Groups (DRG) system postulates that inpatient stays with similar levels of clinical complexity are expected to consume similar amounts of resources. Applied to surgery for congenital heart disease, this suggests that the higher the complexity of procedures as estimated by the Aristotle complexity score, the higher hospital reimbursement should be. This study analyses how well the case-mix index (CMI) generated by the German DRG 2009 version correlates with the Aristotle score. A total of 456 DRG cases from the year 2008 were regrouped according to German DRG 2009, and the related cost-weight values and overall CMI were evaluated. Corresponding Aristotle basic and comprehensive complexity scores (ABC and ACC) and levels were determined. Associated surgical performance (Aristotle score times hospital survival) was estimated. Spearman 'r' correlation coefficients were calculated between Aristotle scores and cost-weights. The goodness of fit 'r²' from the derived regression was determined. Correlation was considered optimal if Spearman 'r' and the derived goodness of fit 'r²' approached a value of 1. The CMI was 8.787, while the mean ABC and ACC scores were 7.64 and 9.27, respectively. Hospital survival was 98.5%; surgical performance therefore attained 7.53 (ABC score) and 9.13 (ACC score). ABC and ACC scores and levels correlated positively with cost-weights. With a Spearman 'r' of 1 and goodness of fit 'r²' of 0.9790, the scores of the six ACC levels correlated best. The regression equation was y = 0.5591 + 0.939x, in which y stands for cost-weight (CMI) and x for the score of the ACC level. The ACC score correlates almost perfectly with the corresponding cost-weights (CMI) generated by the German DRG 2009. It could therefore be used as the basis for hospital reimbursement, compensating in conformity with the complexity of procedures. The extrapolated CMI in this series would be 9.264. Modulation of reimbursement according to surgical performance could be established and thus 'reward' quality in congenital heart surgery. Copyright 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
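
    As a small worked check using only the regression reported above, applying y = 0.5591 + 0.939x to the mean ACC score of 9.27 reproduces the extrapolated CMI of 9.264 quoted in the abstract.

      # Reported regression of cost-weight on the ACC-level score
      def cost_weight_from_acc(acc_score: float) -> float:
          return 0.5591 + 0.939 * acc_score

      print(round(cost_weight_from_acc(9.27), 3))   # -> 9.264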

  19. Predicting of biomass in Brazilian tropical dry forest: a statistical evaluation of generic equations.

    PubMed

    Lima, Robson B DE; Alves, Francisco T; Oliveira, Cinthia P DE; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Dry tropical forests are a key component in the global carbon cycle, and their biomass estimates depend almost exclusively on fitted equations for multi-species or individual-species data. Therefore, a systematic evaluation of statistical models through validation of estimates of aboveground biomass stocks is justifiable. In this study, the capacity of generic and specific equations obtained from different locations in Mexico and Brazil to estimate aboveground biomass was analyzed at the multi-species level and for four different species. Generic equations developed in Mexico and Brazil performed better in estimating tree biomass for multi-species data. For Poincianella bracteosa and Mimosa ophthalmocentra, only the Sampaio and Silva (2005) generic equation is recommended. These equations indicate lower tendency and lower bias, and biomass estimates from these equations are similar. For the species Mimosa tenuiflora and Aspidosperma pyrifolium and for the genus Croton, the specific regional equations are more recommended, although the generic equation of Sampaio and Silva (2005) is not discarded for biomass estimates. Models considering genus, family, successional group, climatic variables and wood specific gravity should be adjusted and tested, and the resulting equations should be validated at both local and regional levels as well as at the scale of the tropics where dry forest is dominant.

  20. MEASUREMENTS OF THE IONISING RADIATION LEVEL AT A NUCLEAR MEDICINE FACILITY PERFORMING PET/CT EXAMINATIONS.

    PubMed

    Tulik, P; Kowalska, M; Golnik, N; Budzynska, A; Dziuk, M

    2017-05-01

    This paper presents the results of radiation level measurements at workplaces in a nuclear medicine facility performing PET/CT examinations. This study meticulously determines the staff radiation exposure in a PET/CT facility by tracking the path of patient movement. The measurements of the instantaneous radiation exposure were performed using an electronic radiometer with a proportional counter that was equipped with the option of recording the results online. The measurements allowed for visualisation of the staff's instantaneous exposure caused by a patient walking through the department after the administration of 18F-FDG. An estimation of low doses associated with each working step and the exposure during a routine day in the department was possible. The measurements were completed by determining the average radiation level using highly sensitive thermoluminescent detectors. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  2. A comprehensive evaluation of two MODIS evapotranspiration products over the conterminous United States: using point and gridded FLUXNET and water balance ET

    USGS Publications Warehouse

    Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.

    2013-01-01

    Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. Results from this research would guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.

  3. Asynchronous State Estimation for Discrete-Time Switched Complex Networks With Communication Constraints.

    PubMed

    Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li

    2018-05-01

    This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. The event-based communication, signal quantization, and random packet dropout problems are also studied, owing to the limited communication resources. With the help of switched system theory and by resorting to stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.

  4. Research in the application of spectral data to crop identification and assessment, volume 2

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T. (Principal Investigator); Hixson, M. M.; Bauer, M. E.

    1980-01-01

    The development of spectrometry crop development stage models is discussed with emphasis on models for corn and soybeans. One photothermal and four thermal meteorological models are evaluated. Spectral data were investigated as a source of information for crop yield models. Intercepted solar radiation and soil productivity are identified as factors related to yield which can be estimated from spectral data. Several techniques for machine classification of remotely sensed data for crop inventory were evaluated. Early season estimation, training procedures, the relationship of scene characteristics to classification performance, and full frame classification methods were studied. The optimal level for combining area and yield estimates of corn and soybeans is assessed utilizing current technology: digital analysis of LANDSAT MSS data on sample segments to provide area estimates and regression models to provide yield estimates.

  5. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of background concentration of PM2.5 is important to understand the contribution of local emission sources to total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques to estimate PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of background PM concentration to ambient pollution was at a comparable level to the contribution obtained from the previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
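
    A minimal Python sketch of one of the separation methods named above, the Lyne-Hollick one-parameter recursive digital filter, applied in a single forward pass to a PM2.5 series: the high-frequency component attributed to local emissions is filtered out and the remainder is taken as the background. The filter parameter, the single pass and the synthetic series are illustrative; the study tuned parameters against a background site and also applied the other methods.

      import numpy as np

      def lyne_hollick_background(pm25, alpha=0.95):
          # Single forward pass of the one-parameter recursive digital filter:
          # the high-frequency ("local emission") component is filtered out and
          # the remainder is taken as the slowly varying background.
          quick = np.zeros_like(pm25, dtype=float)
          for i in range(1, len(pm25)):
              quick[i] = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (pm25[i] - pm25[i - 1])
              quick[i] = min(max(quick[i], 0.0), pm25[i])   # keep the component physical
          return pm25 - quick

      # Illustrative hourly series: a slow seasonal background plus pollution episodes
      rng = np.random.default_rng(0)
      t = np.arange(24 * 30)
      series = 25 + 10 * np.sin(2 * np.pi * t / (24 * 30)) + rng.gamma(2.0, 5.0, t.size)
      background = lyne_hollick_background(series, alpha=0.95)
      print(f"mean background concentration: {background.mean():.1f} ug/m3")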

  6. Estimating the dose response relationship for occupational radiation exposure measured with minimum detection level.

    PubMed

    Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y

    2004-10-01

    Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation increases with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). The average of the multiply imputed exposure realizations for each individual is then used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
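
    A minimal Python sketch of the multiple-imputation idea (not the paper's Gibbs sampler) for the first scenario, in which individual measurements recorded as zero are known to lie below the MDL: a lognormal is fitted to the detected doses, censored values are drawn from that distribution truncated above at the MDL, and several imputations are averaged. The dose values, MDL and the crude fit that ignores censoring are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      mdl = 0.05                                    # minimum detection level (assumed units)
      doses = np.array([0.0, 0.12, 0.0, 0.30, 0.08, 0.0, 0.22, 0.0, 0.15, 0.06])
      below = doses == 0.0                          # zeros stand in for "below MDL"

      # Fit a lognormal to the detected doses (crude: the fit itself ignores censoring)
      mu = np.log(doses[~below]).mean()
      sigma = np.log(doses[~below]).std(ddof=1)

      def impute_once():
          # Draw censored doses from the fitted lognormal truncated above at the MDL
          upper = (np.log(mdl) - mu) / sigma        # truncation point on the z-scale
          z = stats.truncnorm.rvs(-np.inf, upper, size=below.sum(), random_state=rng)
          filled = doses.copy()
          filled[below] = np.exp(mu + sigma * z)
          return filled

      # Average several imputations, as done before estimating the relative risk
      imputations = np.stack([impute_once() for _ in range(20)])
      print("imputed mean dose per worker:", imputations.mean(axis=0).round(3))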

  7. Prediction of the air-water partition coefficient for perfluoro-2-methyl-3-pentanone using high-level Gaussian-4 composite theoretical methods.

    PubMed

    Rayne, Sierra; Forest, Kaya

    2014-09-19

    The air-water partition coefficient (Kaw) of perfluoro-2-methyl-3-pentanone (PFMP) was estimated using the G4MP2/G4 levels of theory and the SMD solvation model. A suite of 31 fluorinated compounds was employed to calibrate the theoretical method. Excellent agreement between experimental and directly calculated Kaw values was obtained for the calibration compounds. The PCM solvation model was found to yield unsatisfactory Kaw estimates for fluorinated compounds at both levels of theory. The HENRYWIN Kaw estimation program also exhibited poor Kaw prediction performance on the training set. Based on the resulting regression equation for the calibration compounds, the G4MP2-SMD method constrained the estimated Kaw of PFMP to the range 5-8 × 10⁻⁶ M atm⁻¹. The magnitude of this Kaw range indicates almost all PFMP released into the atmosphere or near the land-atmosphere interface will reside in the gas phase, with only minor quantities dissolved in the aqueous phase as the parent compound and/or its hydrate/hydrate conjugate base. Following discharge into aqueous systems not at equilibrium with the atmosphere, significant quantities of PFMP will be present as the dissolved parent compound and/or its hydrate/hydrate conjugate base.

  8. Poverty and mortality among the elderly: measurement of performance in 33 countries 1960-92.

    PubMed

    Wang, J; Jamison, D T; Bos, E; Vu, M T

    1997-10-01

    This paper analyses the effect of income and education on life expectancy and mortality rates among the elderly in 33 countries for the period 1960-92 and assesses how that relationship has changed over time as a result of technical progress. Our outcome variables are life expectancy at age 60 and the probability of dying between age 60 and age 80 for both males and females. The data are from vital-registration based life tables published by national statistical offices for several years during this period. We estimate regressions with determinants that include GDP per capita (adjusted for purchasing power), education and time (as a proxy for technical progress). As the available measure of education failed to account for variation in life expectancy or mortality at age 60, our reported analyses focus on a simplified model with only income and time as predictors. The results indicate that, controlling for income, mortality rates among the elderly have declined considerably over the past three decades. We also find that poverty (as measured by low average income levels) explains some of the variation in both life expectancy at age 60 and mortality rates among the elderly across the countries in the sample. The explained amount of variation is more substantial for females than for males. While poverty does adversely affect mortality rates among the elderly (and the strength of this effect is estimated to be increasing over time), technical progress appears far more important in the period following 1960. Predicted female life expectancy (at age 60) in 1960 at the mean income level in 1960 was, for example 18.8 years; income growth to 1992 increased this by an estimated 0.7 years, whereas technical progress increased it by 2.0 years. We then use the estimated regression results to compare country performance on life expectancy of the elderly, controlling for levels of poverty (or income), and to assess how performance has varied over time. High performing countries, on female life expectancy at age 60, for the period around 1990, included Chile (1.0 years longer life expectancy), China (1.7 years longer), France (2.0 years longer), Japan (1.9 years longer), and Switzerland (1.3 years longer). Poorly performing countries included Denmark (1.1 years shorter life expectancy than predicted from income), Hungary (1.4 years shorter), Iceland (1.2 years shorter), Malaysia (1.6 years shorter), and Trinidad and Tobago (3.9 years shorter). Chile and Switzerland registered major improvements in relative performance over this period; Norway, Taiwan and the USA, in contrast showed major declines in performance between 1980 and the early 1990s.

  9. Subregional Nowcasts of Seasonal Influenza Using Search Trends.

    PubMed

    Kandula, Sasikiran; Hsu, Daniel; Shaman, Jeffrey

    2017-11-06

    Limiting the adverse effects of seasonal influenza outbreaks at state or city level requires close monitoring of localized outbreaks and reliable forecasts of their progression. Whereas forecasting models for influenza or influenza-like illness (ILI) are becoming increasingly available, their applicability to localized outbreaks is limited by the nonavailability of real-time observations of the current outbreak state at local scales. Surveillance data collected by various health departments are widely accepted as the reference standard for estimating the state of outbreaks, and in the absence of surveillance data, nowcast proxies built using Web-based activities such as search engine queries, tweets, and access of health-related webpages can be useful. Nowcast estimates of state and municipal ILI were previously published by Google Flu Trends (GFT); however, validations of these estimates were seldom reported. The aim of this study was to develop and validate models to nowcast ILI at subregional geographic scales. We built nowcast models based on autoregressive (autoregressive integrated moving average; ARIMA) and supervised regression methods (Random forests) at the US state level using regional weighted ILI and Web-based search activity derived from Google's Extended Trends application programming interface. We validated the performance of these methods using actual surveillance data for the 50 states across six seasons. We also built state-level nowcast models using state-level estimates of ILI and compared the accuracy of these estimates with the estimates of the regional models extrapolated to the state level and with the nowcast estimates published by GFT. Models built using regional ILI extrapolated to state level had a median correlation of 0.84 (interquartile range: 0.74-0.91) and a median root mean square error (RMSE) of 1.01 (IQR: 0.74-1.50), with noticeable variability across seasons and by state population size. Model forms that hypothesize the availability of timely state-level surveillance data show significantly lower errors of 0.83 (0.55-0.23). Compared with GFT, the latter model forms have lower errors but also lower correlation. These results suggest that the proposed methods may be an alternative to the discontinued GFT and that further improvements in the quality of subregional nowcasts may require increased access to more finely resolved surveillance data. ©Sasikiran Kandula, Daniel Hsu, Jeffrey Shaman. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.11.2017.
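
    A minimal Python sketch of the supervised-regression variant described (a random forest nowcast of ILI from search-term volumes, scored by RMSE and correlation on held-out weeks); the query matrix and ILI series below are synthetic placeholders, not Extended Trends data.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)

      # Placeholder weekly volumes of a few flu-related query terms and the
      # corresponding surveillance ILI for the same weeks (synthetic here)
      n_weeks, n_terms = 300, 8
      queries = rng.lognormal(0.0, 0.5, size=(n_weeks, n_terms))
      ili = 1.5 * queries[:, 0] + 0.8 * queries[:, 1] + rng.normal(0, 0.2, n_weeks)

      # Train on earlier seasons, nowcast the most recent weeks
      train, test = slice(0, 250), slice(250, n_weeks)
      rf = RandomForestRegressor(n_estimators=300, random_state=0)
      rf.fit(queries[train], ili[train])
      pred = rf.predict(queries[test])

      rmse = np.sqrt(mean_squared_error(ili[test], pred))
      corr = np.corrcoef(ili[test], pred)[0, 1]
      print(f"RMSE = {rmse:.2f}, correlation = {corr:.2f}")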

  10. Systemic estimation of the effect of photodynamic therapy of cancer

    NASA Astrophysics Data System (ADS)

    Kogan, Eugenia A.; Meerovich, Gennadii A.; Torshina, Nadezgda L.; Loschenov, Victor B.; Volkova, Anna I.; Posypanova, Anna M.

    1997-12-01

    The effects of photodynamic therapy (PDT) of cancer need objective and unified estimation in experimental as well as clinical studies. Such estimation must include not only macroscopic changes but also the following complex of morphological criteria: (1) the level of direct tumor damage (direct necrosis and apoptosis); (2) the level of indirect tumor damage (ischemic necrosis); (3) the signs of vascular alterations; (4) the local and systemic antiblastoma resistance; (5) the proliferative activity and malignant potential of surviving tumor tissue. We performed PDT under different regimes using phthalocyanine derivatives. A complex of morphological methods (Ki-67, p53, c-myc, bcl-2) was used. The results showed the connection of the listed morphological criteria with tumor regression.

  11. Large Area Crop Inventory Experiment (LACIE). YES phase 1 yield feasibility report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The author has identified the following significant results. Each state model was separately evaluated to determine whether performance projected to the country level would satisfy a 90/90 criterion. All state models, except the North Dakota and Kansas models, satisfied that criterion both for district estimates aggregated to the state level and for state estimates directly from the models. In addition to the tests of the 90/90 criterion, the models were examined for their ability to adequately respond to fluctuations in weather. This portion of the analysis was based on a subjective interpretation of values of certain descriptive statistics. As a result, 10 of the 12 models were judged to respond inadequately to variation in weather-related variables.

  12. Generic Sensor Modeling Using Pulse Method

    NASA Technical Reports Server (NTRS)

    Helder, Dennis L.; Choi, Taeyoung

    2005-01-01

    Recent development of high spatial resolution satellites such as IKONOS, Quickbird and Orbview enables observation of the Earth's surface with sub-meter resolution. Compared to the 30 meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Various methods, sometimes classified by target shape, were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data using a bridge target as a pulse input. Because of a high-resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations on known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuation of the research using the developed sensor model, this report is dedicated to characterizing MTF estimation via the pulse input method using the Fermi edge detection and 4th order MSG interpolation methods. The relationship between pulse width and the MTF value at Nyquist was studied, including error detection and correction schemes. Pulse target angle sensitivity was studied by using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulations, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3 pixel wide targets.
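
    A minimal Python sketch (illustrative only; it omits the Fermi edge detection and MSG interpolation steps of the report) of how the pulse method yields an MTF estimate: the Fourier transform of the measured pulse profile is divided by that of the ideal rectangular pulse of known target width, and the value at the sensor Nyquist frequency is read off. The profile, PSF and sampling are synthetic assumptions.

      import numpy as np

      # Synthetic oversampled profile across a 3-pixel-wide pulse target:
      # the ideal rectangle is blurred by an assumed Gaussian system PSF
      dx = 0.1                                      # sample spacing in pixels (assumed)
      x = np.arange(-10, 10, dx)
      width = 3.0                                   # pulse (target) width in pixels
      ideal = ((x > -width / 2) & (x < width / 2)).astype(float)
      psf = np.exp(-0.5 * (x / 0.6) ** 2)
      psf /= psf.sum()
      measured = np.convolve(ideal, psf, mode="same")

      # Transfer function = FT(measured pulse) / FT(ideal pulse of known width)
      freqs = np.fft.rfftfreq(x.size, d=dx)         # cycles per pixel
      tf = np.fft.rfft(measured) / np.fft.rfft(ideal)
      mtf = np.abs(tf) / np.abs(tf[0])

      # Read off the MTF at the sensor Nyquist frequency (0.5 cycles/pixel)
      nyquist = np.argmin(np.abs(freqs - 0.5))
      print(f"MTF at Nyquist: {mtf[nyquist]:.3f}")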

  13. Age estimation by pulp-to-tooth area ratio using cone-beam computed tomography: A preliminary analysis.

    PubMed

    Rai, Arpita; Acharya, Ashith B; Naikmasur, Venkatesh G

    2016-01-01

    Age estimation of living or deceased individuals is an important aspect of forensic sciences. Conventionally, the pulp-to-tooth area ratio (PTR) measured from periapical radiographs has been utilized as a nondestructive method of age estimation. Cone-beam computed tomography (CBCT) is a new method to acquire three-dimensional images of the teeth in living individuals. The present study investigated age estimation based on the PTR of the maxillary canines measured in three planes obtained from CBCT image data. Sixty subjects aged 20-85 years were included in the study. For each tooth, the mid-sagittal and mid-coronal sections and three axial sections (at the cementoenamel junction (CEJ), at one-fourth of the root level from the CEJ, and at mid-root) were assessed. PTR was calculated using AutoCAD software after outlining the pulp and tooth. All statistical analyses were performed using the SPSS 17.0 software program. Linear regression analysis showed that only the PTR in the axial plane at the CEJ had a significant correlation with age (r = 0.32; P < 0.05). This is probably because of the clearer demarcation of the pulp and tooth outlines at this level.

  14. Perceived Discrimination and Cognition in Older African Americans

    PubMed Central

    Barnes, L.L.; Lewis, T.T.; Begeny, C.T.; Yu, L.; Bennett, D.A.; Wilson, R.S.

    2012-01-01

    Existing evidence suggests that psychosocial stress is associated with cognitive impairment in older adults. Perceived discrimination is a persistent stressor in African Americans that has been associated with several adverse mental and physical health outcomes. To our knowledge, the association of discrimination with cognition in older African Americans has not been examined. In a cohort of 407 older African Americans without dementia (mean age = 72.9; SD = 6.4), we found that a higher level of perceived discrimination was related to poorer cognitive test performance, particularly episodic memory (estimate = −0.03; SE = .013; p < .05) and perceptual speed tests (estimate = −0.04; SE = .015; p < .05). The associations were unchanged after adjusting for demographics and vascular risk factors, but were attenuated after adjustment for depressive symptoms (Episodic memory estimate = −0.02; SE = 0.01; Perceptual speed estimate = −0.03; SE = 0.02; both p’s = .06). The association between discrimination and several cognitive domains was modified by level of neuroticism. The results suggest that perceived discrimination may be associated with poorer cognitive function, but does not appear to be independent of depressive symptoms. PMID:22595035

  15. The Earthquake Early Warning System In Southern Italy: Performance Tests And Next Developments

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Elia, L.; Martino, C.; Colombelli, S.; Emolo, A.; Festa, G.; Iannaccone, G.

    2011-12-01

    PRESTo (PRobabilistic and Evolutionary early warning SysTem) is the software platform for Earthquake Early Warning (EEW) in Southern Italy, integrating recent algorithms for real-time earthquake location, magnitude estimation and damage assessment into a highly configurable and easily portable package. The system is under active experimentation based on the Irpinia Seismic Network (ISNet). PRESTo processes the live streams of 3C acceleration data for P-wave arrival detection and, while an event is occurring, promptly performs event detection and provides location and magnitude estimates and peak ground shaking predictions at target sites. The earthquake location is obtained by an evolutionary, real-time probabilistic approach based on an equal differential time formulation. At each time step, it uses information from both triggered and not-yet-triggered stations. Magnitude estimation exploits an empirical relationship that correlates magnitude with the filtered peak displacement (Pd) measured over the first 2-4 s of the P-wave signal. Peak ground-motion parameters at any distance can finally be estimated by ground motion prediction equations. Alarm messages containing the updated estimates of these parameters can thus reach target sites before the destructive waves arrive, enabling automatic safety procedures. Using the real-time data streaming from the ISNet network, PRESTo has produced a bulletin for about a hundred low-magnitude events that occurred during the last two years. Meanwhile, the performance of the EEW system was assessed off-line by playing back records of moderate and large events from Italy, Spain and Japan and synthetic waveforms for large historical events in Italy. These tests have shown that, when a dense seismic network is deployed in the fault area, PRESTo produces reliable estimates of earthquake location and size within 5-6 s of the event origin time (To). Estimates are provided as probability density functions whose uncertainty typically decreases with time, with a stable solution obtained within 10 s of To. The regional approach was recently integrated with a threshold-based early warning method for the definition of alert levels and the estimation of the Potential Damaged Zone (PDZ), in which the highest intensity levels are expected. The dominant period tau_c and the peak displacement Pd are simultaneously measured in a 3 s window after the first P arrival. Pd and tau_c are then compared with threshold values, previously established through an empirical regression analysis, that define a decisional table with four alert levels. According to the real-time measured values of Pd and tau_c, each station provides a local alert level that can be used to warn distant sites and to define the extent of the PDZ. Because only low-magnitude events are currently occurring in Irpinia, the integrated system was validated off-line on the M6.3 2009 Central Italy earthquake and ten large Japanese events. The results confirmed the feasibility and robustness of the approach, providing reliable predictions of the earthquake damaging effects, which is relevant information for the efficient planning of rescue operations in the immediate post-event emergency phase.
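
    A minimal Python sketch of a threshold-based decisional table of the kind described: Pd and tau_c measured over the first 3 s of P-wave signal are compared against pre-set thresholds to assign one of four alert levels. The threshold values and level semantics below are placeholders, not the calibrated values from the regional regression analysis.

      # Placeholder thresholds (the calibrated ones come from the regression analysis)
      PD_THRESHOLD = 0.2      # cm, peak displacement over the first 3 s of P wave
      TAUC_THRESHOLD = 1.0    # s, dominant period over the same window

      def alert_level(pd_cm, tau_c_s):
          # Four-level decisional table combining local shaking and event-size proxies
          pd_high = pd_cm >= PD_THRESHOLD        # strong shaking expected nearby
          tauc_high = tau_c_s >= TAUC_THRESHOLD  # large magnitude expected
          if pd_high and tauc_high:
              return 3    # damaging event, station likely inside the PDZ
          if tauc_high:
              return 2    # large but distant event
          if pd_high:
              return 1    # moderate local shaking from a smaller, closer event
          return 0        # no alert

      print(alert_level(0.35, 1.4))   # -> 3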

  16. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    NASA Astrophysics Data System (ADS)

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen

    2016-08-01

    Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2-3 km away.
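
    A minimal Python sketch of a dynamic mean-field adjustment of the kind evaluated (the study's exact scheme may differ): each radar time step is rescaled by the ratio of gauge to radar accumulations over the preceding aggregation window, here 15 minutes of synthetic 1-minute data.

      import numpy as np

      def dynamic_adjustment(radar, gauge, window):
          # Rescale each time step by the gauge/radar ratio accumulated over the
          # preceding `window` steps (a single mean-field bias factor)
          adjusted = radar.astype(float).copy()
          for t in range(len(radar)):
              lo = max(0, t - window)
              g, r = gauge[lo:t].sum(), radar[lo:t].sum()
              factor = g / r if r > 0 else 1.0      # no adjustment without recent rain
              adjusted[t] = radar[t] * factor
          return adjusted

      # Synthetic 1-minute rain intensities (mm/h) from a radar pixel and nearby gauges
      rng = np.random.default_rng(3)
      gauge = rng.gamma(2.0, 2.0, 120)
      radar = np.clip(0.7 * gauge + rng.normal(0.0, 0.5, 120), 0.0, None)
      print(dynamic_adjustment(radar, gauge, window=15).round(1)[:10])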

  17. PyCoTools: A Python Toolbox for COPASI.

    PubMed

    Welsh, Ciaran M; Fullard, Nicola; Proctor, Carole J; Martinez-Guimera, Alvaro; Isfort, Robert J; Bascom, Charles C; Tasseff, Ryan; Przyborski, Stefan A; Shanley, Daryl P

    2018-05-22

    COPASI is an open source software package for constructing, simulating and analysing dynamic models of biochemical networks. COPASI is primarily intended to be used with a graphical user interface, but it is often desirable to access COPASI features programmatically through a high-level interface. PyCoTools is a Python package aimed at providing a high-level interface to COPASI tasks with an emphasis on model calibration. PyCoTools enables the construction of COPASI models and the execution of a subset of COPASI tasks including time courses, parameter scans and parameter estimations. Additional 'composite' tasks which use COPASI tasks as building blocks are available for increasing parameter estimation throughput, performing identifiability analysis and performing model selection. PyCoTools supports exploratory data analysis on parameter estimation data to assist with troubleshooting model calibrations. We demonstrate PyCoTools by posing a model selection problem designed to showcase PyCoTools within a realistic scenario. The aim of the model selection problem is to test the feasibility of three alternative hypotheses in explaining experimental data derived from neonatal dermal fibroblasts in response to TGF-β over time. PyCoTools is used to critically analyse the parameter estimations and propose strategies for model improvement. PyCoTools can be downloaded from the Python Package Index (PyPI) using the command 'pip install pycotools' or directly from GitHub (https://github.com/CiaranWelsh/pycotools). Documentation is available at http://pycotools.readthedocs.io. Supplementary data are available at Bioinformatics.

  18. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved a RMSE of 1.06 (±0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
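
    A minimal Python sketch of the evaluation loop described, using polynomial regression (one of the three model families reported) with leave-one-patient-out cross-validation scored by RMSE; the voxel features, patient grouping and density values are synthetic stand-ins for the co-registered mpMRI/histology data.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneGroupOut
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(0)

      # Synthetic stand-ins: per-voxel mpMRI features and histology cell density
      # (10^3 cells/mm^2), with voxels grouped by patient
      n_voxels, n_patients = 3000, 30
      X = rng.normal(size=(n_voxels, 3))
      density = 2.0 - 0.8 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2] + rng.normal(0, 0.5, n_voxels)
      patient = rng.integers(0, n_patients, n_voxels)

      # Polynomial regression, evaluated with leave-one-patient-out cross-validation
      model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      errors = []
      for train, test in LeaveOneGroupOut().split(X, density, groups=patient):
          model.fit(X[train], density[train])
          pred = model.predict(X[test])
          errors.append(np.sqrt(np.mean((pred - density[test]) ** 2)))
      print(f"RMSE = {np.mean(errors):.2f} (+/- {np.std(errors):.2f}) x10^3 cells/mm^2")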

  19. The acoustics of a small-scale helicopter rotor in hover

    NASA Technical Reports Server (NTRS)

    Kitaplioglu, Cahit

    1989-01-01

    A 2.1 m diameter, 1/6-scale model helicopter main rotor was tested in hover in the test section of the NASA Ames 40- by 80-foot wind tunnel. Performance and noise data on a small-scale rotor at various thrust coefficients and tip Mach numbers were obtained for comparison with existing data on similar full-scale helicopter rotors. These data form part of a data base to permit the estimation of scaling effects on various rotor noise mechanisms. Another objective was to contribute to a data base that will permit the estimation of facility effects on acoustic testing. Acoustic 1/3-octave-band spectra are presented, together with variations of overall acoustic levels with rotor performance, microphone distance, and directivity angle.

  20. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Letschert, Virginie E.; Bojda, Nicholas; Ke, Jing

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
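
    A minimal Python sketch of the consumer NPV test implied by the cost-effective potential scenario: for a candidate efficiency level, the discounted stream of energy-bill savings over the appliance lifetime is compared with the incremental purchase cost, and the highest level with non-negative NPV remains cost effective. All numbers below are placeholders, not BUENAS inputs or outputs.

      def consumer_npv(incremental_cost, annual_kwh_saved, electricity_price,
                       lifetime_years, discount_rate):
          # NPV to the consumer: discounted bill savings minus extra purchase cost
          savings = sum(annual_kwh_saved * electricity_price / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
          return savings - incremental_cost

      # Placeholder candidate levels: (incremental cost in $, kWh saved per year)
      for cost, kwh in [(10, 50), (40, 120), (120, 180)]:
          npv = consumer_npv(cost, kwh, electricity_price=0.12,
                             lifetime_years=12, discount_rate=0.05)
          print(f"cost ${cost:>4}, {kwh} kWh/yr -> NPV ${npv:,.0f}")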

  1. Propulsive efficiency of the underwater dolphin kick in humans.

    PubMed

    von Loebbecke, Alfred; Mittal, Rajat; Fish, Frank; Mark, Russell

    2009-05-01

    Three-dimensional fully unsteady computational fluid dynamic simulations of five Olympic-level swimmers performing the underwater dolphin kick are used to estimate the swimmer's propulsive efficiencies. These estimates are compared with those of a cetacean performing the dolphin kick. The geometries of the swimmers and the cetacean are based on laser and CT scans, respectively, and the stroke kinematics is based on underwater video footage. The simulations indicate that the propulsive efficiency for human swimmers varies over a relatively wide range from about 11% to 29%. The efficiency of the cetacean is found to be about 56%, which is significantly higher than the human swimmers. The computed efficiency is found not to correlate with either the slender body theory or with the Strouhal number.

  2. Interplanetary laser ranging - an emerging technology for planetary science missions

    NASA Astrophysics Data System (ADS)

    Dirkx, D.; Vermeersen, L. L. A.

    2012-09-01

    Interplanetary laser ranging (ILR) is an emerging technology for very high accuracy distance determination between Earth-based stations and spacecraft or landers at interplanetary distances. It has evolved from laser ranging to Earth-orbiting satellites, modified with active laser transceiver systems at both ends of the link instead of the passive space-based retroreflectors. It has been estimated that this technology can be used for mm- to cm-level accuracy range determination at interplanetary distances [2, 7]. Work is being performed in the ESPaCE project [6] to evaluate in detail the potential and limitations of this technology by means of bottom-up laser link simulation, allowing for a reliable performance estimate from mission architecture and hardware characteristics.

  3. Advanced photovoltaic solar array development

    NASA Technical Reports Server (NTRS)

    Kurland, Richard M.; Stella, Paul

    1989-01-01

    Phase 2 of the Advanced Photovoltaic Solar Array (APSA) program, started in mid-1987, is currently in progress to fabricate prototype wing hardware that will lead to wing integration and testing in 1989. The design configuration and key details are reviewed. A status of prototype hardware fabricated to date is provided. Results from key component-level tests are discussed. Revised estimates of array-level performance as a function of solar cell device technology for geosynchronous missions are given.

  4. A system level model for preliminary design of a space propulsion solid rocket motor

    NASA Astrophysics Data System (ADS)

    Schumacher, Daniel M.

    Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. In contrast, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and the number of possible system configurations make it impractical to trade off the design parameters with manual techniques. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system-level model utilizes a genetic algorithm to perform the necessary population searches, efficiently replacing the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, are achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.

  5. SAR target recognition and posture estimation using spatial pyramid pooling within CNN

    NASA Astrophysics Data System (ADS)

    Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-01-01

    Many convolutional neural network (CNN) architectures have been proposed to strengthen performance on synthetic aperture radar automatic target recognition (SAR-ATR) and have obtained state-of-the-art results on target classification with the MSTAR database, but few methods address the estimation of the depression angle and azimuth angle of targets. To learn better representations of feature hierarchies for both the 10-class target classification task and the target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a high-level hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves recognition accuracy as high as 99.57% on the 10-class target classification task, on par with the most recent state-of-the-art methods, and also performs well on the target posture estimation tasks, which address depression angle and azimuth angle variation. Moreover, the results point toward the application of deep learning to SAR target posture description.
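
    The exact network is not described in this record; the sketch below illustrates only the spatial pyramid pooling operation itself, which converts a convolved feature map of arbitrary spatial size into a fixed-length descriptor by max-pooling over progressively finer grids. The pyramid levels and feature-map size shown are assumptions.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length descriptor.

    Each pyramid level divides the map into level x level bins and takes the
    maximum per bin, so the output length is C * sum(l*l for l in levels)
    regardless of H and W (levels must not exceed the spatial size).
    """
    c, h, w = feature_map.shape
    pooled = []
    for level in levels:
        # Bin index ranges; np.array_split handles H, W not divisible by level.
        rows = np.array_split(np.arange(h), level)
        cols = np.array_split(np.arange(w), level)
        for r in rows:
            for col in cols:
                block = feature_map[:, r[0]:r[-1] + 1, col[0]:col[-1] + 1]
                pooled.append(block.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Example: a 64-channel convolved map of arbitrary spatial size.
fmap = np.random.rand(64, 17, 23)
descriptor = spatial_pyramid_pool(fmap)
print(descriptor.shape)  # (64 * (1 + 4 + 16),) = (1344,)
```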

  6. Estimation of exciton reverse transfer for variable spectra and high efficiency in interlayer-based organic light-emitting devices

    NASA Astrophysics Data System (ADS)

    Liu, Shengqiang; Zhao, Juan; Huang, Jiang; Yu, Junsheng

    2016-12-01

    Organic light-emitting devices (OLEDs) with three different exciton adjusting interlayers (EALs), inserted between two complementary blue and yellow emitting layers, are fabricated to demonstrate the relationship between the EAL and device performance. The results show that variations in the type and thickness of the EAL provide different degrees of exciton adjustment and distribution control. However, we also find that reverse Dexter transfer of triplet excitons from the light-emitting layer to the EAL is an energy loss path, which detrimentally affects the electroluminescent (EL) spectral performance and device efficiency in the different EAL-based devices. Based on exciton distribution and integration, an estimation of exciton reverse transfer across a triplet energy level barrier is developed to simulate the exciton behavior. The estimation results also capture the relationship between the EAL and device efficiency through an exciton reverse transfer probability parameter. The estimation of exciton reverse transfer discloses the crucial role of the EALs in interlayer-based OLEDs for achieving variable EL spectra and high efficiency.

  7. Development of an Advanced Grid-Connected PV-ECS System Considering Solar Energy Estimation

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Habibur; Yamashiro, Susumu; Nakamura, Koichi

    In this paper, the development and performance of a viable distributed grid-connected power generation system, a Photovoltaic-Energy Capacitor System (PV-ECS) that incorporates solar energy estimation, are described. Instead of a conventional battery, Electric Double Layer Capacitors (EDLCs) are used as the storage device, with a Photovoltaic (PV) panel generating power from solar energy. The system can generate power by PV, store energy when the load demand is low, and supply the stored energy to the load during periods of peak demand. To realize the load leveling function properly, the system will also buy power from the grid line when load demand is high. Since the power taken from the grid line depends on the PV output power, a procedure has been suggested to estimate the PV output power by calculating solar radiation. In order to set the optimum value of the purchased power, a simulation program has also been developed. The performance of the system has been studied for different load patterns in different weather conditions by using the estimated PV output power with the help of the simulation program.
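
    The paper's PV output estimation procedure is only summarized here; a minimal irradiance-to-power sketch of the idea, with assumed panel area, efficiency, and temperature derating (none taken from the paper), might look like this:

```python
import math

def pv_output_kw(irradiance_w_m2, panel_area_m2=20.0, efficiency=0.15,
                 cell_temp_c=25.0, temp_coeff_per_c=-0.004):
    """Estimate PV array output (kW) from plane-of-array irradiance.

    The area, efficiency, and temperature coefficient are illustrative
    assumptions, not values from the paper.
    """
    derate = 1.0 + temp_coeff_per_c * (cell_temp_c - 25.0)
    return irradiance_w_m2 * panel_area_m2 * efficiency * derate / 1000.0

# Hypothetical clear-sky irradiance profile over a day, of the kind that would
# feed the decision on how much power to buy from the grid at peak load.
hours = range(6, 19)
profile = {h: 900.0 * math.sin(math.pi * (h - 6) / 12) for h in hours}
for h, g in profile.items():
    print(h, round(pv_output_kw(g, cell_temp_c=35.0), 2), "kW")
```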

  8. Co-occurrences Between Adolescent Substance Use and Academic Performance: School Context Influences a Multilevel-Longitudinal Perspective

    PubMed Central

    Andrade, Fernando H.

    2014-01-01

    A growing body of literature has linked substance use and academic performance exploring substance use as a predictor of academic performance or vice versa. This study uses a different approach conceptualizing substance use and academic performance as parallel outcomes and exploring two topics: its multilevel-longitudinal association and school contextual effects on both outcomes. Using multilevel Confirmatory Factor Analysis and multilevel-longitudinal analyses, the empirical estimates relied on 7843 students nested in 114 schools (Add Health study). The main finding suggests that the correlation between substance use and academic performance was positive at the school level in contraposition to the negative relationship at the individual level. Additional findings suggest a positive effect of a school risk factor on substance use and a positive effect of academic pressure on academic performance. These findings represent a contribution to our understanding of how schools could affect the relationship between academic performance and substance use. PMID:25057764

  9. Comparison of sea surface flux measured by instrumented aircraft and ship during SOFIA and SEMAPHORE experiments

    NASA Astrophysics Data System (ADS)

    Durand, Pierre; Dupuis, HéLèNe; Lambert, Dominique; BéNech, Bruno; Druilhet, Aimé; Katsaros, Kristina; Taylor, Peter K.; Weill, Alain

    1998-10-01

    Two major campaigns (Surface of the Oceans, Fluxes and Interactions with the Atmosphere (SOFIA) and Structure des Echanges Mer-Atmosphère, Propriétés des Hétérogénéités Océaniques: Recherche Expérimentale (SEMAPHORE)) devoted to the study of ocean-atmosphere interaction were conducted in 1992 and 1993, respectively, in the Azores region. Among the various platforms deployed, instrumented aircraft and ship allowed the measurement of the turbulent flux of sensible heat, latent heat, and momentum. From coordinated missions we can evaluate the sea surface fluxes from (1) bulk relations and mean measurements performed aboard the ship in the atmospheric surface layer and (2) turbulence measurements aboard aircraft, which allowed the flux profiles to be estimated through the whole atmospheric boundary layer and therefore to be extrapolated toward the sea surface level. Continuous ship fluxes were calculated with bulk coefficients deduced from inertial-dissipation measurements in the same experiments, whereas aircraft fluxes were calculated with the eddy-correlation technique. We present a comparison between these two estimates. Although the momentum fluxes agree quite well, aircraft estimates of sensible and latent heat flux are lower than those of the ship. This result is surprising, since aircraft momentum flux estimates are often considered as much less accurate than scalar flux estimates. The various sources of errors in the aircraft and ship flux estimates are discussed. For sensible and latent heat flux, random errors on aircraft estimates, as well as variability of ship flux estimates, are lower than the discrepancy between the two platforms, whereas the momentum flux estimates cannot be considered as significantly different. Furthermore, the consequence of the high-pass filtering of the aircraft signals on the flux values is analyzed; its effect is weak at the lowest altitudes flown and therefore cannot explain the discrepancies between the two platforms, but it becomes considerable at upper levels in the boundary layer. From arguments linked to the imbalance of the surface energy budget, established during previous campaigns performed over land surfaces with aircraft, we conclude that aircraft heat fluxes are probably also underestimated over the sea.
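
    The bulk relations referred to are the standard bulk aerodynamic formulas; a minimal sketch is given below, with placeholder transfer coefficients standing in for the inertial-dissipation-derived values used in the campaigns.

```python
RHO_AIR = 1.2      # kg m-3, near-surface air density
CP_AIR = 1005.0    # J kg-1 K-1
LV = 2.5e6         # J kg-1, latent heat of vaporization

def bulk_fluxes(u10, t_sea, t_air, q_sea, q_air,
                cd=1.2e-3, ch=1.1e-3, ce=1.2e-3):
    """Standard bulk aerodynamic flux estimates from mean ship data.

    u10  : 10-m wind speed (m/s)
    t_*  : sea-surface / air temperature (K)
    q_*  : saturation / air specific humidity (kg/kg)
    cd, ch, ce : drag and transfer coefficients; placeholder values here,
    whereas the campaigns derived them from inertial-dissipation measurements.
    """
    tau = RHO_AIR * cd * u10 ** 2                        # momentum flux (N m-2)
    h_s = RHO_AIR * CP_AIR * ch * u10 * (t_sea - t_air)  # sensible heat (W m-2)
    h_l = RHO_AIR * LV * ce * u10 * (q_sea - q_air)      # latent heat (W m-2)
    return tau, h_s, h_l

print(bulk_fluxes(u10=8.0, t_sea=293.5, t_air=292.0, q_sea=0.0145, q_air=0.012))
```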

  10. Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model

    DTIC Science & Technology

    2016-11-01

    This record excerpt consists of table-of-contents and figure-list fragments from the report, covering the value added by decreased grid spacing, a performance comparison of two WRE–N configurations (Dumais WRE–N with FDDA vs. Passner WRE–N with FDDA), and figures of bias and RMSE errors for the three grids for 2-m-AGL temperature (TMP, K) and dew point (DPT, K).

  11. Training Transfer: Not the 10% Solution

    ERIC Educational Resources Information Center

    Farrington, Jeanne

    2011-01-01

    Training transfer is a key concern for organizational stakeholders, training professionals, and researchers. Since Georgenson's (1982) article on transfer, his conversational gambit that only 10% of training transfers to performance on the job has been quoted often. This estimate suggests a low level of success with training programs in general,…

  12. Logistics Company Carrier Partner 2.0.15 Tool: Technical Documentation 2015 Data Year - United States Version

    EPA Pesticide Factsheets

    This SmartWay Logistics 2.0.15 Tool is intended to help logistics companies estimate and assess their emission performance levels as well as their total emissions associated with goods movement in the U.S. freight rail, barge, air and t

  13. A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers

    EPA Science Inventory

    This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology costs and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NO...

  14. Health Literacy among Adults: A Study from Turkey

    ERIC Educational Resources Information Center

    Ozdemir, H.; Alper, Z.; Uncu, Y.; Bilgel, N.

    2010-01-01

    Patients' health literacy is increasingly recognized as a critical factor affecting health communication and outcomes. We performed this study to assess the levels of health literacy by using Rapid Estimate of Adult Literacy in Medicine (REALM) and Newest Vital Sign (NVS) instruments. Patients (n = 456) at a family medicine clinic completed…

  15. Schooling Quality in Eastern Europe: Educational Production during Transition

    ERIC Educational Resources Information Center

    Ammermuller, A.; Heijke, H.; Woszmann, L.

    2005-01-01

    This paper uses student-level Third International Mathematics and Science Study (TIMSS) data to analyze the determinants of schooling quality for seven Eastern European transition countries by estimating educational production functions. The results show substantial effects of student background on educational performance and a much lower impact…

  16. On predicting contamination levels of HALOE optics aboard UARS using direct simulation Monte Carlo

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1993-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flowfield and surface conditions and geometric orientations in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. Problems resolving species outgassing and vent flux rates that varied over many orders of magnitude were handled using species weighting factors. Results relating to contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface are presented, along with data related to code performance. Using procedures developed in standard contamination analyses, the cumulative level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated to be about 2700 Å.

  17. Estimating annual suspended-sediment loads in the northern and central Appalachian Coal region

    USGS Publications Warehouse

    Koltun, G.F.

    1985-01-01

    Multiple-regression equations were developed for estimating the annual suspended-sediment load, for a given year, from small to medium-sized basins in the northern and central parts of the Appalachian coal region. The regression analysis was performed with data for land use, basin characteristics, streamflow, rainfall, and suspended-sediment load for 15 sites in the region. Two variables, the maximum mean-daily discharge occurring within the year and the annual peak discharge, explained much of the variation in the annual suspended-sediment load. Separate equations were developed employing each of these discharge variables. Standard errors for both equations are relatively large, which suggests that future predictions will probably have a low level of precision. This level of precision, however, may be acceptable for certain purposes. It is therefore left to the user to assess whether the level of precision provided by these equations is acceptable for the intended application.
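
    The report's two regression equations are not reproduced in this record; the sketch below shows the usual log-log fit of annual load against a single discharge variable, using synthetic data purely for illustration.

```python
import numpy as np

# Hypothetical data for 15 sites: maximum mean-daily discharge (m3/s) and
# annual suspended-sediment load (tonnes). Values are synthetic, not the
# report's observations.
q_max = np.array([12, 20, 35, 50, 8, 15, 60, 25, 40, 18, 30, 55, 10, 22, 45], float)
load = np.array([300, 700, 1500, 2600, 180, 450, 3500, 900, 1900,
                 600, 1200, 2900, 250, 800, 2200], float)

# Fit log10(load) = b0 + b1 * log10(Qmax); least squares in log space keeps the
# multiplicative error structure typical of sediment-rating relations.
b1, b0 = np.polyfit(np.log10(q_max), np.log10(load), deg=1)

def predict_annual_load(q):
    return 10 ** (b0 + b1 * np.log10(q))

print(f"load ~ {10**b0:.1f} * Q^{b1:.2f}")
print(predict_annual_load(33.0))
```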

  18. Surgical waste audit of 5 total knee arthroplasties

    PubMed Central

    Stall, Nathan M.; Kagoma, Yoan K.; Bondy, Jennifer N.; Naudie, Douglas

    2013-01-01

    Background Operating rooms (ORs) are estimated to generate up to one-third of hospital waste. At the London Health Sciences Centre, prosthetics and implants represent 17% of the institution’s ecological footprint. To investigate waste production associated with total knee arthroplasties (TKAs), we performed a surgical waste audit to gauge the environmental impact of this procedure and generate strategies to improve waste management. Methods We conducted a waste audit of 5 primary TKAs performed by a single surgeon in February 2010. Waste was categorized into 6 streams: regular solid waste, recyclable plastics, biohazard waste, laundered linens, sharps and blue sterile wrap. Volume and weight of each stream was quantified. We used Canadian Joint Replacement Registry data (2008–2009) to estimate annual weight and volume totals of waste from all TKAs performed in Canada. Results The average surgical waste (excluding laundered linens) per TKA was 13.3 kg, of which 8.6 kg (64.5%) was normal solid waste, 2.5 kg (19.2%) was biohazard waste, 1.6 kg (12.1%) was blue sterile wrap, 0.3 kg (2.2%) was recyclables and 0.3 kg (2.2%) was sharps. Plastic wrappers, disposable surgical linens and personal protective equipment contributed considerably to total waste. We estimated that landfill waste from all 47 429 TKAs performed in Canada in 2008–2009 was 407 889 kg by weight and 15 272 m3 by volume. Conclusion Total knee arthroplasties produce substantial amounts of surgical waste. Environmentally friendly surgical products and waste management strategies may allow ORs to reduce the negative impacts of waste production without compromising patient care. Level of evidence Level IV, case series. PMID:23351497
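
    The reported national landfill total is consistent with scaling the per-case regular solid waste by the number of TKAs, as a quick check shows (assuming the weight scale-up uses the 8.6 kg regular-solid-waste average):

```python
tka_per_year = 47_429          # TKAs performed in Canada, 2008-2009
solid_waste_per_tka_kg = 8.6   # regular (landfill) solid waste per case

print(tka_per_year * solid_waste_per_tka_kg)  # ~407,889 kg, matching the reported total
```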

  19. Constellation Program Life-cycle Cost Analysis Model (LCAM)

    NASA Technical Reports Server (NTRS)

    Prince, Andy; Rose, Heidi; Wood, James

    2008-01-01

    The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960's, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Lifecycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.
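
    LCAM's actual cost estimating relationships (CERs) are not given in this record; a generic weight-based power-law CER of the kind used in top-level parametric estimates, with placeholder coefficients, might look like the following, with the result reported as a change relative to a baseline option in the way LCAM's trade studies are described.

```python
def parametric_cost(dry_mass_kg, a=2.5, b=0.7, complexity=1.0):
    """Generic power-law cost estimating relationship (CER).

    cost [$M] = a * (dry mass in kg) ** b * complexity factor.
    The coefficients a, b and the complexity multiplier are illustrative
    placeholders, not LCAM's actual CERs.
    """
    return a * dry_mass_kg ** b * complexity

# Relative comparison of a hypothetical design option against a baseline.
baseline = parametric_cost(8000)
option_a = parametric_cost(7200)
print((option_a - baseline) / baseline)  # fractional cost change vs. baseline
```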

  20. The empirical Bayes estimators of fine-scale population structure in high gene flow species.

    PubMed

    Kitada, Shuichi; Nakamichi, Reiichiro; Kishino, Hirohisa

    2017-11-01

    An empirical Bayes (EB) pairwise F_ST estimator was previously introduced and evaluated for its performance by numerical simulation. In this study, we conducted coalescent simulations, generated genetic population structure mechanistically, and compared the performance of the EB F_ST with Nei's G_ST, Nei and Chesser's bias-corrected G_ST (G_ST_NC), Weir and Cockerham's θ (θ_WC) and θ with finite sample correction (θ_WC_F). We also introduced EB estimators for Hedrick's G'_ST and Jost's D. We applied these estimators to publicly available SNP genotypes of Atlantic herring. We also examined the power to detect the environmental factors causing the population structure. Our coalescent simulations revealed that the finite sample correction of θ_WC is necessary to assess population structure using pairwise F_ST values. For microsatellite markers, EB F_ST performed the best among the present estimators regarding both bias and precision under high gene flow scenarios (F_ST ≤ 0.032). For 300 SNPs, EB F_ST had the highest precision in all cases, but its bias was negative and larger than those of G_ST_NC and θ_WC_F in all cases. G_ST_NC and θ_WC_F performed very similarly at all levels of F_ST. As the number of loci increased up to 10,000, the precision of G_ST_NC and θ_WC_F became slightly better than that of EB F_ST for cases with F_ST ≥ 0.004, even though the size of the bias remained constant. The EB estimators described the fine-scale population structure of the herring and revealed that ~56% of the genetic differentiation was caused by sea surface temperature and salinity. The R package finepop implementing all estimators used here is available on CRAN. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
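
    The EB estimators themselves require the hierarchical machinery of the finepop package; for orientation, the classical Nei G_ST that the bias-corrected and EB variants build on can be computed directly from allele frequencies, as in this illustrative biallelic, two-population sketch (not the authors' estimator).

```python
import numpy as np

def nei_gst(allele_freqs):
    """Nei's G_ST for a single biallelic locus.

    allele_freqs: reference-allele frequency in each population.
    G_ST = (H_T - H_S) / H_T, with H_S the mean within-population expected
    heterozygosity and H_T the expected heterozygosity at the pooled mean
    frequency. No finite-sample correction is applied; G_ST_NC and the EB
    estimator add bias corrections on top of this quantity.
    """
    p = np.asarray(allele_freqs, float)
    h_s = np.mean(2 * p * (1 - p))
    p_bar = p.mean()
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t

# Two populations with slightly different allele frequencies give a small
# G_ST, the high-gene-flow regime the paper focuses on.
print(nei_gst([0.48, 0.52]))
```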

  1. Noise levels from a model turbofan engine with simulated noise control measures applied

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Woodward, Richard P.

    1993-01-01

    A study of estimated full-scale noise levels based on measured levels from the Advanced Ducted Propeller (ADP) sub-scale model is presented. Testing of this model was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. Effective Perceived Noise Level (EPNL) estimates for the baseline configuration are documented, and also used as the control case in a study of the potential benefits of two categories of noise control. The effect of active noise control is evaluated by artificially removing various rotor-stator interaction tones. Passive noise control is simulated by applying a notch filter to the wind tunnel data. Cases with both techniques are included to evaluate hybrid active-passive noise control. The results for EPNL values are approximate because the original source data was limited in bandwidth and in sideline angular coverage. The main emphasis is on comparisons between the baseline and configurations with simulated noise control measures.
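
    The record does not give the notch filter design used to simulate passive control; a minimal stand-in applying a standard IIR notch to a synthetic tone-plus-noise record might look like this (center frequency, Q, and sample rate are hypothetical).

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 50_000.0        # sample rate of the wind-tunnel time series (assumed)
f_tone = 4_300.0     # rotor-stator interaction tone to attenuate (hypothetical)
q_factor = 30.0      # notch sharpness (hypothetical)

b, a = iirnotch(f_tone, q_factor, fs)

# Synthetic stand-in for a microphone record: broadband noise plus one tone.
t = np.arange(0, 0.2, 1 / fs)
signal = np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * f_tone * t)

filtered = filtfilt(b, a, signal)    # zero-phase filtering of the "measured" data
print(signal.std(), filtered.std())  # tonal energy removed, broadband retained
```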

  2. Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?

    PubMed

    Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle

    2018-03-21

    The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual mCPR tracking estimates for FP2020. © Magnani et al.
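
    The abstract does not give the exact error decomposition; one standard way to split mean square tracking error into level bias, slope bias, and residual variance, which may differ in detail from the paper's formulation, is sketched below with hypothetical numbers.

```python
import numpy as np

def tracking_error_components(gold, service):
    """Decompose mean square tracking error into level bias, slope bias,
    and residual variance.

    gold    : annual "gold standard" mCPR estimates (UN World Contraceptive Use)
    service : mCPR proxy derived from a service-statistics data element
    With e = service - gold and b the least-squares slope of service on gold,
        MSE = (mean e)^2 + (b - 1)^2 * var(gold) + residual variance.
    This is one standard split, not necessarily the paper's exact method.
    """
    gold = np.asarray(gold, float)
    service = np.asarray(service, float)
    err = service - gold
    b = np.cov(gold, service, bias=True)[0, 1] / np.var(gold)
    level_bias_sq = err.mean() ** 2
    slope_bias_sq = (b - 1.0) ** 2 * np.var(gold)
    residual_var = np.var(err) - slope_bias_sq
    return level_bias_sq, slope_bias_sq, residual_var

gold = [20.1, 21.0, 22.2, 23.1, 24.0]   # hypothetical mCPR (%), gold standard
svc = [23.5, 24.2, 25.8, 26.4, 27.6]    # hypothetical proxy from commodities data
print(tracking_error_components(gold, svc))
```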

  3. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
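
    As a rough illustration of the multi-level structure (not the paper's adaptive scheme), the sketch below estimates the mean of a simple decay process by combining a coarse tau-leap base estimator with coupled fine/coarse correction terms that share Poisson variates, in the spirit of the Anderson and Higham coupling; the reaction, parameters, and level settings are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap(x0, c, tau, t_end):
    """Plain tau-leap for the decay reaction A -> 0 (propensity c*x)."""
    x = x0
    for _ in range(round(t_end / tau)):
        x = max(x - rng.poisson(c * x * tau), 0)
    return x

def coupled_pair(x0, c, tau_fine, m_ratio, t_end):
    """One coupled (fine, coarse) tau-leap pair sharing Poisson variates."""
    xf = xc = x0
    tau_coarse = m_ratio * tau_fine
    for _ in range(round(t_end / tau_coarse)):
        a_c = c * xc                   # coarse propensity frozen over the coarse step
        coarse_firings = 0
        for _ in range(m_ratio):       # fine substeps inside one coarse step
            a_f = c * xf
            shared = min(a_f, a_c)
            n_shared = rng.poisson(shared * tau_fine)
            n_fine_only = rng.poisson((a_f - shared) * tau_fine)
            n_coarse_only = rng.poisson((a_c - shared) * tau_fine)
            xf = max(xf - (n_shared + n_fine_only), 0)
            coarse_firings += n_shared + n_coarse_only
        xc = max(xc - coarse_firings, 0)
    return xf, xc

# Multi-level estimate of E[X(T)] for X(0) = 1000, c = 1, T = 1, refinement M = 2.
x0, c, T, M = 1000, 1.0, 1.0, 2
tau0, n_levels, n_paths = 0.1, 4, 2000

estimate = np.mean([tau_leap(x0, c, tau0, T) for _ in range(n_paths)])  # base level
for lev in range(1, n_levels + 1):
    tau_f = tau0 / M ** lev
    diffs = []
    for _ in range(n_paths):
        fine, coarse = coupled_pair(x0, c, tau_f, M, T)
        diffs.append(fine - coarse)
    estimate += np.mean(diffs)         # bias-correction term for this level

print(estimate, "vs exact", x0 * np.exp(-c * T))
```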

  4. Investigations on the performance of chevron type plate heat exchangers

    NASA Astrophysics Data System (ADS)

    Dutta, Oruganti Yaga; Nageswara Rao, B.

    2018-01-01

    This paper presents empirical relations for chevron type plate heat exchangers (PHEs) and demonstrates their validity through comparison with test data for PHEs. In order to examine the performance of PHEs, the pressure drop (ΔP), the overall heat transfer coefficient (U_m) and the effectiveness (ε) are estimated by considering the properties of the plate material and working fluid, the number of plates (N_t) and the chevron angle (β). It is a known fact that a large plate surface area provides a higher rate of heat transfer (Q̇) and thereby higher effectiveness (ε). However, there is a possibility of achieving the required performance by increasing the number of plates without altering the plate dimensions, which avoids a new design of the system. Application of Taguchi's design of experiments is examined to reduce the number of experiments required; this is demonstrated by setting the levels for the parameters and comparing the test data with the estimated output responses.

  5. Dynamic estimator for determining operating conditions in an internal combustion engine

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
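
    The patent's engine model is not described in the abstract; the skeleton below only illustrates the store-retrieve-update recursion across combustion cycles, with a placeholder update law and hypothetical actuator names.

```python
class CycleEstimator:
    """Cycle-to-cycle estimator skeleton: the estimate for cycle k is formed
    from the stored estimate for cycle k-1 and the actuator settings applied
    during cycle k-1. The update law here is a placeholder first-order blend,
    not the patent's actual engine model."""

    def __init__(self, initial_estimate):
        self.memory = initial_estimate      # stored estimated performance variable

    def update(self, actuator_settings):
        prev = self.memory                  # retrieve previous-cycle estimate
        # Placeholder dynamics: blend the previous estimate with a hypothetical
        # injection-timing actuator command applied during the previous cycle.
        alpha = 0.6
        current = alpha * prev + (1 - alpha) * actuator_settings["injection_timing_deg"]
        self.memory = current               # store for the next cycle's estimation
        return current

est = CycleEstimator(initial_estimate=8.0)
for timing in [10.0, 9.5, 11.0, 10.2]:
    print(est.update({"injection_timing_deg": timing}))
```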

  6. Estimating diabetes prevalence by small area in England.

    PubMed

    Congdon, Peter

    2006-03-01

    Diabetes risk is linked to both deprivation and ethnicity, and so prevalence will vary considerably between areas. Prevalence differences may partly account for geographic variation in health performance indicators for diabetes, which are based on age standardized hospitalization or operation rates. A positive correlation between prevalence and health outcomes indicates that the latter are not measuring only performance. A regression analysis of prevalence rates according to age, sex and ethnicity from the Health Survey for England (HSE) is undertaken and used (together with census data) to estimate diabetes prevalence for 354 English local authorities and 8000 smaller areas (electoral wards). An adjustment for social factors is based on a prevalence gradient over area-deprivation quintiles. A Bayesian estimation approach is used allowing simple inclusion of evidence on prevalence from other or historical sources. The estimated prevalent population in England is 1.5 million (188 000 type 1 and 1.341 million type 2). At strategic health authority (StHA) level, prevalence varies from 2.4 per cent (Thames Valley) to 4 per cent (North East London). The prevalence estimates are used to assess variations between local authorities in adverse hospitalization indicators for diabetics and to assess the relationship between diabetes-related mortality and prevalence. In particular, rates of diabetic ketoacidosis (DKA) and coma are positively correlated with prevalence, while diabetic amputation rates are not. The methodology developed is applicable to developing small-area-prevalence estimates for a range of chronic diseases, when health surveys assess prevalence by demographic categories. In the application to diabetes prevalence, there is evidence that performance indicators as currently calculated are not corrected for prevalence.

  7. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  8. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
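
    The estimator itself is not reproduced in these records; the role of non-arrival events can be illustrated with a simplified per-shot Poisson model in which the detector only records whether any photon arrived, so zero-photon shots enter the likelihood and multi-photon arrivals are not double-counted. This is an assumption-laden sketch, not the authors' implementation.

```python
import numpy as np

def mle_mean_photons(n_shots, n_detections):
    """ML estimate of the mean Thomson photon count per laser shot when each
    shot is recorded only as "photon seen / not seen".

    Per-shot counts are modeled as Poisson(lam); a shot registers a detection
    with probability 1 - exp(-lam). Maximizing the resulting binomial
    likelihood gives lam = -ln(1 - k/n), so the non-arrival shots carry the
    information. Illustrative model only.
    """
    k = n_detections
    if k >= n_shots:
        raise ValueError("need at least one non-arrival shot")
    return -np.log(1.0 - k / n_shots)

# Simulated low-signal run: true mean of 0.15 photons per shot over 20,000 shots.
rng = np.random.default_rng(1)
true_lam = 0.15
detections = (rng.poisson(true_lam, size=20_000) > 0).sum()
print(mle_mean_photons(20_000, detections))   # close to 0.15
```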

  9. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    PubMed

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets of two more different noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than that of Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates that separate R&C estimation with modeling of RM before CM estimation (Method 3) is the best option for accurate estimation of dual R&C motion in the clinical situation. © 2018 American Association of Physicists in Medicine.

  10. Estimating the risks of cancer mortality and genetic defects resulting from exposures to low levels of ionizing radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buhl, T.E.; Hansen, W.R.

    1984-05-01

    Estimators for calculating the risk of cancer and genetic disorders induced by exposure to ionizing radiation have been recommended by the US National Academy of Sciences Committee on the Biological Effects of Ionizing Radiations, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Committee on Radiological Protection. These groups have also considered the risks of somatic effects other than cancer. The US National Council on Radiation Protection and Measurements has discussed risk estimate procedures for radiation-induced health effects. The recommendations of these national and international advisory committees are summarized and compared in this report. Based on this review, two procedures for risk estimation are presented for use in radiological assessments performed by the US Department of Energy under the National Environmental Policy Act of 1969 (NEPA). In the first procedure, age- and sex-averaged risk estimators calculated with US average demographic statistics would be used with estimates of radiation dose to calculate the projected risk of cancer and genetic disorders that would result from the operation being reviewed under NEPA. If more site-specific risk estimators are needed, and the demographic information is available, a second procedure is described that would involve direct calculation of the risk estimators using recommended risk-rate factors. The computer program REPCAL has been written to perform this calculation and is described in this report. 25 references, 16 tables.

  11. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.

  12. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.

    PubMed

    Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G

    2018-01-01

    Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.

  13. WREP: A wavelet-based technique for extracting the red edge position from reflectance spectra for estimating leaf and canopy chlorophyll contents of cereal crops

    NASA Astrophysics Data System (ADS)

    Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing

    2017-07-01

    Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of the continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC as compared to traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted with Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m² (R² = 0.73, RMSE = 0.26 g/m²). This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content between growth seasons of cereal crops. The new REP extraction technique provides new insight into the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
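
    A numpy-only sketch of the zero-crossing idea is given below: the spectrum is convolved with a Mexican-hat mother wavelet at a single scale and the REP is taken as the zero-crossing wavelength inside 680-760 nm. The choice of mother wavelet, scale, and interpolation are assumptions rather than the paper's exact settings.

```python
import numpy as np

def mexican_hat(width, n_points):
    """Discrete Mexican-hat (Ricker) wavelet, used here as an assumed mother wavelet."""
    t = np.linspace(-4 * width, 4 * width, n_points)
    return (1 - (t / width) ** 2) * np.exp(-t ** 2 / (2 * width ** 2))

def wavelet_rep(wavelengths, reflectance, scale_nm=15.0):
    """REP as the zero-crossing wavelength of the wavelet-transformed spectrum
    inside 680-760 nm (a sketch of the WREP idea, one scale only)."""
    step = wavelengths[1] - wavelengths[0]
    wavelet = mexican_hat(scale_nm / step, int(8 * scale_nm / step) | 1)
    coeffs = np.convolve(reflectance, wavelet, mode="same")
    band = (wavelengths >= 680) & (wavelengths <= 760)
    wl, cf = wavelengths[band], coeffs[band]
    sign_change = np.where(np.diff(np.sign(cf)) != 0)[0]
    if sign_change.size == 0:
        return np.nan
    i = sign_change[0]                  # linear interpolation at the crossing
    return wl[i] - cf[i] * (wl[i + 1] - wl[i]) / (cf[i + 1] - cf[i])

# Synthetic sigmoid red edge with its inflection near 715 nm, for illustration.
wl = np.arange(400.0, 1000.0, 1.0)
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 715.0) / 12.0))
print(wavelet_rep(wl, refl))
```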

  14. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    PubMed Central

    Xu, Nan; Spreng, R. Nathan; Doerschuk, Peter C.

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain. PMID:28559793

  15. Estimation of genetic parameters and breeding values across challenged environments to select for robust pigs.

    PubMed

    Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A

    2015-04-01

    Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments resulted in a sharp decline in productivity as the level of challenge increased. In contrast, selection using the random regression approach resulted in limited change in productivity with increasing levels of challenge. Hence, we demonstrate that the use of a quantitative measure of environmental CL and a random regression approach can be comprehensively combined for genetic selection of pigs with enhanced ability to maintain high productivity in harsh environments.

  16. Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.

    PubMed

    Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip

    2015-11-01

    The carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASC), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain, both with low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
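
    The optimized dispersion parameters themselves are not listed in the abstract; the Gaussian plume expression they feed into is standard, and a sketch with power-law sigma_y and sigma_z whose coefficients are placeholders (the quantities the GA would tune against the CFD reference profiles) is shown below.

```python
import numpy as np

def gaussian_plume(x, y, z, q_kg_s, u_m_s, release_height_m,
                   sigma_params=(0.22, 0.90, 0.12, 0.85)):
    """Ground-reflected Gaussian plume concentration (kg/m3).

    sigma_y = a * x**b and sigma_z = c * x**d; the coefficients here are
    placeholders standing in for the optimized, stability-class-dependent
    dispersion parameters derived in the paper.
    """
    a, b, c, d = sigma_params
    sy, sz = a * x ** b, c * x ** d
    lateral = np.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (np.exp(-(z - release_height_m) ** 2 / (2 * sz ** 2)) +
                np.exp(-(z + release_height_m) ** 2 / (2 * sz ** 2)))
    return q_kg_s / (2 * np.pi * u_m_s * sy * sz) * lateral * vertical

# Ground-level centreline CO2 concentration 500 m downwind of a 10 kg/s release.
print(gaussian_plume(x=500.0, y=0.0, z=0.0, q_kg_s=10.0, u_m_s=4.0, release_height_m=2.0))
```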

  17. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  18. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state-space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretic optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  19. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    NASA Astrophysics Data System (ADS)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.

  20. Simultaneous assessment of blood coagulation and hematocrit levels in dielectric blood coagulometry.

    PubMed

    Hayashi, Yoshihito; Brun, Marc-Aurèle; Machida, Kenzo; Lee, Seungmin; Murata, Aya; Omori, Shinji; Uchiyama, Hidetoshi; Inoue, Yoshinori; Kudo, Toshifumi; Toyofuku, Takahiro; Nagasawa, Masayuki; Uchimura, Isao; Nakamura, Tomomasa; Muneta, Takeshi

    2017-01-01

    In a whole blood coagulation test, the concentration of any in vitro diagnostic agent in plasma is dependent on the hematocrit level but its impact on the test result is unknown. The aim of this work was to clarify the effects of reagent concentration, particularly Ca²⁺, and to find a method for hematocrit estimation compatible with the coagulation test. Whole blood coagulation tests by dielectric blood coagulometry (DBCM) and rotational thromboelastometry were performed with various concentrations of Ca²⁺ or on samples with different hematocrit levels. DBCM data from a previous clinical study of patients who underwent total knee arthroplasty were re-analyzed. Clear Ca²⁺ concentration and hematocrit level dependences of the characteristic times of blood coagulation were observed. Rouleau formation made hematocrit estimation difficult in DBCM, but use of permittivity at around 3 MHz made it possible. The re-analyzed clinical data showed a good correlation between permittivity at 3 MHz and hematocrit level (R² = 0.83). Changes in the hematocrit level may affect whole blood coagulation tests. DBCM has the potential to overcome this effect with some automated correction using results from simultaneous evaluations of the hematocrit level and blood coagulability.
