Sample records for absolute deviation scad

  1. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with a nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm that accommodates nonconvex regularization, namely SCAD-AMP, and analytically show that its stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of an RS region for a nonconvex penalty is a significant advantage, indicating a region where the optimization landscape is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of …
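    The SCAD penalty referenced throughout these records (Fan and Li, 2001) is piecewise quadratic. A minimal, dependency-free sketch of the penalty function (generic parameter names, not tied to any implementation in the records above):

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD penalty value at coefficient t; requires a > 2 (a = 3.7 is customary)."""
    t = abs(t)
    if t <= lam:
        return lam * t  # behaves like the L1 penalty near zero
    if t <= a * lam:
        # quadratic transition that gradually relaxes the shrinkage
        return (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return lam ** 2 * (a + 1) / 2  # constant: large coefficients incur no extra penalty
```

    Unlike the LASSO, the penalty flattens to a constant beyond a·λ, which is the source of the reduced estimation bias and the oracle properties cited in several abstracts below.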

  2. A novel collaborative representation and SCAD based classification method for fibrosis and inflammatory activity analysis of chronic hepatitis C

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Chen, Tingting; Li, Yan; Zhu, Nenghui; Qiu, Xuan

    2018-03-01

    In order to analyze the fibrosis stage and inflammatory activity grade of chronic hepatitis C, a novel classification method based on collaborative representation (CR) with a smoothly clipped absolute deviation (SCAD) penalty term, called the CR-SCAD classifier, is proposed for pattern recognition. An auto-grading system based on the CR-SCAD classifier is then introduced for the prediction of fibrosis stage and inflammatory activity grade of chronic hepatitis C. The proposed method has been tested on 123 clinical cases of chronic hepatitis C based on serological indexes. Experimental results show that the proposed method outperforms state-of-the-art baselines for the classification of fibrosis stage and inflammatory activity grade of chronic hepatitis C.

  3. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
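    The fold-to-fold variability described above is easy to reproduce in a toy setting. The sketch below is a hypothetical illustration, not the authors' experiment: it uses a many-means model with weak sparse signals and soft-thresholding (the closed-form lasso estimate for a mean) so no optimization library is needed, and repeats 10-fold cross-validation under fresh random fold assignments of the same data:

```python
import random

def soft(x, lam):
    # soft-thresholding: closed-form lasso estimate for a single mean
    if x > lam:  return x - lam
    if x < -lam: return x + lam
    return 0.0

def cv_selected_count(data, lams, m=10, rng=random):
    # data: p lists, each holding n replicate observations of one mean
    n = len(data[0])
    idx = list(range(n)); rng.shuffle(idx)
    folds = [idx[k::m] for k in range(m)]
    best_lam, best_err = None, float("inf")
    for lam in lams:
        err = 0.0
        for f in folds:
            train = [i for i in idx if i not in f]
            for obs in data:
                mu = sum(obs[i] for i in train) / len(train)
                est = soft(mu, lam)
                err += sum((obs[i] - est) ** 2 for i in f)
        if err < best_err:
            best_err, best_lam = err, lam
    full_means = [sum(obs) / n for obs in data]
    return sum(1 for mu in full_means if soft(mu, best_lam) != 0.0)

random.seed(1)
p, n = 50, 30
truth = [0.5] * 5 + [0.0] * 45          # weak, sparse signals
data = [[mu + random.gauss(0.0, 1.0) for _ in range(n)] for mu in truth]
lams = [0.05 * k for k in range(1, 11)]
# with weak signals the selected count can change from repetition to repetition,
# even though the data never change -- only the fold assignment does
counts = [cv_selected_count(data, lams) for _ in range(20)]
```

    The point of the abstract, in this toy form: with small signals, the number of selected variables at the CV-chosen threshold is itself a random variable driven purely by the fold assignment.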

  4. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  5. Sparse Logistic Regression for Diagnosis of Liver Fibrosis in Rat by Using SCAD-Penalized Likelihood

    PubMed Central

    Yan, Fang-Rong; Lin, Jin-Guan; Liu, Yu

    2011-01-01

    The objective of the present study is to find out the quantitative relationship between the progression of liver fibrosis and the levels of certain serum markers using a mathematical model. We provide a sparse logistic regression using the smoothly clipped absolute deviation (SCAD) penalty function to diagnose liver fibrosis in rats. Not only does it give a sparse solution with high accuracy, it also provides the users with precise classification probabilities along with the class information. In both the simulated case and the experimental case, the proposed method is comparable to stepwise linear discriminant analysis (SLDA) and sparse logistic regression with the least absolute shrinkage and selection operator (LASSO) penalty, as assessed by receiver operating characteristic (ROC) analysis with Bayesian bootstrap estimation of the area under the curve (AUC) and diagnostic sensitivity for the selected variables. Results show that the new approach provides a good correlation between the serum marker levels and the liver fibrosis induced by thioacetamide (TAA) in rats. Meanwhile, this approach might also be used in predicting the development of liver cirrhosis. PMID:21716672

  6. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    PubMed

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis generally creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts make them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.

  7. A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus

    NASA Astrophysics Data System (ADS)

    Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.

    2017-11-01

    The Selar crumenophthalmus, with the English name big-eyed scad and known locally as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with forecasting catch volumes of the big-eyed scad for commercial consumption. The data used are quarterly catch volumes of the big-eyed scad from 2002 to the first quarter of 2017. These data are available from the open stat database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze, and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model, and a hybrid model combining ARIMA and ANN were developed to forecast catch volumes of the big-eyed scad. Statistical errors such as the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid ARIMA-ANN model, ARIMA(2,1,2) with an ANN(6:3:1) architecture, is the most suitable model to forecast the catch volumes of the big-eyed scad for the next few quarters.

  8. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498

  9. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization over the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases with different levels of heterogeneity/nonlinearity. Beside the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding; it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
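    The hard, soft, and SCAD rules compared above have simple elementwise forms; a sketch applied to a single covariance entry (the paper's exact lasso variant is not reproduced here, and the SCAD rule below is the standard Fan-Li thresholding form with a = 3.7):

```python
import math

def hard(z, lam):
    # keep the entry only if it exceeds the threshold
    return z if abs(z) > lam else 0.0

def soft(z, lam):
    # shrink toward zero by lam (the classic soft rule)
    return math.copysign(max(abs(z) - lam, 0.0), z)

def scad_threshold(z, lam, a=3.7):
    # SCAD rule: soft near zero, linear interpolation, identity for large z
    az = abs(z)
    if az <= 2 * lam:
        return soft(z, lam)
    if az <= a * lam:
        return ((a - 1) * z - math.copysign(a * lam, z)) / (a - 2)
    return z
```

    Applied entrywise to the forecast covariance, hard thresholding zeroes small (presumably spurious) correlations outright, soft thresholding also shrinks the survivors, and SCAD leaves large entries untouched, which is why it tends to bias the retained correlations least.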

  11. Robust variable selection method for nonparametric differential equation models with application to nonlinear dynamic gene regulatory network analysis.

    PubMed

    Lu, Tao

    2016-01-01

    A gene regulatory network (GRN) describes the interactions between genes, and models of GRNs seek to capture gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for nonlinear dynamic GRNs. Specifically, we address the following questions simultaneously: (i) extract information from noisy time course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of experimental duration. We illustrate the usefulness of the model and associated statistical methods through a simulation and a real application example.

  12. Diurnal Transcriptome and Gene Network Represented through Sparse Modeling in Brachypodium distachyon.

    PubMed

    Koda, Satoru; Onda, Yoshihiko; Matsui, Hidetoshi; Takahagi, Kotaro; Yamaguchi-Uehara, Yukiko; Shimizu, Minami; Inoue, Komaki; Yoshida, Takuhiro; Sakurai, Tetsuya; Honda, Hiroshi; Eguchi, Shinto; Nishii, Ryuei; Mochida, Keiichi

    2017-01-01

    We report the comprehensive identification of periodic genes and their network inference, based on a gene co-expression analysis and an Auto-Regressive eXogenous (ARX) model with a group smoothly clipped absolute deviation (SCAD) method, using a time-series transcriptome dataset in a model grass, Brachypodium distachyon. To reveal the diurnal changes in the transcriptome of B. distachyon, we performed RNA-seq analysis of its leaves sampled through a diurnal cycle of over 48 h at 4 h intervals with three biological replicates, and identified 3,621 periodic genes through our wavelet analysis. These expression data make it feasible to infer network sparsity based on ARX models. We found that genes involved in biological processes such as transcriptional regulation, protein degradation, post-transcriptional modification, and photosynthesis are significantly enriched among the periodic genes, suggesting that these processes might be regulated by circadian rhythm in B. distachyon. On the basis of the time-series expression patterns of the periodic genes, we constructed a chronological gene co-expression network and identified putative transcription-factor-encoding genes that might be involved in the time-specific regulatory transcriptional network. Moreover, we inferred a transcriptional network composed of the periodic genes in B. distachyon, aiming to identify genes associated with other genes through variable selection by grouping time points for each gene. Based on the ARX model with group SCAD regularization using our time-series expression datasets of the periodic genes, we constructed gene networks and found that they exhibit a typical scale-free structure. Our findings demonstrate that the diurnal changes in the transcriptome in B. distachyon leaves have a sparse network structure, revealing the spatiotemporal gene regulatory network over the cyclic phase transitions in B. distachyon diurnal growth.

  13. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation (SCAD) penalty and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190

  14. Polyarterial clustered recurrence of cervical artery dissection seems to be the rule.

    PubMed

    Dittrich, R; Nassenstein, I; Bachmann, R; Maintz, D; Nabavi, D G; Heindel, W; Kuhlenbäumer, G; Ringelstein, E B

    2007-07-10

    Spontaneous cervical artery dissection (sCAD) in multiple neck arteries (polyarterial sCAD) is traditionally thought to represent a monophasic disorder, suggesting nearly simultaneous occurrence of the various intramural hematomas. Its incidence ranges from 10 to 28%. The recurrence rate of sCAD in general, over up to 8.6 years, has been reported to be 0 to 8%. Our objective was to analyze more precisely the temporal and spatial neuroangiologic course of sCAD, with particular focus on polyarterial manifestation. We prospectively investigated 36 consecutive patients with sCAD, all proven by MR imaging at 1.5 T. We reinvestigated these patients with two follow-up MR examinations. The first follow-up MR examination was performed after a mean of 16 +/- 13 days, and the last MR study after a mean of 7 +/- 2 months after the initial diagnosis. Systematic data evaluation of the 36 patients revealed the following phenomena of sCAD: 1) seemingly simultaneous polyarterial sCAD on the initial MRI scan (n = 2; 6%); 2) recurrent sCAD in one or several initially uninvolved cervical arteries during follow-up (n = 9; 25%). The latter sCADs occurred as an early polyarterial recurrent event within 1 to 4 weeks in 7 patients (19%), and as a delayed polyarterial recurrent event within 5 to 7 months in 2 patients (6%). From a spatial perspective, sCAD recurrence took place in one additional cervical artery in 5 patients (14%), or in more than one previously uninvolved cervical artery in 4 patients (11%). All patients except one with sCAD recurrence remained asymptomatic or had local symptoms only. One patient experienced a significant clinical deterioration due to ischemic stroke with acute impairment of cerebral hemodynamics. During follow-up, patients received transient oral anticoagulation for at least 6 months with subsequent acetylsalicylic acid (ASA).
More often than previously thought, the recurrence of spontaneous cervical artery dissection (sCAD) involves multiple cervical arteries in sequence. sCAD recurrence frequently appears to cluster within the first 2 months after the index event, rather than occurring steadily over time. The prognosis of recurring sCAD appears benign, particularly in patients already receiving antithrombotic therapy.

  15. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    PubMed

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of the SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimum more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data.
We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM classification algorithms, as well as fixed grid and interval search for finding appropriate tuning parameters, were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets.
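The combined penalty itself is straightforward: the SCAD term supplies sparsity while the ridge term stabilizes the solution for correlated features. A sketch with generic parameter names (an illustration of the idea, not the 'penalizedSVM' API):

```python
def scad(t, lam, a=3.7):
    # SCAD penalty (Fan and Li, 2001); a > 2
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return lam ** 2 * (a + 1) / 2

def elastic_scad(t, lam1, lam2, a=3.7):
    # Elastic SCAD: SCAD sparsity plus a ridge term, per the record above
    return scad(t, lam1, a) + lam2 * t ** 2
```

Setting lam2 = 0 recovers plain SCAD, while a large lam2 pushes the penalty toward ridge behavior; the two tuning parameters are what the interval search described above has to optimize jointly.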

  16. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data

    PubMed Central

    2011-01-01

    Background Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of the SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimum more rapidly and more precisely. Results Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. Conclusions The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data.
We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM classification algorithms, as well as fixed grid and interval search for finding appropriate tuning parameters, were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets. PMID:21554689

  17. Spontaneous coronary artery dissection: association with predisposing arteriopathies and precipitating stressors and cardiovascular outcomes.

    PubMed

    Saw, Jacqueline; Aymong, Eve; Sedlak, Tara; Buller, Christopher E; Starovoytov, Andrew; Ricci, Donald; Robinson, Simon; Vuurmans, Tycho; Gao, Min; Humphries, Karin; Mancini, G B John

    2014-10-01

    Nonatherosclerotic spontaneous coronary artery dissection (NA-SCAD) is underdiagnosed and an important cause of myocardial infarction in young women. The frequency of predisposing and precipitating conditions and the cardiovascular outcomes remain poorly described. Patients with NA-SCAD prospectively evaluated (retrospectively or prospectively identified) at Vancouver General Hospital were included. Angiographic SCAD diagnosis was confirmed by 2 experienced interventional cardiologists and categorized as type 1 (multiple lumen), type 2 (diffuse stenosis), or type 3 (mimicking atherosclerosis). Fibromuscular dysplasia screening of the renal, iliac, and cerebrovascular arteries was performed with angiography or computed tomographic angiography/MR angiography. Baseline, predisposing, and precipitating conditions, and angiographic, revascularization, in-hospital, and long-term events were recorded. We prospectively evaluated 168 patients with NA-SCAD. Average age was 52.1±9.2 years, and 92.3% were women (62.3% postmenopausal). All presented with myocardial infarction. ECG showed ST-segment elevation in 26.1%, and 3.6% had ventricular tachycardia/ventricular fibrillation arrest. Fibromuscular dysplasia was diagnosed in 72.0%. A precipitating emotional or physical stress was reported in 56.5%. The majority had type 2 angiographic SCAD (67.0%); only 29.1% had type 1, and 3.9% had type 3. The majority (134/168) were initially treated conservatively. Overall, 6 of 168 patients had coronary artery bypass surgery and 33 of 168 had percutaneous coronary intervention in-hospital. Of those treated conservatively (n=134), 3 required revascularization for SCAD extension, and all 79 who had a repeat angiogram ≥26 days later showed spontaneous healing. Two-year major adverse cardiac event rates were 16.9% (retrospectively identified group) and 10.4% (prospectively identified group). Recurrent SCAD occurred in 13.1%. The majority of patients with NA-SCAD had fibromuscular dysplasia and type 2 angiographic SCAD. 
Conservative therapy was associated with spontaneous healing. NA-SCAD survivors are at risk for recurrent cardiovascular events, including recurrent SCAD. © 2014 American Heart Association, Inc.

  18. Spontaneous Coronary Artery Dissection: Current State of the Science

    PubMed Central

    Hayes, Sharonne N.; Kim, Esther S.H.; Saw, Jacqueline; Adlam, David; Arslanian-Engoren, Cynthia; Economy, Katherine E.; Ganesh, Santhi K.; Gulati, Rajiv; Lindsay, Mark E.; Mieres, Jennifer H.; Naderi, Sahar; Shah, Svati; Thaler, David E.; Tweet, Marysia S.; Wood, Malissa J.

    2018-01-01

    Spontaneous coronary artery dissection (SCAD) has emerged as an important cause of acute coronary syndrome, myocardial infarction, and sudden death, particularly among young women and individuals with few conventional atherosclerotic risk factors. Patient-initiated research has spurred increased awareness of SCAD, and improved diagnostic capabilities and findings from large case series have led to changes in approaches to initial and long-term management and increasing evidence that SCAD not only is more common than previously believed but also must be evaluated and treated differently from atherosclerotic myocardial infarction. High rates of recurrent SCAD; its association with female sex, pregnancy, and physical and emotional stress triggers; and concurrent systemic arteriopathies, particularly fibromuscular dysplasia, highlight the differences in clinical characteristics of SCAD compared with atherosclerotic disease. Recent insights into the causes of, clinical course of, treatment options for, outcomes of, and associated conditions of SCAD and the many persistent knowledge gaps are presented. PMID:29472380

  19. Amplified and persistent immune responses generated by single-cycle replicating adenovirus vaccines.

    PubMed

    Crosby, Catherine M; Nehete, Pramod; Sastry, K Jagannadha; Barry, Michael A

    2015-01-01

    Replication-competent adenoviral (RC-Ad) vectors generate exceptionally strong gene-based vaccine responses by amplifying the antigen transgenes they carry. While they are potent, they also risk causing adenovirus infections. More common replication-defective Ad (RD-Ad) vectors with deletions of E1 avoid this risk but do not replicate their transgene and generate markedly weaker vaccine responses. To amplify vaccine transgenes while avoiding production of infectious progeny viruses, we engineered "single-cycle" adenovirus (SC-Ad) vectors by deleting the gene for the IIIa capsid cement protein of lower-seroprevalence adenovirus serotype 6. In mouse, human, hamster, and macaque cells, SC-Ad6 still replicated its genome but prevented genome packaging and virion maturation. When used for mucosal intranasal immunization of Syrian hamsters, both SC-Ad and RC-Ad expressed transgenes at levels hundreds of times higher than those of RD-Ad. Surprisingly, SC-Ad, but not RC-Ad, generated higher levels of transgene-specific antibody than RD-Ad, which notably climbed in serum and vaginal wash samples over 12 weeks after a single mucosal immunization. When RD-Ad and SC-Ad were tested by a single sublingual immunization in rhesus macaques, SC-Ad generated higher gamma interferon (IFN-γ) responses and higher transgene-specific serum antibody levels. These data suggest that SC-Ad vectors may have utility as mucosal vaccines. This work illustrates the utility of our recently developed single-cycle adenovirus (SC-Ad6) vector as a new vaccine platform. Replication-defective (RD-Ad6) vectors produce low levels of transgene protein, which leads to minimal antibody responses in vivo. This study shows that replicating SC-Ad6 produces higher levels of luciferase and induces higher levels of green fluorescent protein (GFP)-specific antibodies than RD-Ad6 in a permissive Syrian hamster model. 
Surprisingly, although a replication-competent (RC-Ad6) vector produces more luciferase than SC-Ad6, it does not elicit comparable levels of anti-GFP antibodies in permissive hamsters. When tested in the larger rhesus macaque model, SC-Ad6 induces higher transgene-specific antibody and T cell responses. Together, these data suggest that SC-Ad6 could be a more effective platform for developing vaccines against more relevant antigens. This could be especially beneficial for developing vaccines for pathogens for which traditional replication-defective adenovirus vectors have not been effective. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  20. Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2013-01-01

    Recent studies have demonstrated the theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation (SCAD) and minimax concave (MCP) penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to existing algorithms that use a local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss but does not use any approximation to the penalty. This strategy makes it possible to avoid computing a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish the theoretical convergence properties of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
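
The SCAD penalty admits a closed-form univariate thresholding operator (a three-piece rule: soft-thresholding for small inputs, linearly interpolated shrinkage in the middle, no shrinkage for large inputs), which is the basic building block of coordinate-descent solvers such as the MMCD described above. The sketch below is illustrative only and is not the authors' MMCD implementation: the function names are hypothetical, the setting is least squares rather than logistic regression, and the columns of X are assumed standardized to unit squared norm.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding: argmin_b 0.5*(z - b)**2 + SCAD(b; lam, a)."""
    sign = 1.0 if z >= 0 else -1.0
    az = abs(z)
    if az <= 2 * lam:                      # soft-thresholding region
        return sign * max(az - lam, 0.0)
    elif az <= a * lam:                    # linearly interpolated shrinkage
        return ((a - 1) * z - sign * a * lam) / (a - 2)
    else:                                  # no shrinkage: nearly unbiased for large effects
        return z

def scad_coordinate_descent(X, y, lam, a=3.7, n_iter=100):
    """Toy cyclic coordinate descent for SCAD-penalized least squares.

    Assumes each column of X has unit squared norm; illustrative only.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r                        # univariate LS coefficient
            beta[j] = scad_threshold(z, lam, a)
    return beta
```

The non-convexity shows up in the middle branch: unlike the lasso, large coefficients pass through unshrunk, which is the "nearly unbiased" property motivating SCAD.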

  1. Prevalence and predictors of depression and anxiety among survivors of myocardial infarction due to spontaneous coronary artery dissection.

    PubMed

    Liang, Jackson J; Tweet, Marysia S; Hayes, Sarah E; Gulati, Rajiv; Hayes, Sharonne N

    2014-01-01

    Depression and anxiety after myocardial infarction (MI) are common and associated with increased morbidity and mortality. The epidemiology and pathophysiology of MI due to spontaneous coronary artery dissection (SCAD) differs substantially from atherosclerotic MI, and rates of mental health comorbidities after SCAD are unknown. We aimed to determine the prevalence and predictors of depression/anxiety in SCAD survivors. In this cross-sectional study, 158 SCAD survivors (97% women; mean age, 45.5 ± 9.3 years) were screened for depression/anxiety via surveys, including the Patient Health Questionnaire Depression Scale (PHQ-9) and Generalized Anxiety Disorder 7-Item Scale (GAD-7), a mean 3.7 ± 4.7 years after SCAD. Comorbidities and environmental, socioeconomic, and clinical cardiovascular characteristics were obtained from the surveys. Since their initial SCAD MI, 51 (33%) patients had received treatment with medications or counseling for depression and 57 (37%) for anxiety. When surveyed, 46 (31.7%) were taking antidepressant or anxiolytic medications. Overall, mean PHQ-9 (4.1) and GAD-7 (4.7) scores suggested borderline mild depression/anxiety (normal range: 0-4). Younger age was associated with higher PHQ-9 (P = .04) and GAD-7 (P = .02) scores. The 19 (12%) patients with peripartum SCAD had higher mean PHQ-9 (6.7 vs 3.7; P < .0005) and GAD-7 (8.1 vs 4.3; P = .003) scores. Patients treated with percutaneous coronary intervention had lower PHQ-9 (1.5; P = .02) and GAD-7 (2.4; P = .004) scores. Symptoms of depression/anxiety are common in patients with MI due to SCAD, particularly younger women and those with peripartum SCAD. The PHQ-9 and GAD-7 assessments may detect depression/anxiety in SCAD survivors who do not self-report these disorders, suggesting a role for routine screening in these patients.

  2. Spontaneous coronary artery dissection—A review

    PubMed Central

    Yip, Amelia

    2015-01-01

    Spontaneous coronary artery dissection (SCAD) is an infrequent and often missed diagnosis among patients presenting with acute coronary syndrome (ACS). Unfortunately, SCAD can result in significant morbidities such as myocardial ischemia and infarction, ventricular arrhythmias and sudden cardiac death. Lack of angiographic recognition from clinicians is a major factor of under-diagnosis. With the advent of new imaging modalities, particularly with intracoronary imaging, there has been improved diagnosis of SCAD. The aim of this paper is to review the epidemiology, etiology, presentation, diagnosis and management of SCAD. PMID:25774346

  3. Spontaneous Coronary Artery Dissection: Current State of the Science: A Scientific Statement From the American Heart Association.

    PubMed

    Hayes, Sharonne N; Kim, Esther S H; Saw, Jacqueline; Adlam, David; Arslanian-Engoren, Cynthia; Economy, Katherine E; Ganesh, Santhi K; Gulati, Rajiv; Lindsay, Mark E; Mieres, Jennifer H; Naderi, Sahar; Shah, Svati; Thaler, David E; Tweet, Marysia S; Wood, Malissa J

    2018-05-08

    Spontaneous coronary artery dissection (SCAD) has emerged as an important cause of acute coronary syndrome, myocardial infarction, and sudden death, particularly among young women and individuals with few conventional atherosclerotic risk factors. Patient-initiated research has spurred increased awareness of SCAD, and improved diagnostic capabilities and findings from large case series have led to changes in approaches to initial and long-term management and increasing evidence that SCAD not only is more common than previously believed but also must be evaluated and treated differently from atherosclerotic myocardial infarction. High rates of recurrent SCAD; its association with female sex, pregnancy, and physical and emotional stress triggers; and concurrent systemic arteriopathies, particularly fibromuscular dysplasia, highlight the differences in clinical characteristics of SCAD compared with atherosclerotic disease. Recent insights into the causes of, clinical course of, treatment options for, outcomes of, and associated conditions of SCAD and the many persistent knowledge gaps are presented. © 2018 American Heart Association, Inc.

  4. Self-expanding stent for spontaneous coronary artery dissection: a rational choice.

    PubMed

    Mele, Marco; Langialonga, Tommaso; Maggi, Alessandro; Villella, Massimo; Villella, Alessandro

    2016-12-01

    Spontaneous coronary artery dissection (SCAD) is a rare and poorly understood cause of acute coronary syndrome in relatively young patients. The optimal treatment of SCAD is currently uncertain. A conservative approach seems preferable, but under particular conditions an invasive strategy is necessary. The low rate of procedural success, the high risk of procedural complications, and the uncertain mid- and long-term results make the interventional treatment of SCAD a challenge. We report a case of a young male patient presenting with SCAD who was successfully treated with a sirolimus-eluting self-expanding coronary stent. To our knowledge, the use of a self-expanding coronary stent for SCAD has not been described previously, and we discuss the rationale for its possible wider use in clinical practice.

  5. Feasibility study of the Scanning Celestial Attitude Determination System (SCADS) for the Earth Resources Technology Satellite (ERTS)

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The feasibility of using the Scanning Celestial Attitude Determination System (SCADS) during Earth Resources Technology Satellite (ERTS) missions to compute an accurate spacecraft attitude by use of stellar measurements is considered. The spacecraft is local-vertical-stabilized. A heuristic discussion of the SCADS concept is first given. Two concepts are introduced: a passive system which contains no moving parts, and an active system in which the reticle is caused to rotate about the sensor's axis. A quite complete development of the equations of attitude motions is then given. These equations are used to generate the true attitude which in turn is used to compute the transit times of detectable stars and to determine the errors associated with the SCADS attitude. A more complete discussion of the analytical foundation of SCADS concept and its use for the geometries particular to this study, as well as salient design parameters for the passive and active systems are included.

  6. Spontaneous coronary artery dissection as a cause of myocardial infarction

    PubMed Central

    Aksakal, Aytekin; Arslan, Uğur; Yaman, Mehmet; Urumdaş, Mehmet; Ateş, Ahmet Hakan

    2014-01-01

    Spontaneous coronary artery dissection (SCAD) is a rare disease that is usually seen in young women, most often in the left anterior descending coronary artery, and results in events such as sudden cardiac death and acute myocardial infarction. A 70-year-old man was admitted to the emergency department with chest pain that had started 1 h earlier, during a relative's funeral. The initial electrocardiography demonstrated 2 mm ST-segment depression in leads V1-V3, and the patient underwent emergent coronary angiography. SCAD was detected simultaneously in two different coronary arteries [the left anterior descending (LAD) and the left circumflex (LCx) arteries]; the SCAD in the LCx artery caused total occlusion, which resulted in acute myocardial infarction. Successful stenting was performed thereafter for both lesions. In addition to the existence of SCAD simultaneously in two different coronary arteries, the presence of a muscular bridge and SCAD together at the same site of the LAD artery was another interesting point that led us to report this case. PMID:25548620

  7. Replicating Single-Cycle Adenovirus Vectors Generate Amplified Influenza Vaccine Responses.

    PubMed

    Crosby, Catherine M; Matchett, William E; Anguiano-Zarate, Stephanie S; Parks, Christopher A; Weaver, Eric A; Pease, Larry R; Webby, Richard J; Barry, Michael A

    2017-01-15

    Head-to-head comparisons of conventional influenza vaccines with adenovirus (Ad) gene-based vaccines demonstrated that these viral vectors can mediate more potent protection against influenza virus infection in animal models. In most cases, Ad vaccines are engineered to be replication-defective (RD-Ad) vectors. In contrast, replication-competent Ad (RC-Ad) vaccines are markedly more potent but risk causing adenovirus diseases in vaccine recipients and health care workers. To harness antigen gene replication but avoid production of infectious virions, we developed "single-cycle" adenovirus (SC-Ad) vectors. Previous work demonstrated that SC-Ads amplify transgene expression 100-fold and produce markedly stronger and more persistent immune responses than RD-Ad vectors in Syrian hamsters and rhesus macaques. To test them as potential vaccines, we engineered RD and SC versions of adenovirus serotype 6 (Ad6) to express the hemagglutinin (HA) gene from influenza A/PR/8/34 virus. We show here that it takes approximately 33 times less SC-Ad6 than RD-Ad6 to produce equal amounts of HA antigen in vitro. SC-Ad produced markedly higher HA binding and hemagglutination inhibition (HAI) titers than RD-Ad in Syrian hamsters. SC-Ad-vaccinated cotton rats had markedly lower influenza titers than RD-Ad-vaccinated animals after challenge with influenza A/PR/8/34 virus. These data suggest that SC-Ads may be more potent vaccine platforms than conventional RD-Ad vectors and may have utility as "needle-free" mucosal vaccines. Most adenovirus vaccines being tested are replication-defective adenoviruses (RD-Ads). This work describes testing newer single-cycle adenovirus (SC-Ad) vectors that replicate transgenes to amplify protein production and immune responses. We show that SC-Ads generate markedly more influenza virus hemagglutinin protein and require substantially less vector to generate the same immune responses as RD-Ad vectors.
SC-Ads therefore hold promise to be more potent vectors and vaccines than current RD-Ad vectors. Copyright © 2017 Crosby et al.

  8. Population dynamics of the yellowstripe scad (Selaroides leptolepis Cuvier, 1833) and Indian mackerel (Rastrelliger kanagurta Cuvier, 1816) in the Wondama Bay Water, Indonesia

    NASA Astrophysics Data System (ADS)

    Sala, R.; Bawole, R.; Runtuboi, F.; Mudjirahayu; Wopi, I. A.; Budisetiawan, J.; Irwanto

    2018-03-01

    The Wondama Bay water is located within the Cendrawasih Bay National Park and holds potential fishery resources, including pelagic fish such as yellowstripe scad (Selaroides leptolepis Cuvier, 1833) and Indian mackerel (Rastrelliger kanagurta Cuvier, 1816). Yet, information about the population dynamics of these species in the region has been lacking until today, even though fishing activities have been quite intensive and these species have constituted the dominant catches by traditional fishermen using liftnets over the last ten years. Therefore, this study aims to determine specific characteristics of the population dynamics and utilization status of scad and mackerel in the waters of Wondama Bay. Data used in this study were taken from direct observation of the catch of the liftnet fishery. The data were then analysed using FISAT II to estimate the growth parameters, mortality rates, and yield per recruitment. The results showed that yellowstripe scad has positive allometric growth, while Indian mackerel follows isometric growth. The fitted growth models were L(t) = 22(1 - e^(-3.0(t - 0.05))) for yellowstripe scad and L(t) = 27.8(1 - e^(-4.0(t - 0.04))) for Indian mackerel. For yellowstripe scad, natural mortality (M) was 4.19 year^-1, fishing mortality (F) 5.01 year^-1, and total mortality (Z) 9.20 year^-1; for Indian mackerel, M was 4.74 year^-1, F 2.52 year^-1, and Z 7.26 year^-1. Based on these mortality rates, the estimated exploitation rate was 54% for yellowstripe scad and 35% for Indian mackerel. Catch production can be increased without increasing fishing effort (fishing mortality) by increasing the size of fish caught, i.e., Lc/L∞ should be greater than 0.5.
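
The growth curves above are the standard von Bertalanffy model L(t) = L∞(1 - e^(-K(t - t0))), and the exploitation rate follows E = F/Z. As a quick arithmetic check of the figures reported in the abstract (a sketch; all parameter values are taken directly from the text, function names are illustrative):

```python
import math

def von_bertalanffy(t, L_inf, K, t0):
    """Length at age t under the von Bertalanffy growth model."""
    return L_inf * (1.0 - math.exp(-K * (t - t0)))

# Mortality rates reported in the abstract (per year)
scad_F, scad_Z = 5.01, 9.20           # yellowstripe scad
mackerel_F, mackerel_Z = 2.52, 7.26   # Indian mackerel

E_scad = scad_F / scad_Z              # exploitation rate E = F/Z, ~0.54 (54%)
E_mackerel = mackerel_F / mackerel_Z  # ~0.35 (35%)

print(f"L(1) for yellowstripe scad: {von_bertalanffy(1, 22, 3.0, 0.05):.2f}")
print(f"E(scad) = {E_scad:.2f}, E(mackerel) = {E_mackerel:.2f}")
```

Both exploitation rates reproduce the 54% and 35% figures quoted in the abstract, and the mortality components are internally consistent (M + F = Z for both species).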

  9. Spontaneous Coronary Dissection: “Live Flash” Optical Coherence Tomography Guided Angioplasty

    PubMed Central

    Bento, Angela Pimenta; Fernandes, Renato Gil dos Santos Pinto; Neves, David Cintra Henriques Silva; Patrício, Lino Manuel Ribeiro; de Aguiar, José Eduardo Chambel

    2016-01-01

    Optical coherence tomography (OCT) is a light-based imaging modality that shows tremendous potential in the setting of coronary imaging. Spontaneous coronary artery dissection (SCAD) is an infrequent cause of acute coronary syndrome (ACS). The diagnosis of SCAD is made mainly with invasive coronary angiography, although adjunctive imaging modalities such as computed tomography angiography, IVUS, and OCT may increase the diagnostic yield. The authors describe a clinical case of a young woman admitted with the diagnosis of ACS. The ACS was caused by SCAD detected on coronary angiography, and the angioplasty was guided by OCT. OCT use in the setting of SCAD has been described before; the true innovation in this case was the way OCT was used. Guiding the angioplasty with live, short image runs was very useful, as it allowed clear identification of the position of the guidewires at any given moment without the use of prohibitive amounts of contrast. PMID:26989520

  11. The growth and exploitation rate of yellowstripe scad (selaroides leptolepis cuvier, 1833) in the Malacca Strait, Medan Belawan Subdistrict, North Sumatera Province

    NASA Astrophysics Data System (ADS)

    Tambun, J.; Bakti, D.; Desrita

    2018-02-01

    Yellowstripe scad is one of the commodities with important economic value in the Malacca Strait. The abundance of this fish in Indonesian waters has made it one of the main target catches, which can have a negative impact on its population. This study was conducted in Belawan waters from March to May 2017 to describe the length-frequency distribution, estimate the growth parameters, and determine the mortality and exploitation rates, in order to propose an appropriate management model for this fish resource. Around 360 samples of yellowstripe scad were observed, with lengths ranging from 110 to 175 mm. Cohorts were separated by the Bhattacharya method using the FISAT II software. The growth pattern of yellowstripe scad was negative allometric, with a growth coefficient (K) of 1.1 and an asymptotic length (L∞) of 181.65 mm. The total mortality rate (Z) was 4.34 per year, with a natural mortality rate (M) of 1.204 per year and a fishing mortality rate (F) of 3.136 per year, giving an exploitation rate of 0.722. This exploitation rate exceeds the optimum exploitation value of 0.5.
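
The reported rates can be checked for internal consistency: total mortality is the sum of natural and fishing mortality (Z = M + F), and the exploitation rate is E = F/Z, compared against the commonly cited optimum of 0.5. A quick check with the values from the abstract (variable names are illustrative):

```python
M, F = 1.204, 3.136   # natural and fishing mortality (per year), from the abstract
Z = M + F             # total mortality; the abstract reports 4.34 per year
E = F / Z             # exploitation rate; the abstract reports 0.722
E_OPT = 0.5           # commonly cited optimum exploitation level

print(f"Z = {Z:.2f} per year, E = {E:.3f}")
print("exploitation exceeds optimum" if E > E_OPT else "within optimum")
```

The numbers reproduce the abstract's Z = 4.34 and E = 0.722, supporting its conclusion that the stock is exploited beyond the 0.5 optimum.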

  12. Investigation into the development of computer aided design software for space based sensors

    NASA Technical Reports Server (NTRS)

    Pender, C. W.; Clark, W. L.

    1987-01-01

    The described effort is phase one of the development of Computer Aided Design (CAD) software to be used to perform radiometric sensor design. The software package will be referred to as SCAD and is directed toward the preliminary phase of the design of space-based sensor systems. The approach being followed is to develop a modern, graphics-intensive, user-friendly software package using existing software as building blocks. The emphasis will be directed toward the development of a shell containing menus, smart defaults, and interfaces that can accommodate a wide variety of existing application software packages. The shell will offer expected utilities such as graphics, tailored menus, and a variety of drivers for I/O devices. Following the development of the shell, the development of SCAD is planned chiefly as the selection and integration of appropriate building blocks. The phase one development activities have included: the selection of hardware to be used with SCAD; the determination of the scope of SCAD; the preliminary evaluation of a number of software packages for applicability to SCAD; the determination of a method for achieving required capabilities where voids exist; and the establishment of a strategy for binding the software modules into an easy-to-use tool kit.

  13. Evaluation of a train-the-trainer program for stable coronary artery disease management in community settings: A pilot study.

    PubMed

    Shen, Zhiyun; Jiang, Changying; Chen, Liqun

    2018-02-01

    To evaluate the feasibility and effectiveness of conducting a train-the-trainer (TTT) program for stable coronary artery disease (SCAD) management in community settings. The study involved two steps: (1) tutors trained community nurses as trainers, and (2) the community nurses trained patients. 51 community nurses attended a 2-day TTT program and completed questionnaires assessing knowledge, self-efficacy, and satisfaction. In a feasibility, non-randomized controlled study, 120 SCAD patients were assigned either to an intervention group (which received interventions from trained nurses) or a control group (which received routine management). Pre- and post-intervention, patients' self-management behaviors and satisfaction were assessed to determine the program's overall impact. Community nurses' knowledge and self-efficacy improved (P<0.001), as did the intervention group patients' self-management behaviors (P<0.001). Both community nurses and patients reported high satisfaction after training. The TTT program for SCAD management in community settings in China was generally feasible and effective, but many obstacles remain, including patients' noncompliance, nurses' busy work schedules, and lack of policy support. Finding ways to enhance the motivation of community nurses and patients with SCAD is important in implementing community-based TTT programs for SCAD management; further multicenter and randomized controlled trials are needed. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Pregnancy-Related Coronary Artery Dissection: Recognition of a Life Threatening Process.

    PubMed

    Robinson, Julie R

    Pregnancy-related spontaneous coronary artery dissection (P-SCAD) is a rare but life-threatening condition of the peripartum and postpartum mother. The gold standard for diagnosing P-SCAD is left cardiac catheterization; however, this diagnostic tool may not be used early, because myocardial infarction is not typically a top differential diagnosis for women, especially young pregnant women, presenting with acute chest pain. Providers and registered nurses, particularly those in the prehospital setting, the emergency department, and labor and delivery units, should be aware of the signs, symptoms, potential risk factors, and diagnostic results that could indicate P-SCAD, and should initiate early and appropriate treatment to improve maternal outcomes.

  15. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
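
The mean absolute deviation referred to here is the average of absolute deviations from the arithmetic mean, and the corresponding "effect size" divides a difference in group means by a mean absolute deviation instead of a standard deviation. A minimal sketch (the choice of the control group's deviation as the denominator is an assumption for illustration; variants exist in this literature):

```python
def mean_abs_dev(xs):
    """Mean absolute deviation about the arithmetic mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_effect_size(treatment, control):
    """Difference in group means divided by the control group's mean absolute deviation."""
    diff = sum(treatment) / len(treatment) - sum(control) / len(control)
    return diff / mean_abs_dev(control)

print(mean_abs_dev([1, 2, 3, 4, 5]))          # deviations 2,1,0,1,2 -> 1.2
print(mad_effect_size([4, 5, 6], [1, 2, 3]))  # mean diff 3, control MAD 2/3, so ~4.5
```

Compared with the standard deviation, this measure does not square deviations, which is what makes it simpler to interpret and more tolerant of extreme values.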

  16. [Heart rate as a therapeutic target after acute coronary syndrome and in chronic coronary heart disease].

    PubMed

    Ambrosetti, Marco; Scardina, Giuseppe; Favretto, Giuseppe; Temporelli, Pier Luigi; Faggiano, Pompilio Massimo; Greco, Cesare; Pedretti, Roberto Franco

    2017-03-01

    For patients with stable coronary artery disease (SCAD), either after hospitalization for acute cardiac events or in the chronic phase, comprehensive treatment programs should be devoted to: (i) reducing mortality and major adverse cardiovascular events, (ii) reducing the ischemic burden and related symptoms, and (iii) increasing exercise capacity and quality of life. Heart rate (HR) has demonstrated prognostic value: patients above the limit of 70 bpm display increased risk of all the above adverse outcomes, even after adjustment for parameters such as the extent of myocardial infarction and the presence of heart failure. It is well known that sustained HR elevation may contribute to the pathogenesis of SCAD, with an increased likelihood of ischemia, plaque instability, arrhythmia triggering, vascular oxidative stress, and endothelial dysfunction being the mechanisms underlying this effect. Moreover, high HR may promote chronotropic incompetence, leading to functional disability and reduced quality of life. Despite the strong relationship between HR and prognosis, current guidelines are heterogeneous in considering HR as a formal therapeutic target for secondary prevention in SCAD, as well as in the cut-off value adopted. This expert opinion document considered major trials and observational registries of the modern treatment era with beta-blockers and ivabradine, and suggests that adequate HR control could represent a target for therapeutic goals (i), (ii), and (iii) in SCAD patients with systolic dysfunction (with strongest evidence for reduced left ventricular ejection fraction <40%), and a target for goals (ii) and (iii) in SCAD patients with preserved left ventricular ejection fraction. The defined cut-off value is 70 bpm. To date, there is room for improvement in HR control, since HR values <70 bpm are present in less than half of contemporary SCAD patients, even in the vulnerable phase after an acute coronary syndrome.

  17. The role of vaspin as a predictor of coronary angiography result in SCAD (stable coronary artery disease) patients.

    PubMed

    Stančík, Matej; Ságová, Ivana; Kantorová, Ema; Mokáň, Marián

    2017-05-08

    The role of vaspin in the pathogenesis of stable coronary artery disease (SCAD) has been repeatedly addressed in clinical studies. However, from the point of view of clinical practice, the results of earlier studies remain inconclusive. Data from 106 SCAD patients who underwent coronary angiography and 85 coronary artery disease-free controls were collected and analysed. The patients were divided into subgroups according to their pre-test probability (PTP) and according to the result of coronary angiography. Fasting vaspin concentrations were compared between subgroups of SCAD patients and between the target group and controls. The effects of age and smoking on the result of coronary angiography were compared with the effect of vaspin using binomial regression. We did not find a significant difference in vaspin level between the target group and controls. Unless pre-test probability was taken into account, we found no vaspin difference within the target group when dividing patients on the basis of presence/absence of significant coronary stenosis. In the subgroup of SCAD patients with PTP between 15% and 65%, those with significant coronary stenoses had a higher mean vaspin concentration (0.579 ± 0.898 ng/ml) than patients without significant stenoses (0.379 ± 0.732 ng/ml) (t = -2.595; p = 0.012; d = 0.658; 1-β = 0.850). Age, smoking status, and vaspin significantly contributed to the HSCS prediction in a binomial regression model in patients with low PTP (OR: 1.1, 4.9, and 8.7, respectively). According to our results, vaspin cannot be used as an independent marker for the presence of CAD in the general population. However, our results indicate that measuring vaspin in SCAD patients might be clinically useful in patients with PTP below 66%.

  18. Management standards for stable coronary artery disease in India.

    PubMed

    Mishra, Sundeep; Ray, Saumitra; Dalal, Jamshed J; Sawhney, J P S; Ramakrishnan, S; Nair, Tiny; Iyengar, S S; Bahl, V K

    2016-12-01

    Coronary artery disease (CAD) is one of the important causes of cardiovascular morbidity and mortality globally, giving rise to more than 7 million deaths annually. An increasing burden of CAD in India is a major cause of concern with angina being the leading manifestation. Stable coronary artery disease (SCAD) is characterised by episodes of transient central chest pain (angina pectoris), often triggered by exercise, emotion or other forms of stress, generally triggered by a reversible mismatch between myocardial oxygen demand and supply resulting in myocardial ischemia or hypoxia. A stabilised, frequently asymptomatic phase following an acute coronary syndrome (ACS) is also classified as SCAD. This definition of SCAD also encompasses vasospastic and microvascular angina under the common umbrella. Copyright © 2016. Published by Elsevier B.V.

  19. Brain embolic phenomena associated with cardiopulmonary bypass.

    PubMed

    Challa, V R; Moody, D M; Troost, B T

    1993-07-01

    Various biologic and non-biologic materials may be embolized to the brain after the use of cardiopulmonary bypass (CPB) pumps during open heart surgery, but their relative frequency and importance are uncertain. Among the non-biologic materials, Antifoam A, which contains organosilicates and silicon, continues to be employed as an additive to prevent frothing. Recent improvements in filtration and oxygenation techniques have clearly reduced the incidence of large emboli and complications such as stroke, but other neurologic sequelae following open heart surgery are common and in many cases poorly explained. A recently developed histochemical technique for the demonstration of endothelial alkaline phosphatase (AP) was employed in a post-mortem study of brains from 8 patients and 6 dogs dying within a few days after open heart surgery employing cardiopulmonary bypass perfusion. Brains from 38 patients and 6 dogs who were not subjected to heart surgery were studied as controls with the same technique. The AP-stained slides are suitable both for light microscopic examination of the thick celloidin sections and for subsequent processing for high-resolution microradiography. Small capillary and arteriolar dilatations (SCADs) were seen in the test subjects/animals but not in controls. SCADs were seen in all parts of the brain. Approximately 50% of the SCADs showed birefringence when examined with polarized light. SCADs are putative embolic phenomena, and the exact nature and source of the embolic material is under investigation. A glycolipid component is indicated by preliminary studies. SCADs are difficult to find in routine paraffin sections, and most if not all of the offending material seems to be dissolved during processing. (ABSTRACT TRUNCATED AT 250 WORDS)

  20. Academic Freedom and Tenure: Savannah College of Art and Design. A Supplementary Report on a Censured Administration. Report

    ERIC Educational Resources Information Center

    American Association of University Professors, 2011

    2011-01-01

    This paper presents a supplementary report on the Savannah College of Art and Design (SCAD) censure. Placement of the Savannah College of Art and Design on the Association's censure list, by the 1993 annual meeting, followed from the SCAD administration's dismissal of two faculty members without having demonstrated cause, thereby denying them…

  1. DRG coding practice: a nationwide hospital survey in Thailand.

    PubMed

    Pongpirul, Krit; Walker, Damian G; Rahman, Hafizur; Robinson, Courtland

    2011-10-31

    Diagnosis Related Group (DRG) payment is preferred by healthcare reforms in various countries, but its implementation in resource-limited countries has not been fully explored. This study aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice. A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding); the survey also included 390 non-SCAD hospitals. The questionnaire asked about general demographics of the hospitals and hospital coding structure and process, and also included a set of 63 opinion-oriented items on current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis. SCAD and non-SCAD hospitals differed in many aspects, especially the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model to assess hospital coding practice and classify hospital intention. Hospital providers should not be assumed capable of producing high-quality DRG codes, especially in resource-limited settings.

  2. Age determination of vessel wall hematoma in spontaneous cervical artery dissection: A multi-sequence 3T Cardiovascular Magnetic resonance study

    PubMed Central

    2011-01-01

    Background Previously proposed classifications for carotid plaque and cerebral parenchymal hemorrhages are used to estimate the age of hematoma according to its signal intensities on T1w and T2w MR images. Using these classifications, we systematically investigated the value of cardiovascular magnetic resonance (CMR) in determining the age of vessel wall hematoma (VWH) in patients with spontaneous cervical artery dissection (sCAD). Methods 35 consecutive patients (mean age 43.6 ± 9.8 years) with sCAD received a cervical multi-sequence 3T CMR with fat-saturated black-blood T1w-, T2w- and TOF images. Age of sCAD was defined as time between onset of symptoms (stroke, TIA or Horner's syndrome) and the CMR scan. VWH were categorized into hyperacute, acute, early subacute, late subacute and chronic based on their signal intensities on T1w- and T2w images. Results The mean age of sCAD was 2.0, 5.8, 15.7 and 58.7 days in patients with acute, early subacute, late subacute and chronic VWH as classified by CMR (p < 0.001 for trend). Agreement was moderate between VWH types in our study and the previously proposed time scheme of signal evolution for cerebral hemorrhage, Cohen's kappa 0.43 (p < 0.001). There was a strong agreement of CMR VWH classification compared to the time scheme which was proposed for carotid intraplaque hematomas with Cohen's kappa of 0.74 (p < 0.001). Conclusions Signal intensities of VWH in sCAD vary over time and multi-sequence CMR can help to determine the age of an arterial dissection. Furthermore, findings of this study suggest that the time course of carotid hematomas differs from that of cerebral hematomas. PMID:22122756

  3. CT versus MR Techniques in the Detection of Cervical Artery Dissection.

    PubMed

    Hanning, Uta; Sporns, Peter B; Schmiedel, Meilin; Ringelstein, Erich B; Heindel, Walter; Wiendl, Heinz; Niederstadt, Thomas; Dittrich, Ralf

    2017-11-01

    Spontaneous cervical artery dissection (sCAD) is an important etiology of juvenile stroke. The gold standard for the diagnosis of sCAD is conventional angiography. However, magnetic resonance imaging (MRI)/MR angiography (MRA) and computed tomography (CT)/CT angiography (CTA) are frequently used alternatives. New developments such as multislice CT/CTA have enabled routine acquisition of thinner sections with rapid imaging times. The goal of this study was to compare the capability of recently developed 128-slice CT/CTA with that of MRI/MRA to detect radiologic features of sCAD. We retrospectively reviewed patients with suspected sCAD (n = 188) in a database of our stroke center (2008-2014) who underwent CT/CTA and MRI/MRA on initial clinical work-up. A control group of 26 patients was added. All images were evaluated by two experienced neuroradiologists for specific and sensitive radiological features of dissection. Imaging features were compared between the two modalities. Forty patients with 43 dissected arteries received both modalities (29 internal carotid arteries [ICAs] and 14 vertebral arteries [VAs]). All CADs were identified in CT/CTA and MRI/MRA. The features intimal flap, stenosis, and lumen irregularity appeared in both modalities. One high-grade stenosis was identified by CT/CTA that appeared occluded on MRI/MRA. Two MRI/MRA-confirmed pseudoaneurysms were missed by CT/CTA. None of the controls evidenced specific imaging signs for dissection. CT/CTA is a reliable and more readily available alternative to MRI/MRA for diagnosis of sCAD. CT/CTA should be used to complement MRI/MRA in cases where MRI/MRA suggests occlusion. Copyright © 2017 by the American Society of Neuroimaging.

  4. Etiology of Sudden Cardiac Arrest and Death in US Competitive Athletes: A 2-Year Prospective Surveillance Study.

    PubMed

    Peterson, Danielle F; Siebert, David M; Kucera, Kristen L; Thomas, Leah Cox; Maleszewski, Joseph J; Lopez-Anderson, Martha; Suchsland, Monica Z; Harmon, Kimberly G; Drezner, Jonathan A

    2018-04-09

    To determine the etiology of sudden cardiac arrest and death (SCA/D) in competitive athletes through a prospective national surveillance program. Sudden cardiac arrest and death cases in middle school, high school, college, and professional athletes were identified from July 2014 to June 2016 through traditional and social media searches, reporting to the National Center for Catastrophic Sports Injury Research, communication with state and national high school associations, review of the Parent Heart Watch database, and search of student-athlete deaths on the NCAA Resolutions List. Autopsy reports and medical records were reviewed by a multidisciplinary panel to determine the underlying cause. US competitive athletes with SCA/D. Etiology of SCA/D. A total of 179 cases of SCA/D were identified (74 arrests with survival, 105 deaths): average age 16.6 years (range 11-29), 149 (83.2%) men, 94 (52.5%) whites, and 54 (30.2%) African American. One hundred seventeen (65.4%) had an adjudicated diagnosis, including 83 deaths and 34 survivors. The most common etiologies included hypertrophic cardiomyopathy (19, 16.2%), coronary artery anomalies (16, 13.7%), idiopathic left ventricular hypertrophy/possible cardiomyopathy (13, 11.1%), autopsy-negative sudden unexplained death (8, 6.8%), Wolff-Parkinson-White (8, 6.8%), and long QT syndrome (7, 6.0%). Hypertrophic cardiomyopathy was more common in male basketball (23.3%), football (25%), and African American athletes (30.3%). An estimated 56.4% of cases would likely demonstrate abnormalities on an electrocardiogram. The etiology of SCA/D in competitive athletes involves a wide range of clinical disorders. More robust reporting mechanisms, standardized autopsy protocols, and accurate etiology data are needed to better inform prevention strategies.

  5. [Dynamics of numbers of commercial fish in early ontogenesis in different areas of the Central-Eastern Atlantic].

    PubMed

    Arkhipov, A G; Mamedov, A A; Simonova, T A; Tenitskaia, I A

    2011-01-01

    Changes in the quantitative composition of mass fish species at early stages of ontogenesis in different areas of the Central-Eastern Atlantic (CEA) in warm and cold seasons in 1994-2008 were analyzed. The most widespread representatives of the ichthyocenosis of the CEA were the European pilchard (Sardina pilchardus), common scad (Trachurus trachurus), round sardinella (Sardinella aurita), and West-African scad (Trachurus trecae). The data obtained indicate that, within the economic zone of Morocco, fluctuations of numbers at early stages of development in European pilchard and common scad are close over the entire water area under consideration (36 degrees-21 degrees N). The regularities of fluctuations of the numbers of ichthyoplankton are similar to the interannual changes in the biomass of fish in the area of Morocco. In the area of Mauritania (21 degrees-16 degrees N), fluctuations of numbers of the early stages of development of commercial fish cannot be unambiguously correlated with changes in the biomass of adult fish. It is known that, in the economic zone of Mauritania, there are Senegal-Mauritanian populations of round sardinella and West-African scad that inhabit waters of different states and are not completely assessed by our surveys. Therefore, no obvious relation was observed between the considered data.

  6. DRG coding practice: a nationwide hospital survey in Thailand

    PubMed Central

    2011-01-01

    Background Diagnosis Related Group (DRG) payment is preferred by healthcare reform in various countries, but its implementation in resource-limited countries has not been fully explored. Objectives This study aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice. Methods A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding); the survey also included 390 non-SCAD hospitals. The questionnaire asked about the general demographics of the hospitals and the hospital coding structure and process, and also included a set of 63 opinion-oriented items on current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis. Results SCAD and non-SCAD hospitals differed in many aspects, especially the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model to assess hospital coding practice and classify hospital intention. Conclusion Hospital providers should not be assumed capable of producing high-quality DRG codes, especially in resource-limited settings. PMID:22040256

  7. A TVSCAD approach for image deblurring with impulsive noise

    NASA Astrophysics Data System (ADS)

    Gu, Guoyong; Jiang, Suhong; Yang, Junfeng

    2017-12-01

    We consider the image deblurring problem in the presence of impulsive noise. It is known that total variation (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For high-level impulsive noise, TVL1 works poorly. The reason is that all data, both corrupted and noise-free, are equally penalized in data fitting, leading to insurmountable difficulty in balancing regularization and data fitting. In this paper, we propose to combine TV regularization with the nonconvex smoothly clipped absolute deviation (SCAD) penalty for data fitting (TVSCAD for short). Our motivation is simply that data fitting should be enforced only when an observed data point is not severely corrupted, while for data more likely to be severely corrupted, less or even no penalization should be enforced. A difference of convex functions algorithm is adopted to solve the nonconvex TVSCAD model, resulting in a sequence of TVL1-equivalent problems, each of which can then be solved efficiently by the alternating direction method of multipliers. Theoretically, we establish global convergence to a critical point of the nonconvex objective function. R-linear and at-least-sublinear convergence rate results are derived for the cases of anisotropic and isotropic TV, respectively. Numerically, experimental results are given to show that the TVSCAD approach improves significantly on TVL1, especially for cases with high-level impulsive noise, and is comparable with the recently proposed iteratively corrected TVL1 method (Bai et al 2016 Inverse Problems 32 085004).
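    For readers unfamiliar with the penalty, the SCAD function used here and in several entries below is the standard piecewise form introduced by Fan and Li; a minimal sketch in Python (the parameter names `lam` and `a`, and the conventional default `a = 3.7`, are illustrative choices, not this paper's notation):

```python
import numpy as np

def scad_penalty(t, lam=1.0, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty (Fan & Li form).

    L1-like (lam * |t|) near zero, quadratic in a transition band, and
    constant for |t| > a * lam, so large coefficients are not shrunk.
    """
    t = np.abs(np.asarray(t, dtype=float))
    linear = lam * t                                          # |t| <= lam
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # lam < |t| <= a*lam
    flat = lam**2 * (a + 1) / 2                               # |t| > a*lam
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, flat))
```

    The three pieces meet continuously at `|t| = lam` and `|t| = a*lam`; the flat tail is what distinguishes SCAD from the L1 penalty and leaves large coefficients asymptotically unbiased.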

  8. Transgene Expression and Host Cell Responses to Replication-Defective, Single-Cycle, and Replication-Competent Adenovirus Vectors.

    PubMed

    Crosby, Catherine M; Barry, Michael A

    2017-02-18

    Most adenovirus (Ad) vectors are E1 gene-deleted, replication-defective (RD-Ad) vectors that deliver one transgene to the cell, and all expression is based on that one gene. In contrast, E1-intact replication-competent Ad (RC-Ad) vectors replicate their DNA and their transgenes up to 10,000-fold, amplifying transgene expression markedly above RD-Ad vectors. While RC-Ad are more potent, they run the real risk of causing adenovirus infections in vector recipients and those who administer them. To gain the benefits of transgene amplification but avoid the risk of Ad infections, we developed "single cycle" Ad (SC-Ad) vectors. SC-Ads amplified transgene expression and generated markedly stronger and more persistent immune responses than RD-Ad, as expected. However, they also unexpectedly generated stronger immune responses than RC-Ad vectors. To explore the basis of this potency, we here compared gene expression and the cellular responses to infection by these vectors in vitro and in vivo. In vitro, in primary human lung epithelial cells, SC- and RC-Ad amplified their genomes more than 400-fold relative to RD-Ad, with higher replication by SC-Ad. This replication translated into higher green fluorescent protein (GFP) expression for 48 h by SC- and RC-Ad than by RD-Ad. In vitro, in the absence of an immune system, RD-Ad expression became higher by 72 h, coincident with cell death mediated by SC- and RC-Ad and release of transgene product from the dying cells. When the vectors were compared in human THP-1 Lucia interferon-stimulated gene (ISG) cells, a human monocyte cell line modified to quantify ISG activity, RC-Ad6 provoked significantly stronger ISG responses than RD- or SC-Ad. In mice, intravenous or intranasal injection produced up to 100-fold genome replication. Under these in vivo conditions in the presence of the immune system, luciferase expression by RC- and SC-Ad was markedly higher than that by RD-Ad. In immunodeficient mice, SC-Ad drove stronger luciferase expression than RC- or RD-Ad. These data demonstrate better transgene expression by SC- and RC-Ad in vitro and in vivo than by RD-Ad. This higher expression by the replicating vectors results in a peak of expression within 1 to 2 days, followed by death of infected cells and release of transgene products. While SC- and RC-Ad expression were similar in mice and in Syrian hamsters, RC-Ad provoked much stronger ISG induction, which may explain in part SC-Ad's ability to generate stronger and more persistent immune responses than RC-Ad in Ad-permissive hamsters.

  9. The Diagnosis of Spontaneous Coronary Artery Dissection by Optical Coherence Tomography.

    PubMed

    Kanda, Takahiro; Tawarahara, Kei; Matsukura, Gaku; Matsunari, Masayoshi; Takabayashi, Rumi; Tamura, Jun; Ozeki, Mariko; Ukigai, Hiroshi

    2018-02-15

    Spontaneous coronary artery dissection (SCAD) is rare, but it frequently presents as acute myocardial infarction. It is frequently fatal and most cases are diagnosed at autopsy. We herein present the case of a 65-year-old woman with ST-elevation and myocardial infarction due to SCAD. Optical coherence tomography (OCT) helped us to confirm the diagnosis. The information on the intravascular morphology provided by OCT imaging is much more detailed in comparison to that provided by coronary angiography (CAG) and intravascular ultrasound (IVUS).

  10. The Effectiveness of a Rater Training Booklet in Increasing Accuracy of Performance Ratings

    DTIC Science & Technology

    1988-04-01

    Subjects’ ratings were compared for accuracy. The dependent measure was the absolute deviation score of each individual’s rating from the "true score". Findings: The absolute deviation scores of each individual’s ratings from the "true score" provided by subject matter experts were analyzed.

  11. Preprocedural High-Sensitivity Cardiac Troponin T and Clinical Outcomes in Patients With Stable Coronary Artery Disease Undergoing Elective Percutaneous Coronary Intervention.

    PubMed

    Zanchin, Thomas; Räber, Lorenz; Koskinas, Konstantinos C; Piccolo, Raffaele; Jüni, Peter; Pilgrim, Thomas; Stortecky, Stefan; Khattab, Ahmed A; Wenaweser, Peter; Bloechlinger, Stefan; Moschovitis, Aris; Frenk, Andre; Moro, Christina; Meier, Bernhard; Fiedler, Georg M; Heg, Dik; Windecker, Stephan

    2016-06-01

    Cardiac troponin detected by new-generation, highly sensitive assays predicts clinical outcomes among patients with stable coronary artery disease (SCAD) treated medically. The prognostic value of baseline high-sensitivity cardiac troponin T (hs-cTnT) elevation in SCAD patients undergoing elective percutaneous coronary interventions is not well established. This study assessed the association of preprocedural levels of hs-cTnT with 1-year clinical outcomes among SCAD patients undergoing percutaneous coronary intervention. Between 2010 and 2014, 6974 consecutive patients were prospectively enrolled in the Bern Percutaneous Coronary Interventions Registry. Among patients with SCAD (n=2029), 527 (26%) had elevated preprocedural hs-cTnT above the upper reference limit of 14 ng/L. The primary end point, mortality within 1 year, occurred in 20 patients (1.4%) with normal hs-cTnT versus 39 patients (7.7%) with elevated baseline hs-cTnT (P<0.001). Patients with elevated hs-cTnT had increased risks of all-cause (hazard ratio 5.73; 95% confidence interval 3.34-9.83; P<0.001) and cardiac mortality (hazard ratio 4.68; 95% confidence interval 2.12-10.31; P<0.001). Preprocedural hs-cTnT elevation remained an independent predictor of 1-year mortality after adjustment for relevant risk factors, including age, sex, and renal failure (adjusted hazard ratio 2.08; 95% confidence interval 1.10-3.92; P=0.024). A graded mortality risk was observed across higher tertiles of elevated preprocedural hs-cTnT, but not among patients with hs-cTnT below the upper reference limit. Preprocedural elevation of hs-cTnT is observed in one fourth of SCAD patients undergoing elective percutaneous coronary intervention. Increased levels of preprocedural hs-cTnT are proportionally related to the risk of death and emerged as independent predictors of all-cause mortality within 1 year. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02241291. © 2016 American Heart Association, Inc.

  12. Spontaneous coronary artery dissection and its association with heritable connective tissue disorders.

    PubMed

    Henkin, Stanislav; Negrotto, Sara M; Tweet, Marysia S; Kirmani, Salman; Deyle, David R; Gulati, Rajiv; Olson, Timothy M; Hayes, Sharonne N

    2016-06-01

    Spontaneous coronary artery dissection (SCAD) is an under-recognised but important cause of myocardial infarction and sudden cardiac death. We sought to determine the role of medical and molecular genetic screening for connective tissue disorders in patients with SCAD. We performed a single-centre retrospective descriptive analysis of patients with spontaneous coronary artery dissection who had undergone medical genetics evaluation 1984-2014 (n=116). The presence or absence of traits suggestive of heritable connective tissue disease was extracted. Genetic testing for connective tissue disorders and/or aortopathies, if performed, is also reported. Of the 116 patients (mean age 44.2 years, 94.8% women and 41.4% with non-coronary fibromuscular dysplasia (FMD)), 59 patients underwent genetic testing, of whom 3 (5.1%) received a diagnosis of connective tissue disorder: a 50-year-old man with Marfan syndrome; a 43-year-old woman with vascular Ehlers-Danlos syndrome and FMD; and a 45-year-old woman with vascular Ehlers-Danlos syndrome. An additional 12 patients (20.3%) had variants of unknown significance, none of which was thought to be a definite disease-causing mutation based on in silico analyses. Only a minority of patients with SCAD who undergo genetic evaluation have a likely pathogenic mutation identified on gene panel testing. Even fewer exhibit clinical features of connective tissue disorder. These findings underscore the need for further studies to elucidate the molecular mechanisms of SCAD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. The correlation between lymphocyte/monocyte ratio and coronary collateral circulation in stable coronary artery disease patients.

    PubMed

    Kurtul, Alparslan; Duran, Mustafa

    2017-01-01

    Coronary collateral circulation (CCC) has an important impact on cardiovascular prognosis and well-developed CCC is associated with better clinical outcomes. We investigated whether lymphocyte/monocyte ratio (LMR) has an association with CCC in patients with stable coronary artery disease (SCAD). The study population consisted of 245 patients with SCAD. Patients were classified into a poor CCC group (Rentrop grades 0/1, n = 87), or good CCC group (Rentrop grades 2/3, n = 158). LMR values were significantly higher in patients with good CCC than in those with poor CCC (4.41 ± 1.58 vs 2.76 ± 1.10; p < 0.001). In receiver operating characteristic analysis, optimal cutoff of LMR for predicting well-developed CCC was 3.38. In multivariate analysis, LMR >3.38 (OR 4.637; p = 0.004), high sensitivity C-reactive protein (OR 0.810, p < 0.001), dyslipidemia (OR 2.485; p = 0.039), and presence of chronic total occlusion (OR 16.836; p < 0.001) were independent predictors of well-developed CCC. Increased LMR predicts well-developed CCC in SCAD patients.

  14. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important tasks for an organization. The selection of an appropriate forecasting method matters, but the magnitude of a method's error matters more if decision makers are to act on its output. Using the mean absolute deviation (MAD) and mean absolute percentage error (MAPE) to measure the error of the least squares method yielded an error of 9.77%, and the least squares method was accordingly adopted for the time series and trend data.
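    The two error measures named in this abstract are standard and easy to reproduce; a brief sketch in Python (the eight-point series and the least-squares trend fit are illustrative, not the paper's data):

```python
import numpy as np

def mean_absolute_deviation(actual, forecast):
    """MAD: average absolute forecast error, in the units of the data."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(actual - forecast)))

def mean_absolute_percentage_error(actual, forecast):
    """MAPE: average absolute error as a percentage of the actual values."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(100.0 * np.mean(np.abs((actual - forecast) / actual)))

# Fit a least-squares linear trend to a short series, then score its errors.
t = np.arange(8)
y = np.array([12.0, 15.0, 14.0, 18.0, 20.0, 19.0, 23.0, 24.0])
slope, intercept = np.polyfit(t, y, 1)   # least-squares fit: y ~ slope*t + intercept
fitted = slope * t + intercept
print(mean_absolute_deviation(y, fitted))
print(mean_absolute_percentage_error(y, fitted))
```

    MAD keeps the data's own units, while MAPE is scale-free (but undefined when an actual value is zero), which is why the two are usually reported together.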

  15. Long-term healthcare use and costs in patients with stable coronary artery disease: a population-based cohort using linked health records (CALIBER)

    PubMed Central

    Walker, Simon; Asaria, Miqdad; Manca, Andrea; Palmer, Stephen; Gale, Chris P.; Shah, Anoop Dinesh; Abrams, Keith R.; Crowther, Michael; Timmis, Adam; Hemingway, Harry; Sculpher, Mark

    2016-01-01

    Abstract Aims To examine long-term healthcare utilization and costs of patients with stable coronary artery disease (SCAD). Methods and results Linked cohort study of 94 966 patients with SCAD in England, 1 January 2001 to 31 March 2010, identified from primary care, secondary care, disease, and death registries. Resource use and costs, and cost predictors by time and 5-year cardiovascular disease (CVD) risk profile were estimated using generalized linear models. Coronary heart disease hospitalizations were 20.5% in the first year and 66% in the year following a non-fatal (myocardial infarction, ischaemic or haemorrhagic stroke) event. Mean healthcare costs were £3133 per patient in the first year and £10 377 in the year following a non-fatal event. First-year predictors of cost included sex (mean cost £549 lower in females), SCAD diagnosis (non-ST-elevation myocardial infarction cost £656 more than stable angina), and co-morbidities (heart failure cost £657 more per patient). Compared with lower risk patients (5-year CVD risk 3.5%), those of higher risk (5-year CVD risk 44.2%) had higher 5-year costs (£23 393 vs. £9335) and lower lifetime costs (£43 020 vs. £116 888). Conclusion Patients with SCAD incur substantial healthcare utilization and costs, which varies and may be predicted by 5-year CVD risk profile. Higher risk patients have higher initial but lower lifetime costs than lower risk patients as a result of shorter life expectancy. Improved cardiovascular survivorship among an ageing CVD population is likely to require stratified care in anticipation of the burgeoning demand. PMID:27042338

  16. Amberstripe scad Decapterus muroadsi (Carangidae) fish ingest blue microplastics resembling their copepod prey along the coast of Rapa Nui (Easter Island) in the South Pacific subtropical gyre.

    PubMed

    Ory, Nicolas Christian; Sobral, Paula; Ferreira, Joana Lia; Thiel, Martin

    2017-05-15

    An increasing number of studies have described the presence of microplastics (≤5mm) in many different fish species, raising ecological concerns. The factors influencing the ingestion of microplastics by fish remain unclear despite their importance to a better understanding of the routes of microplastics through marine food webs. Here, we compare microplastics and planktonic organisms in surface waters and as food items of 20 Amberstripe scads (Decapterus muroadsi) captured along the coast of Rapa Nui (Easter Island) to assess the hypothesis that fish ingest microplastics resembling their natural prey. Sixteen (80%) of the scad had ingested one to five microplastics, mainly blue polyethylene fragments that were similar in colour and size to blue copepod species consumed by the same fish. These results suggest that planktivorous fish, as a consequence of their feeding behaviour as visual predators, are directly exposed to floating microplastics. This threat may be exacerbated in the clear oceanic waters of the subtropical gyres, where anthropogenic litter accumulates in great quantity. Our study highlights the menace of microplastic contamination on the integrity of fragile remote ecosystems and the urgent need for efficient plastic waste management. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. [Impact of plasma pro-B-type natriuretic peptide amino-terminal and galectin-3 levels on the predictive capacity of the LIPID Clinical Risk Scale in stable coronary disease].

    PubMed

    Higueras, Javier; Martín-Ventura, José Luis; Blanco-Colio, Luis; Cristóbal, Carmen; Tarín, Nieves; Huelmos, Ana; Alonso, Joaquín; Pello, Ana; Aceña, Álvaro; Carda, Rocío; Lorenzo, Óscar; Mahíllo-Fernández, Ignacio; Asensio, Dolores; Almeida, Pedro; Rodríguez-Artalejo, Fernando; Farré, Jerónimo; López Bescós, Lorenzo; Egido, Jesús; Tuñón, José

    2015-01-01

    At present, there is no tool validated by scientific societies for risk stratification of patients with stable coronary artery disease (SCAD). It has been shown that plasma levels of monocyte chemoattractant protein-1 (MCP-1), galectin-3 and pro-B-type natriuretic peptide amino-terminal (NT-proBNP) have prognostic value in this population. To analyze the prognostic value of a clinical risk scale published in the Long-term Intervention with Pravastatin in Ischemic Disease (LIPID) study and to determine its predictive capacity when combined with plasma levels of MCP-1, galectin-3 and NT-proBNP in patients with SCAD. A total of 706 patients with SCAD and a history of acute coronary syndrome (ACS) were analyzed over a follow-up period of 2.2 ± 0.99 years. The primary endpoint was the occurrence of an ischemic event (any ACS, stroke or transient ischemic attack), heart failure, or death. A clinical risk scale derived from the LIPID study significantly predicted the development of the primary endpoint, with an area under the receiver operating characteristic (ROC) curve of 0.642 (0.579 to 0.705); P<0.001. A composite score was developed by adding the scores of the LIPID scale and the decile levels of MCP-1, galectin-3 and NT-proBNP. The predictive value improved, with an area under the curve of 0.744 (0.684 to 0.805); P<0.001 (P=0.022 for comparison). A score greater than 21.5 had a sensitivity of 74% and a specificity of 61% for the development of the primary endpoint (P<0.001, log-rank test). Plasma levels of MCP-1, galectin-3 and NT-proBNP improve the ability of the LIPID clinical scale to predict the prognosis of patients with SCAD. Copyright © 2014 Sociedad Española de Arteriosclerosis. Published by Elsevier España. All rights reserved.

  18. Validation of Mean Absolute Sea Level of the North Atlantic obtained from Drifter, Altimetry and Wind Data

    NASA Technical Reports Server (NTRS)

    Maximenko, Nikolai A.

    2003-01-01

    Mean absolute sea level reflects the deviation of the ocean surface from the geoid due to ocean currents and is an important characteristic of the dynamical state of the ocean. Its spatial variations (on the order of 1 m) are generally much smaller than deviations of the geoid shape from the ellipsoid (on the order of 100 m), which makes deriving the absolute mean sea level a difficult task for gravity and satellite altimetry observations. The technique used by Niiler et al. to compute the absolute mean sea level in the Kuroshio Extension was later developed into a more general method and applied by Niiler et al. (2003b) to the global ocean. The method is based on the balance of horizontal momentum.

  19. Neuralgia

    MedlinePlus

    ... 2016:chap 107. Scadding JW, Koltzenberg M. Painful peripheral neuropathies. In: McMahon SB, Koltzenburg M, Tracey I, Turk ... PA: Elsevier Saunders; 2013:chap 65. Shy ME. Peripheral neuropathies. In: Goldman L, Schafer AI, eds. Goldman's Cecil ...

  20. 9 CFR 439.20 - Criteria for maintaining accreditation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...

  1. 9 CFR 439.20 - Criteria for maintaining accreditation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...

  2. 9 CFR 439.20 - Criteria for maintaining accreditation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...

  3. 9 CFR 439.20 - Criteria for maintaining accreditation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...

  4. 9 CFR 439.20 - Criteria for maintaining accreditation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...

  5. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    ERIC Educational Resources Information Center

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
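    The course's central contrast, squared versus absolute distance as a location measure, can be checked numerically: the mean minimizes the sum of squared deviations, while the median minimizes the sum of absolute deviations and so resists outliers. A small sketch (the data vector and search grid are illustrative):

```python
import numpy as np

# One large outlier (100) pulls the mean far from the bulk of the data.
data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# Brute-force search over candidate locations c, scoring each with both losses.
candidates = np.linspace(0, 110, 11001)  # step 0.01
sq_loss = [(np.sum((data - c) ** 2), c) for c in candidates]
abs_loss = [(np.sum(np.abs(data - c)), c) for c in candidates]

best_sq = min(sq_loss)[1]    # minimizer of squared loss, close to mean(data) = 22.0
best_abs = min(abs_loss)[1]  # minimizer of absolute loss, close to median(data) = 3.0
print(best_sq, np.mean(data))
print(best_abs, np.median(data))
```

    The same distinction carries over from location measures to regression: OLS generalizes the mean, LAD generalizes the median, and quantile regression generalizes other quantiles.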

  6. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    NASA Astrophysics Data System (ADS)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
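The record does not spell out the exact MAD criterion applied to the substitution boxes, so the following shows only the generic mean-absolute-deviation computation such an evaluation builds on:

```python
import numpy as np

def mean_absolute_deviation(values):
    """MAD about the mean: average of |x_i - mean(x)|."""
    x = np.asarray(values, dtype=float)
    return float(np.mean(np.abs(x - x.mean())))

# Toy data, not S-box output: mean is 5, deviations are 3, 1, 1, 3.
print(mean_absolute_deviation([2, 4, 6, 8]))  # 2.0
```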

  7. Spontaneous Coronary Artery Dissection

    MedlinePlus

    ... blood vessels. Fibromuscular dysplasia occurs more often in women than it does in men. Extreme physical exercise. People who recently participated in extreme or intense exercises, such as extreme aerobic activities, may be at higher risk of SCAD. Severe ...

  8. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better, measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to asymptotically achieve the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  9. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better, measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to asymptotically achieve the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.
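Both of these records center on the SCAD penalty. For reference, a sketch of the standard Fan-Li form of the penalty; the default a = 3.7 is the conventional choice in the literature, not a value taken from these abstracts:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation penalty, applied elementwise."""
    b = np.abs(np.asarray(beta, dtype=float))
    linear = lam * b                                          # |b| <= lam: LASSO-like
    quad = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))  # smooth transition
    const = (a + 1) * lam**2 / 2                              # |b| > a*lam: constant
    return np.where(b <= lam, linear, np.where(b <= a * lam, quad, const))

# Small coefficients are penalized like LASSO; large ones incur only a
# constant penalty, which removes LASSO's bias on strong signals.
p = scad_penalty([0.5, 10.0], lam=1.0)
print(p)  # 0.5 and (3.7 + 1) / 2 = 2.35
```

The flat tail beyond a*lam is what yields the (near-)unbiasedness behind the oracle properties the abstracts discuss.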

  10. Determinants and Long-Term Outcomes of Percutaneous Coronary Interventions vs. Surgery for Multivessel Disease According to Clinical Presentation.

    PubMed

    Hong, Sung-Jin; Kim, Byeong-Keuk; Shin, Sanghoon; Suh, Yongsung; Kim, Seunghwan; Ahn, Chul-Min; Kim, Jung-Sun; Ko, Young-Guk; Choi, Donghoon; Hong, Myeong-Ki; Jang, Yangsoo

    2018-03-23

    The long-term outcome of percutaneous coronary intervention (PCI) vs. coronary artery bypass graft (CABG), particularly for patients with non-ST-elevation acute coronary syndrome (NSTE-ACS), remains controversial. Methods and Results: We retrospectively analyzed 2,827 patients (stable coronary artery disease [SCAD], n=1,601; NSTE-ACS, n=1,226) who underwent either PCI (n=1,732) or CABG (n=1,095). The 8-year composite of cardiac death and myocardial infarction (MI) was compared between PCI and CABG before and after propensity matching. For patients with NSTE-ACS, PCI was performed more frequently for those with higher Thrombolysis in Myocardial Infarction risk score and 3-vessel disease, and PCI led to significantly higher 8-year composite of cardiac death and MI than CABG (14.1% vs. 5.9%, hazard ratio [HR]=2.22, 95% confidence interval [CI]=1.37-3.58, P=0.001). There was a significant interaction between clinical presentation and revascularization strategy (P-interaction=0.001). However, after matching, the benefit of CABG vs. PCI was attenuated in patients with NSTE-ACS, whereas it was pronounced in those with SCAD. Interactions between clinical presentation and revascularization strategy were not observed (P-interaction=0.574). Although the determinants of PCI vs. CABG in real-world clinical practice differ according to the clinical presentation, a significant interaction between clinical presentation and revascularization strategy was not noted for long-term outcomes. The revascularization strategy for patients with NSTE-ACS can be based on the criteria applied to patients with SCAD.

  11. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

    In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out for linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in the determination of midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean of -1.0% and a standard deviation of 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean of 0.7% and a standard deviation of 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision became poor.

  12. Interactive Visual Least Absolutes Method: Comparison with the Least Squares and the Median Methods

    ERIC Educational Resources Information Center

    Kim, Myung-Hoon; Kim, Michelle S.

    2016-01-01

    A visual regression analysis using the least absolutes method (LAB) was developed, utilizing an interactive approach of visually minimizing the sum of the absolute deviations (SAB) using a bar graph in Excel; the results agree very well with those obtained from nonvisual LAB using a numerical Solver in Excel. These LAB results were compared with…

  13. EVALUATION REPORT SCIENCE APPLICATIONS INTERNATIONAL CORPORATION S-CAD CHEMICAL AGENT DETECTION SYSTEM

    EPA Science Inventory

    The USEPA's National Homeland Security Research Center (NHSRC)Technology Testing and Evaluation Program (TTEP) is carrying out performance tests on homeland security technologies. Under TTEP, Battelle recently evaluated the performance of the Science Applications International Co...

  14. Prognostic models for stable coronary artery disease based on electronic health record cohort of 102 023 patients.

    PubMed

    Rapsomaniki, Eleni; Shah, Anoop; Perel, Pablo; Denaxas, Spiros; George, Julie; Nicholas, Owen; Udumyan, Ruzan; Feder, Gene Solomon; Hingorani, Aroon D; Timmis, Adam; Smeeth, Liam; Hemingway, Harry

    2014-04-01

    The population with stable coronary artery disease (SCAD) is growing but validated models to guide their clinical management are lacking. We developed and validated prognostic models for all-cause mortality and non-fatal myocardial infarction (MI) or coronary death in SCAD. Models were developed in a linked electronic health records cohort of 102 023 SCAD patients from the CALIBER programme, with mean follow-up of 4.4 (SD 2.8) years during which 20 817 deaths and 8856 coronary outcomes were observed. The Kaplan-Meier 5-year risk was 20.6% (95% CI, 20.3, 20.9) for mortality and 9.7% (95% CI, 9.4, 9.9) for non-fatal MI or coronary death. The predictors in the models were age, sex, CAD diagnosis, deprivation, smoking, hypertension, diabetes, lipids, heart failure, peripheral arterial disease, atrial fibrillation, stroke, chronic kidney disease, chronic pulmonary disease, liver disease, cancer, depression, anxiety, heart rate, creatinine, white cell count, and haemoglobin. The models had good calibration and discrimination in internal (external) validation with C-index 0.811 (0.735) for all-cause mortality and 0.778 (0.718) for non-fatal MI or coronary death. Using these models to identify patients at high risk (defined by guidelines as 3% annual mortality) and support a management decision associated with hazard ratio 0.8 could save an additional 13-16 life years or 15-18 coronary event-free years per 1000 patients screened, compared with models with just age, sex, and deprivation. These validated prognostic models could be used in clinical practice to support risk stratification as recommended in clinical guidelines.

  15. Mitochondrial free fatty acid β-oxidation supports oxidative phosphorylation and proliferation in cancer cells.

    PubMed

    Rodríguez-Enríquez, Sara; Hernández-Esquivel, Luz; Marín-Hernández, Alvaro; El Hafidi, Mohammed; Gallardo-Pérez, Juan Carlos; Hernández-Reséndiz, Ileana; Rodríguez-Zavala, José S; Pacheco-Velázquez, Silvia C; Moreno-Sánchez, Rafael

    2015-08-01

    Oxidative phosphorylation (OxPhos) is functional and sustains tumor proliferation in several cancer cell types. To establish whether mitochondrial β-oxidation of free fatty acids (FFAs) contributes to cancer OxPhos functioning, its protein contents and enzyme activities, as well as respiratory rates and electrical membrane potential (ΔΨm) driven by FFA oxidation, were assessed in rat AS-30D hepatoma and liver (RLM) mitochondria. Higher protein contents (1.4-3 times) of β-oxidation enzymes (CPT1, SCAD), as well as higher protein contents and enzyme activities (1.7-13 times) of the Krebs cycle (KC: ICD, 2OGDH, PDH, ME, GA) and respiratory chain (RC: COX), were determined in hepatoma mitochondria vs. RLM. Although increased cholesterol content (9 times vs. RLM) was determined in the hepatoma mitochondrial membranes, FFAs and other NAD-linked substrates were oxidized faster (1.6-6.6 times) by hepatoma mitochondria than RLM, maintaining similar ΔΨm values. The contents of β-oxidation, KC and RC enzymes were also assessed in cells. The mitochondrial enzyme levels in human cervix cancer HeLa and AS-30D cells were higher than those observed in rat hepatocytes, whereas in human breast cancer biopsies, CPT1 and SCAD contents were lower than in normal human breast tissue. The presence of CPT1 and SCAD in AS-30D mitochondria and HeLa cells correlated with active FFA utilization in HeLa cells. Furthermore, the β-oxidation inhibitor perhexiline blocked FFA utilization, OxPhos and proliferation in HeLa and other cancer cells. In conclusion, functional mitochondria supported by FFA β-oxidation are essential for accelerated cancer cell proliferation, and hence anti-β-oxidation therapeutics appear to be a promising alternative approach to deterring malignant tumor growth. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. AMSSM Position Statement on Cardiovascular Preparticipation Screening in Athletes: Current Evidence, Knowledge Gaps, Recommendations, and Future Directions.

    PubMed

    Drezner, Jonathan A; OʼConnor, Francis G; Harmon, Kimberly G; Fields, Karl B; Asplund, Chad A; Asif, Irfan M; Price, David E; Dimeff, Robert J; Bernhardt, David T; Roberts, William O

    2016-09-01

    Cardiovascular (CV) screening in young athletes is widely recommended and routinely performed before participation in competitive sports. While there is general agreement that early detection of cardiac conditions at risk for sudden cardiac arrest and death (SCA/D) is an important objective, the optimal strategy for CV screening in athletes remains an issue of considerable debate. At the center of the controversy is the addition of a resting electrocardiogram (ECG) to the standard preparticipation evaluation using history and physical examination. The American Medical Society for Sports Medicine (AMSSM) formed a task force to address the current evidence and knowledge gaps regarding preparticipation CV screening in athletes from the perspective of a primary care sports medicine physician. The absence of definitive outcomes-based evidence at this time precludes AMSSM from endorsing any single or universal CV screening strategy for all athletes including legislative mandates. This statement presents a new paradigm to assist the individual physician in assessing the most appropriate CV screening strategy unique to their athlete population, community needs, and resources. The decision to implement a CV screening program, with or without the addition of ECG, necessitates careful consideration of the risk of SCA/D in the targeted population and the availability of cardiology resources and infrastructure. Importantly, it is the individual physician's assessment in the context of an emerging evidence base that the chosen model for early detection of cardiac disorders in the specific population provides greater benefit than harm. American Medical Society for Sports Medicine is committed to advancing evidence-based research and educational initiatives that will validate and promote the most efficacious strategies to foster safe sport participation and reduce SCA/D in athletes.

  17. AMSSM Position Statement on Cardiovascular Preparticipation Screening in Athletes: Current Evidence, Knowledge Gaps, Recommendations and Future Directions.

    PubMed

    Drezner, Jonathan A; O'Connor, Francis G; Harmon, Kimberly G; Fields, Karl B; Asplund, Chad A; Asif, Irfan M; Price, David E; Dimeff, Robert J; Bernhardt, David T; Roberts, William O

    2016-01-01

    Cardiovascular screening in young athletes is widely recommended and routinely performed prior to participation in competitive sports. While there is general agreement that early detection of cardiac conditions at risk for sudden cardiac arrest and death (SCA/D) is an important objective, the optimal strategy for cardiovascular screening in athletes remains an issue of considerable debate. At the center of the controversy is the addition of a resting electrocardiogram (ECG) to the standard preparticipation evaluation using history and physical examination. The American Medical Society for Sports Medicine (AMSSM) formed a task force to address the current evidence and knowledge gaps regarding preparticipation cardiovascular screening in athletes from the perspective of a primary care sports medicine physician. The absence of definitive outcomes-based evidence at this time precludes AMSSM from endorsing any single or universal cardiovascular screening strategy for all athletes, including legislative mandates. This statement presents a new paradigm to assist the individual physician in assessing the most appropriate cardiovascular screening strategy unique to their athlete population, community needs, and resources. The decision to implement a cardiovascular screening program, with or without the addition of ECG, necessitates careful consideration of the risk of SCA/D in the targeted population and the availability of cardiology resources and infrastructure. Importantly, it is the individual physician's assessment in the context of an emerging evidence base that the chosen model for early detection of cardiac disorders in the specific population provides greater benefit than harm. AMSSM is committed to advancing evidence-based research and educational initiatives that will validate and promote the most efficacious strategies to foster safe sport participation and reduce SCA/D in athletes.

  18. Incidence and prevalence of inflammatory bowel diseases in gastroenterology primary care setting.

    PubMed

    Tursi, Antonio; Elisei, Walter; Picchio, Marcello

    2013-12-01

    The incidence of inflammatory bowel diseases (IBDs) has markedly increased over the last years, but no epidemiological study has been performed in the gastroenterology primary care setting. We describe the epidemiology of IBD in a gastroenterology primary care unit using its records as the primary data source. Case finding used predefined read codes to systematically search computer diagnostic and prescribing records from January 2009 to December 2012. A specialist diagnosis of ulcerative colitis (UC), Crohn's disease (CD), inflammatory bowel disease unclassified (IBDU) or segmental colitis associated with diverticulosis (SCAD), based on clinical, histological or radiological findings, was a prerequisite for inclusion in the study. Secondary, infective and apparent acute self-limiting colitis were excluded. We identified 176 patients with IBD in a population of 94,000, with a prevalence of 187.2/100,000 (95% CI: 160.6-217.0). Between 2009 and 2012 there were 61 new cases. In particular, there were 23 new cases of UC, 19 new cases of CD, 15 new cases of SCAD, and 4 new cases of IBDU. The incidence of IBD was 16.2/100,000 (95% CI 12.5-20.7) per year. The incidence per year was 6/100,000 (95% CI 3.8-8.9) for UC, 5/100,000 (95% CI 3.0-7.7) for CD, 4/100,000 (95% CI 2.3-6.5) for SCAD, and 1/100,000 (95% CI 0.3-2.6) for IBDU. We assessed for the first time the prevalence and incidence of IBD in a gastroenterology primary care unit. This confirms that the specialist primary care unit is a key factor in providing early diagnosis of chronic diseases. Copyright © 2013 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  19. Study of the optimum level of electrode placement for the evaluation of absolute lung resistivity with the Mk3.5 EIT system.

    PubMed

    Nebuya, S; Noshiro, M; Yonemoto, A; Tateno, S; Brown, B H; Smallwood, R H; Milnes, P

    2006-05-01

    Inter-subject variability has caused the majority of previous electrical impedance tomography (EIT) techniques to focus on the derivation of relative or difference measures of in vivo tissue resistivity. Implicit in these techniques is the requirement for a reference or previously defined data set. This study assesses the accuracy and optimum electrode placement strategy for a recently developed method which estimates an absolute value of organ resistivity without recourse to a reference data set. Since this measurement of tissue resistivity is absolute, in ohm metres, it should be possible to use EIT measurements for the objective diagnosis of lung diseases such as pulmonary oedema and emphysema. However, the stability and reproducibility of the method have not yet been investigated fully. To investigate these problems, this study used a Sheffield Mk3.5 system which was configured to operate with eight measurement electrodes. As a result of this study, the absolute resistivity measurement was found to be insensitive to the electrode level between 4 and 5 cm above the xiphoid process. The level of the electrode plane was varied between 2 cm and 7 cm above the xiphoid process. Absolute lung resistivity in 18 normal subjects (age 22.6 ± 4.9 years, height 169.1 ± 5.7 cm, weight 60.6 ± 4.5 kg, body mass index 21.2 ± 1.6: mean ± standard deviation) was measured during both normal and deep breathing for 1 min. Three sets of measurements were made over a period of several days on each of nine of the normal male subjects. No significant differences in absolute lung resistivity were found, either during normal tidal breathing between the electrode levels of 4 and 5 cm (9.3 ± 2.4 Ω m and 9.6 ± 1.9 Ω m at 4 and 5 cm, respectively: mean ± standard deviation) or during deep breathing between the electrode levels of 4 and 5 cm (10.9 ± 2.9 Ω m and 11.1 ± 2.3 Ω m, respectively: mean ± standard deviation). However, the differences in absolute lung resistivity between normal and deep tidal breathing at the same electrode level are significant. No significant difference was found in the coefficient of variation between the electrode levels of 4 and 5 cm (9.5 ± 3.6% and 8.5 ± 3.2% at 4 and 5 cm, respectively: mean ± standard deviation in individual subjects). Therefore, the electrode levels of 4 and 5 cm above the xiphoid process showed reasonable reliability in the measurement of absolute lung resistivity, both among individuals and over time.

  20. Myxococcus CsgA, Drosophila Sniffer, and human HSD10 are cardiolipin phospholipases

    PubMed Central

    Boynton, Tye O'Hara; Shimkets, Lawrence Joseph

    2015-01-01

    Myxococcus xanthus development requires CsgA, a member of the short-chain alcohol dehydrogenase (SCAD) family of proteins. We show that CsgA and SocA, a protein that can replace CsgA function in vivo, oxidize the 2′-OH glycerol moiety on cardiolipin and phosphatidylglycerol to produce diacylglycerol (DAG), dihydroxyacetone, and orthophosphate. A lipid extract enriched in DAGs from wild-type cells initiates development and lipid body production in a csgA mutant to bypass the mutational block. This novel phospholipase C-like reaction is widespread. SCADs that prevent neurodegenerative disorders, such as Drosophila Sniffer and human HSD10, oxidize cardiolipin with similar kinetic parameters. HSD10 exhibits a strong preference for cardiolipin with oxidized fatty acids. This activity is inhibited in the presence of the amyloid β peptide. Three HSD10 variants associated with neurodegenerative disorders are inactive with cardiolipin. We suggest that HSD10 protects humans from reactive oxygen species by removing damaged cardiolipin before it induces apoptosis. PMID:26338420

  1. Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling

    EPA Science Inventory

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...
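Hedged illustration: the record names mean deviation (MD) and mean absolute deviation (MAD) as map-accuracy summaries. This sketch computes them for per-unit differences between mapped and reference class proportions; the paper's two-stage design-based weighting is not reproduced here, and the proportions are made up.

```python
def md_mad(mapped, reference):
    """Mean deviation (signed) and mean absolute deviation of the differences."""
    diffs = [m - r for m, r in zip(mapped, reference)]
    md = sum(diffs) / len(diffs)                    # signed bias
    mad = sum(abs(d) for d in diffs) / len(diffs)   # magnitude of error
    return md, mad

mapped = [0.40, 0.35, 0.25]      # hypothetical mapped class proportions
reference = [0.45, 0.30, 0.25]   # hypothetical reference proportions
md, mad = md_mad(mapped, reference)
print(md, mad)  # MD near 0 (signed errors cancel), MAD near 0.033
```

The contrast is the point: MD can be near zero even when every unit is misestimated, while MAD exposes the error magnitude.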

  2. Special electronic distance meter calibration for precise engineering surveying industrial applications

    NASA Astrophysics Data System (ADS)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf

    2015-05-01

    All surveying instruments and their measurements suffer from some errors. To refine the measurement results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values, or to implement numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements, and of reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data; or second, using a correction function derived by a prior FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure significantly reduced the standard deviation of the measured distances to less than 0.6 mm. For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
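The averaging step in the procedure above relies on the standard error of the mean shrinking as sigma/sqrt(n). A small simulation illustrates this; the distance and noise level below are made-up values, not the paper's:

```python
import random
random.seed(0)  # fixed seed so the sketch is reproducible

true_dist_mm = 38600.0   # hypothetical 38.6 m distance, in mm
sigma_mm = 2.0           # hypothetical random error of a single measurement
n = 100

shots = [random.gauss(true_dist_mm, sigma_mm) for _ in range(n)]
avg = sum(shots) / n
# A single shot is off by roughly sigma (2 mm); the 100-shot average
# should be off by roughly sigma / sqrt(100) = 0.2 mm.
print(abs(avg - true_dist_mm))
```

Averaging suppresses only the random component; the systematic component is why the absolute baseline calibration is still needed.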

  3. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  4. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  5. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  6. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  7. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  8. [Status of β-blocker use and heart rate control in Chinese patients with stable coronary artery disease].

    PubMed

    Sun, Yihong; Yu, Jinming; Hu, Dayi

    2016-01-01

    To observe the current status of β-blocker (BB) use and heart rate control in Chinese patients with stable coronary artery disease (SCAD), based on subgroup data of the prospective observational longitudinal registry of patients with stable coronary artery disease (CLARIFY). The CLARIFY study is an international prospective observational registry of outpatients with SCAD. From November 2009 to July 2010, patients with SCAD were enrolled, and demographic information, clinical indicators, medication and blood flow reconstruction data were collected. Patients were divided into three mutually exclusive categories by baseline pulse palpation heart rate (HR): ≤60 beats per minute (bpm) (n=397), 61-69 bpm (n=782), and ≥70 bpm (n=1 443). The patients were also divided into groups taking or not taking BB. The aim of the present study is to describe and analyze the current status of, and factors related to, HR control and BB use in the Chinese subgroup of CLARIFY. A total of 2 622 patients were enrolled from 56 centers across China. The mean age was (63.6±10.3) years, with 75.6% (1 983) male patients; 55.0% (1 443) of patients had HR≥70 bpm. Mean HR measured by electrocardiogram (ECG) was (69.4±10.2) bpm, and 50.9% (1 334 cases) of patients had a history of myocardial infarction (MI). A total of 21.9% (575 cases) of patients had anginal symptoms; coronary angiography was performed in 88.8% (2 327 cases) of the patients. 76.2% (1 997 cases) of patients were treated with BB (any molecule and any dose), 2.7% (70 cases) with digoxin or derivatives, 3.9% (103 cases) with verapamil or diltiazem, 1.8% (47 cases) with amiodarone or dronedarone, and 0.1% (2 cases) received ivabradine. BB use was similar among the 3 HR groups (P>0.05). The independent risk factors associated with HR≥70 bpm were diabetes (OR=1.31), current smoking (OR=1.57), chronic heart failure (CHF) with NYHA Ⅲ (OR=2.13), and increased diastolic blood pressure (OR=1.30). Conversely, high physical activity (OR=0.61), former smoking (OR=0.76) and a history of percutaneous coronary intervention (PCI; OR=0.80) were associated with a lower risk of HR≥70 bpm (all P<0.05). The independent risk factors associated with non-BB use were older age (OR=1.11, 95%CI 1.01-1.47, P=0.005), lower diastolic blood pressure (OR=1.47, 95%CI 1.32-1.68, P=0.012), no history of MI (OR=1.86, 95%CI 1.43-2.44, P<0.001) or PCI (OR=1.94, 95%CI 1.55-3.73, P<0.001), and asthma/chronic obstructive pulmonary disease (OR=1.32, 95%CI 1.15-1.99, P<0.001). A total of 76.2% of Chinese SCAD patients received BB medication, but more than half of them did not reach the optimal HR. Clinical characteristics including diabetes, current smoking, CHF, increased diastolic blood pressure and no PCI were associated with poorly controlled HR (≥70 bpm). More efforts, including adjusting the type and dose of heart-rate-lowering drugs, are needed to achieve optimal HR control in Chinese SCAD patients. Clinical Trial Registry: International Standard Randomized Controlled Trial, ISRCTN43070564.

  9. Comparison of Penalty Functions for Sparse Canonical Correlation Analysis

    PubMed Central

    Chalise, Prabhakar; Fridley, Brooke L.

    2011-01-01

    Canonical correlation analysis (CCA) is a widely used multivariate method for assessing the association between two sets of variables. However, when the number of variables far exceeds the number of subjects, such as in the case of large-scale genomic studies, the traditional CCA method is not appropriate. In addition, when the variables are highly correlated, the sample covariance matrices become unstable or undefined. To overcome these two issues, sparse canonical correlation analysis (SCCA) for multiple data sets has been proposed using a Lasso type of penalty. However, these methods do not have direct control over the sparsity of the solution. An additional step that uses the Bayesian Information Criterion (BIC) has also been suggested to further filter out unimportant features. In this paper, a comparison of four penalty functions (Lasso, Elastic-net, SCAD and Hard-threshold) for SCCA, with and without the BIC filtering step, has been carried out using both real and simulated genotypic and mRNA expression data. This study indicates that the SCAD penalty with the BIC filter would be the preferable penalty function for applying SCCA to genomic data. PMID:21984855
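Under an orthonormal design, the penalties compared in this record reduce to scalar thresholding rules, which makes their qualitative differences easy to see. A hedged sketch using the Fan-Li forms for soft, hard, and SCAD thresholding, with the conventional a = 3.7 assumed:

```python
import numpy as np

def soft(z, lam):      # LASSO: shrinks every surviving entry by lam
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard(z, lam):      # hard-threshold: keep-or-kill, no shrinkage
    return np.where(np.abs(z) > lam, z, 0.0)

def scad(z, lam, a=3.7):  # SCAD: LASSO-like near zero, unbiased for large z
    z = np.asarray(z, dtype=float)
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return np.where(np.abs(z) <= 2 * lam, soft(z, lam),
                    np.where(np.abs(z) <= a * lam, mid, z))

z = np.array([0.5, 1.5, 10.0])
print(soft(z, 1.0))  # entries 0, 0.5, 9: the large entry is still biased by lam
print(scad(z, 1.0))  # entries 0, 0.5, 10: the large entry is returned unshrunken
```

This unbiasedness on strong signals is one reason SCAD can outperform Lasso-type penalties in sparse settings such as SCCA.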

  10. Data sets for author name disambiguation: an empirical analysis and a new resource.

    PubMed

    Müller, Mark-Christoph; Reitz, Florian; Roy, Nicolas

    2017-01-01

Data sets of publication meta data with manually disambiguated author names play an important role in current author name disambiguation (AND) research. We review the most important data sets used so far, and compare their respective advantages and shortcomings. From the results of this review, we derive a set of general requirements for future AND data sets. These include both trivial requirements, like absence of errors and preservation of author order, and more substantial ones, like full disambiguation and adequate representation of publications with a small number of authors and highly variable author names. On the basis of these requirements, we create and make publicly available a new AND data set, SCAD-zbMATH. Both the quantitative analysis of this data set and the results of our initial AND experiments with a naive baseline algorithm show the SCAD-zbMATH data set to be considerably different from existing ones. We consider it a useful new resource that will challenge the state of the art in AND and benefit the AND research community.

  11. Myxococcus CsgA, Drosophila Sniffer, and human HSD10 are cardiolipin phospholipases.

    PubMed

    Boynton, Tye O'Hara; Shimkets, Lawrence Joseph

    2015-09-15

    Myxococcus xanthus development requires CsgA, a member of the short-chain alcohol dehydrogenase (SCAD) family of proteins. We show that CsgA and SocA, a protein that can replace CsgA function in vivo, oxidize the 2'-OH glycerol moiety on cardiolipin and phosphatidylglycerol to produce diacylglycerol (DAG), dihydroxyacetone, and orthophosphate. A lipid extract enriched in DAGs from wild-type cells initiates development and lipid body production in a csgA mutant to bypass the mutational block. This novel phospholipase C-like reaction is widespread. SCADs that prevent neurodegenerative disorders, such as Drosophila Sniffer and human HSD10, oxidize cardiolipin with similar kinetic parameters. HSD10 exhibits a strong preference for cardiolipin with oxidized fatty acids. This activity is inhibited in the presence of the amyloid β peptide. Three HSD10 variants associated with neurodegenerative disorders are inactive with cardiolipin. We suggest that HSD10 protects humans from reactive oxygen species by removing damaged cardiolipin before it induces apoptosis. © 2015 Boynton and Shimkets; Published by Cold Spring Harbor Laboratory Press.

  12. Variable selection in subdistribution hazard frailty models with competing risks data

    PubMed Central

    Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo

    2014-01-01

The proportional subdistribution hazards model (i.e. the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872

  13. Characterizing Accuracy and Precision of Glucose Sensors and Meters

    PubMed Central

    2014-01-01

There is a need for a method to describe precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a “Glucose Precision Profile” showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test – comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of the mean ARD (MARD) and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in that range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
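The per-pair error statistics described above are straightforward to compute. A minimal sketch, using hypothetical paired readings in mg/dL (the values and function names are ours, for illustration only):

```python
def ard(test, comparator):
    """Absolute relative deviation, as a percentage of the comparator value."""
    return abs(test - comparator) / comparator * 100.0

def mard(pairs):
    """Mean ARD over a list of (test, comparator) data pairs."""
    return sum(ard(t, c) for t, c in pairs) / len(pairs)

# hypothetical sensor readings paired with reference values (mg/dL)
pairs = [(110.0, 100.0), (95.0, 100.0), (52.0, 50.0)]
print(mard(pairs))  # (10 + 5 + 4) / 3 = 6.33...
```

Note that because ARD divides by the reference value, a fixed absolute error translates into a much larger ARD at hypoglycemic levels, which is one reason MARD estimates are less stable in that range.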

  14. WISC-III cognitive profiles in children with developmental dyslexia: specific cognitive disability and diagnostic utility.

    PubMed

    Moura, Octávio; Simões, Mário R; Pereira, Marcelino

    2014-02-01

This study analysed the usefulness of the Wechsler Intelligence Scale for Children-Third Edition in identifying specific cognitive impairments that are linked to developmental dyslexia (DD) and the diagnostic utility of the most common profiles in a sample of 100 Portuguese children (50 dyslexic and 50 normal readers) between the ages of 8 and 12 years. Children with DD exhibited significantly lower scores in the Verbal Comprehension Index (except the Vocabulary subtest), Freedom from Distractibility Index (FDI) and Processing Speed Index subtests, with larger effect sizes than normal readers in Information, Arithmetic and Digit Span. The Verbal-Performance IQ discrepancies, the Bannatyne pattern, and the presence of the FDI, ACID (Arithmetic, Coding, Information and Digit Span subtests) and SCAD (Symbol Search, Coding, Arithmetic and Digit Span subtests) profiles (full or partial) among the lowest subtests revealed low diagnostic utility. However, the receiver operating characteristic curve and optimal cut-off score analyses of the composite ACID, FDI and SCAD profile scores showed moderate accuracy in correctly discriminating dyslexic readers from normal ones. These results suggested that, in the context of a comprehensive assessment, the Wechsler Intelligence Scale for Children-Third Edition provides some useful information about the presence of specific cognitive disabilities in DD. Practitioner Points. Children with developmental dyslexia revealed significant deficits in the Wechsler Intelligence Scale for Children-Third Edition subtests that rely on verbal abilities, processing speed and working memory. The composite ACID (Arithmetic, Coding, Information and Digit Span subtests), FDI and SCAD (Symbol Search, Coding, Arithmetic and Digit Span subtests) profile scores showed moderate accuracy in correctly discriminating dyslexics from normal readers. 
Wechsler Intelligence Scale for Children-Third Edition may provide some useful information about the presence of specific cognitive disabilities in developmental dyslexia. Copyright © 2013 John Wiley & Sons, Ltd.

  15. AMSSM Position Statement on Cardiovascular Preparticipation Screening in Athletes: Current evidence, knowledge gaps, recommendations and future directions.

    PubMed

    Drezner, Jonathan A; O'Connor, Francis G; Harmon, Kimberly G; Fields, Karl B; Asplund, Chad A; Asif, Irfan M; Price, David E; Dimeff, Robert J; Bernhardt, David T; Roberts, William O

    2017-02-01

    Cardiovascular screening in young athletes is widely recommended and routinely performed prior to participation in competitive sports. While there is general agreement that early detection of cardiac conditions at risk for sudden cardiac arrest and death (SCA/D) is an important objective, the optimal strategy for cardiovascular screening in athletes remains an issue of considerable debate. At the centre of the controversy is the addition of a resting ECG to the standard preparticipation evaluation using history and physical examination. The American Medical Society for Sports Medicine (AMSSM) formed a task force to address the current evidence and knowledge gaps regarding preparticipation cardiovascular screening in athletes from the perspective of a primary care sports medicine physician. The absence of definitive outcome-based evidence at this time precludes AMSSM from endorsing any single or universal cardiovascular screening strategy for all athletes, including legislative mandates. This statement presents a new paradigm to assist the individual physician in assessing the most appropriate cardiovascular screening strategy unique to their athlete population, community needs and resources. The decision to implement a cardiovascular screening programme, with or without the addition of ECG, necessitates careful consideration of the risk of SCA/D in the targeted population and the availability of cardiology resources and infrastructure. Importantly, it is the individual physician's assessment in the context of an emerging evidence base that the chosen model for early detection of cardiac disorders in the specific population provides greater benefit than harm. AMSSM is committed to advancing evidenced-based research and educational initiatives that will validate and promote the most efficacious strategies to foster safe sport participation and reduce SCA/D in athletes. Published by the BMJ Publishing Group Limited. 

  16. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

Properties of robust estimates of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimate of the average Gini difference have asymptotically normal distributions and bounded influence functions, and are B-robust estimates; hence, unlike the standard deviation, they are protected against the presence of outliers in the sample. Results of a comparison of scale-parameter estimates are given for a Gaussian model with contamination. An adaptive variant of the modified estimate of the average Gini difference is also considered.
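The robustness claim is easy to demonstrate numerically. A minimal sketch (our own illustration; the 1.4826 factor is the standard consistency constant that makes the median of absolute deviations comparable to the standard deviation under a Gaussian model):

```python
import random
import statistics

def mad(xs, scale=1.4826):
    """Median of absolute deviations from the median.

    The 1.4826 factor makes the estimate consistent for the standard
    deviation of a Gaussian sample.
    """
    m = statistics.median(xs)
    return scale * statistics.median([abs(x - m) for x in xs])

random.seed(0)
clean = [random.gauss(0.0, 1.0) for _ in range(1000)]
contaminated = clean + [100.0] * 20  # ~2% gross outliers

print(statistics.stdev(contaminated))  # blown up by the outliers
print(mad(contaminated))               # stays close to 1
```

The bounded influence function means each additional outlier shifts the MAD by at most a bounded amount, whereas its effect on the standard deviation grows quadratically with its distance from the bulk of the data.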

  17. Patient-specific positioning guides for total knee arthroplasty: no significant difference between final component alignment and pre-operative digital plan except for tibial rotation.

    PubMed

    Boonen, Bert; Schotanus, Martijn G M; Kerens, Bart; Hulsmans, Frans-Jan; Tuinebreijer, Wim E; Kort, Nanne P

    2017-09-01

To assess whether there is a significant difference between the alignment of the individual femoral and tibial components (in the frontal, sagittal and horizontal planes) as calculated pre-operatively (digital plan) and the alignment actually achieved in vivo with the use of patient-specific positioning guides (PSPGs) for TKA. It was hypothesised that there would be no difference between the post-operative implant position and the pre-operative digital plan. Twenty-six patients were included in this non-inferiority trial. Software permitted matching of the pre-operative MRI scan (and therefore the calculated prosthesis position) to a pre-operative CT scan and then to a post-operative full-leg CT scan to determine deviations from the pre-operative planning in all three anatomical planes. For the femoral component, mean absolute deviations from planning were 1.8° (SD 1.3), 2.5° (SD 1.6) and 1.6° (SD 1.4) in the frontal, sagittal and transverse planes, respectively. For the tibial component, mean absolute deviations from planning were 1.7° (SD 1.2), 1.7° (SD 1.5) and 3.2° (SD 3.6) in the frontal, sagittal and transverse planes, respectively. The absolute mean deviation from the planned mechanical axis was 1.9°. The a priori null hypothesis for equivalence testing (that the difference from planning is >3° or <−3°) was rejected for all comparisons except the tibial transverse plane. PSPG was able to adequately reproduce the pre-operative plan in all planes except for tibial rotation in the transverse plane. Possible explanations for outliers are discussed and highlight the importance of adequately training surgeons before they start using PSPG in their day-to-day practice. Prospective cohort study, Level II.

  18. Adjusting head circumference for covariates in autism: clinical correlates of a highly heritable continuous trait.

    PubMed

    Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J; Murtha, Michael T; Hus, Vanessa; Lowe, Jennifer K; Willsey, A Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E; Ledbetter, David H; Lord, Catherine; Mane, Shrikant M; Lese Martin, Christa; Martin, Donna M; Morrow, Eric M; Walsh, Christopher A; Sutcliffe, James S; State, Matthew W; Devlin, Bernie; Cook, Edwin H; Kim, Soo-Jeong

    2013-10-15

Brain development follows a different trajectory in children with autism spectrum disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. Gender, age, height, weight, genetic ancestry, and ASD status were significant predictors of HC (estimate of the ASD effect = 0.2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait, and population norms for HC would be far more accurate if covariates including genetic ancestry, height, and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. © 2013 Society of Biological Psychiatry.

  19. Adjusting head circumference for covariates in autism: clinical correlates of a highly heritable continuous trait

    PubMed Central

    Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J.; Murtha, Michael T.; Hus, Vanessa; Lowe, Jennifer K.; Willsey, A. Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W.; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E.; Ledbetter, David H.; Lord, Catherine; Mane, Shrikant M.; Martin, Christa Lese; Martin, Donna M.; Morrow, Eric M.; Walsh, Christopher A.; Sutcliffe, James S.; State, Matthew W.; Devlin, Bernie; Cook, Edwin H.; Kim, Soo-Jeong

    2013-01-01

BACKGROUND Brain development follows a different trajectory in children with Autism Spectrum Disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. METHODS We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. RESULTS Gender, age, height, weight, genetic ancestry and ASD status were significant predictors of HC (estimate of the ASD effect = 0.2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. CONCLUSIONS Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait and population norms for HC would be far more accurate if covariates including genetic ancestry, height and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. PMID:23746936

  20. Detection and evaluation of DNA methylation markers found at SCGN and KLF14 loci to estimate human age.

    PubMed

    Alghanim, Hussain; Antunes, Joana; Silva, Deborah Soares Bispo Santos; Alho, Clarice Sampaio; Balamurugan, Kuppareddi; McCord, Bruce

    2017-11-01

Recent developments in the analysis of epigenetic DNA methylation patterns have demonstrated that certain genetic loci show a linear correlation with chronological age. It is the goal of this study to identify a new set of epigenetic methylation markers for the forensic estimation of human age. A total of 27 CpG sites at three genetic loci, SCGN, DLX5 and KLF14, were examined to evaluate the correlation of their methylation status with age. These sites were evaluated using 72 blood samples and 91 saliva samples collected from volunteers with ages ranging from 5 to 73 years. DNA was bisulfite modified, followed by PCR amplification and pyrosequencing to determine the level of DNA methylation at each CpG site. In this study, certain CpG sites in the SCGN and KLF14 loci showed methylation levels that were correlated with chronological age; however, the tested CpG sites in DLX5 did not show a correlation with age. Using a 52-saliva sample training set, two age-predictor models were developed by means of multivariate linear regression analysis. The two models performed similarly, with a single-locus model explaining 85% of the age variance at a mean absolute deviation of 5.8 years and a dual-locus model explaining 84% of the age variance with a mean absolute deviation of 6.2 years. In the validation set, the mean absolute deviation was measured to be 8.0 years and 7.1 years for the single- and dual-locus models, respectively. Another age-predictor model was also developed using a 40-blood sample training set that accounted for 71% of the age variance. This model gave a mean absolute deviation of 6.6 years for the training set and 10.3 years for the validation set. The results indicate that specific CpGs in SCGN and KLF14 can be used as potential epigenetic markers to estimate age using saliva and blood specimens. 
These epigenetic markers could provide important information in cases where the determination of a suspect's age is critical in developing investigative leads. Copyright © 2017. Published by Elsevier B.V.
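As an illustration of regression-based age prediction evaluated by mean absolute deviation, here is a minimal single-marker sketch (our own code with hypothetical methylation fractions; the study's actual models are multivariate and fit to real training data):

```python
def fit_line(meth, age):
    """Ordinary least squares fit: age ~ intercept + slope * methylation."""
    n = len(meth)
    mx = sum(meth) / n
    my = sum(age) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(meth, age)) \
        / sum((x - mx) ** 2 for x in meth)
    intercept = my - slope * mx
    return intercept, slope

def mean_abs_dev(meth, age, intercept, slope):
    """Mean absolute deviation of predicted vs. chronological age, in years."""
    preds = [intercept + slope * x for x in meth]
    return sum(abs(p - a) for p, a in zip(preds, age)) / len(age)

# hypothetical training data: methylation fraction at one CpG vs. age (years)
meth = [0.20, 0.35, 0.50, 0.62, 0.75]
age = [10.0, 25.0, 40.0, 52.0, 68.0]
b0, b1 = fit_line(meth, age)
print(mean_abs_dev(meth, age, b0, b1))  # small residual for near-linear data
```

In practice, as the abstract notes, the model is fit on a training set and the mean absolute deviation is then re-evaluated on a held-out validation set, where it is typically larger.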

  1. A critical assessment of two types of personal UV dosimeters.

    PubMed

    Seckmeyer, Gunther; Klingebiel, Marcus; Riechelmann, Stefan; Lohse, Insa; McKenzie, Richard L; Liley, J Ben; Allen, Martin W; Siani, Anna-Maria; Casale, Giuseppe R

    2012-01-01

Doses of erythemally weighted irradiances derived from polysulphone (PS) and electronic ultraviolet (EUV) dosimeters have been compared with measurements obtained using a reference spectroradiometer. PS dosimeters showed mean absolute deviations of 26% with a maximum deviation of 44%, while the calibrated EUV dosimeters showed mean absolute deviations of 15% (maximum 33%) around noon during several test days in the northern hemisphere autumn. In the case of EUV dosimeters, measurements with various cut-off filters showed that part of the deviation from the CIE erythema action spectrum was due to a small but significant sensitivity to visible radiation that varies between devices and which may be avoided by careful preselection. Usually, the method of calibrating UV sensors by direct comparison to a reference instrument leads to reliable results. However, in some circumstances the quality of measurements made with simple sensors may be over-estimated. In the extreme case, a simple pyranometer can be used as a UV instrument, providing acceptable results for cloudless skies but very poor results under cloudy conditions. It is concluded that while UV dosimeters are useful for their design purpose, namely to estimate personal UV exposures, they should not be regarded as an inexpensive replacement for meteorological grade instruments. © 2011 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2011 The American Society of Photobiology.

  2. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  3. Fish consumption pattern among adults of different ethnics in Peninsular Malaysia

    PubMed Central

    Ahmad, Nurul Izzah; Wan Mahiyuddin, Wan Rozita; Tengku Mohamad, Tengku Rozaina; Ling, Cheong Yoon; Daud, Siti Fatimah; Hussein, Nasriyah Che; Abdullah, Nor Aini; Shaharudin, Rafiza; Sulaiman, Lokman Hakim

    2016-01-01

Background Understanding different patterns of fish consumption is an important component for risk assessment of contaminants in fish. A few studies on food consumption have been conducted in Malaysia, but none of them focused specifically on fish consumption. The objectives of this study were to document the meal pattern among the three major ethnic groups in Malaysia with respect to fish/seafood consumption, identify the most frequently consumed fish and cooking methods, and examine the influence of demographic factors on the pattern of fish consumption among study subjects. Methods A cross-sectional survey was conducted between February 2008 and May 2009 to investigate patterns of fish consumption among Malaysian adults in Peninsular Malaysia. Adults aged 18 years and above were randomly selected, and fish consumption data were collected using a 3-day prospective food diary. Results A total of 2,675 subjects, comprising male (44.2%) and female (55.7%) participants from the major ethnic groups (Malays, 76.9%; Chinese, 14.7%; Indians, 8.3%) with a mean age of 43.4±16.2 years, were involved in this study. The results revealed the 10 most frequently consumed marine fish in descending order: Indian mackerel, anchovy, yellowtail and yellow-stripe scads, tuna, sardines, torpedo scad, Indian and short-fin scads, pomfret, red snapper, and king mackerel. Prawn and squid were also among the seafood most preferred by study subjects. The most frequently consumed freshwater fish were freshwater catfish and snakehead. The most preferred cooking style among Malaysians was deep-fried fish, followed by fish cooked in thick and/or thin chili gravy, fish curry, and fish cooked with coconut milk mixed with other spices and flavorings. Overall, Malaysians consumed 168 g/day of fish, with consumption among Malays (175±143 g/day) significantly higher (p<0.001) than in the other two ethnic groups (Chinese=152±133 g/day, Indians=136±141 g/day). 
Conclusion Fish consumption was significantly associated with ethnicity, age, marital status, residential area, and years of education of adults in Peninsular Malaysia, and the data collected are beneficial for the purpose of health risk assessment on the intake of contaminants through fish/seafood consumption. PMID:27534846

  4. Fish consumption pattern among adults of different ethnics in Peninsular Malaysia.

    PubMed

    Ahmad, Nurul Izzah; Wan Mahiyuddin, Wan Rozita; Tengku Mohamad, Tengku Rozaina; Ling, Cheong Yoon; Daud, Siti Fatimah; Hussein, Nasriyah Che; Abdullah, Nor Aini; Shaharudin, Rafiza; Sulaiman, Lokman Hakim

    2016-01-01

Understanding different patterns of fish consumption is an important component for risk assessment of contaminants in fish. A few studies on food consumption have been conducted in Malaysia, but none of them focused specifically on fish consumption. The objectives of this study were to document the meal pattern among the three major ethnic groups in Malaysia with respect to fish/seafood consumption, identify the most frequently consumed fish and cooking methods, and examine the influence of demographic factors on the pattern of fish consumption among study subjects. A cross-sectional survey was conducted between February 2008 and May 2009 to investigate patterns of fish consumption among Malaysian adults in Peninsular Malaysia. Adults aged 18 years and above were randomly selected, and fish consumption data were collected using a 3-day prospective food diary. A total of 2,675 subjects, comprising male (44.2%) and female (55.7%) participants from the major ethnic groups (Malays, 76.9%; Chinese, 14.7%; Indians, 8.3%) with a mean age of 43.4±16.2 years, were involved in this study. The results revealed the 10 most frequently consumed marine fish in descending order: Indian mackerel, anchovy, yellowtail and yellow-stripe scads, tuna, sardines, torpedo scad, Indian and short-fin scads, pomfret, red snapper, and king mackerel. Prawn and squid were also among the seafood most preferred by study subjects. The most frequently consumed freshwater fish were freshwater catfish and snakehead. The most preferred cooking style among Malaysians was deep-fried fish, followed by fish cooked in thick and/or thin chili gravy, fish curry, and fish cooked with coconut milk mixed with other spices and flavorings. Overall, Malaysians consumed 168 g/day of fish, with consumption among Malays (175±143 g/day) significantly higher (p<0.001) than in the other two ethnic groups (Chinese=152±133 g/day, Indians=136±141 g/day). 
Fish consumption was significantly associated with ethnicity, age, marital status, residential area, and years of education of adults in Peninsular Malaysia, and the data collected are beneficial for the purpose of health risk assessment on the intake of contaminants through fish/seafood consumption.

  5. Relation of fish oil supplementation to markers of atherothrombotic risk in patients with cardiovascular disease not receiving lipid-lowering therapy.

    PubMed

    Franzese, Christopher J; Bliden, Kevin P; Gesheff, Martin G; Pandya, Shachi; Guyer, Kirk E; Singla, Anand; Tantry, Udaya S; Toth, Peter P; Gurbel, Paul A

    2015-05-01

    Fish oil supplementation (FOS) is known to have cardiovascular benefits. However, the effects of FOS on thrombosis are incompletely understood. We sought to determine if the use of FOS is associated with lower indices of atherothrombotic risk in patients with suspected coronary artery disease (sCAD). This is a subgroup analysis of consecutive patients with sCAD (n=600) enrolled in the Multi-Analyte, Thrombogenic, and Genetic Markers of Atherosclerosis study. Patients on FOS were compared with patients not on FOS. Lipid profile was determined by vertical density gradient ultracentrifugation (n=520), eicosapentaenoic acid+docosahexaenoic acid was measured by gas chromatography (n=437), and AtherOx testing was performed by immunoassay (n=343). Thromboelastography (n=419), ADP- and collagen-induced platelet aggregation (n=137), and urinary 11-dehydrothromboxane B2 levels (n=259) were performed immediately before elective coronary angiography. In the total population, FOS was associated with higher eicosapentaenoic acid+docosahexaenoic acid content (p<0.001), lower triglycerides (p=0.04), total very low-density lipoprotein cholesterol (p=0.002), intermediate-density lipoprotein cholesterol (p=0.02), and AtherOx levels (p=0.02) but not in patients on lipid-lowering therapy. Patients not on lipid-lowering therapy taking FOS had lower very low-density lipoprotein cholesterol, intermediate-density lipoprotein cholesterol, remnant lipoproteins, triglycerides, low-density lipoprotein cholesterol, AtherOx levels, collagen-induced platelet aggregation, thrombin-induced platelet-fibrin clot strength, and shear elasticity (p<0.03 for all). In clopidogrel-treated patients, there was no difference in ADP-induced aggregation between FOS groups. Patients on FOS had lower urinary 11-dehydrothromboxane B2 levels regardless of lipid-lowering therapy (p<0.04). 
In conclusion, the findings of this study support the potential benefit of FOS for atherothrombotic risk reduction in sCAD with the greatest benefit in patients not receiving lipid-lowering therapy. Future prospective studies to compare FOS with lipid-lowering therapy and to assess the independent effects of FOS on thrombogenicity are needed. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
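These requirements can be checked numerically. A minimal sketch (our own illustration, assuming Gaussian errors, defining the total deviation as the 95% maximum absolute deviation, and taking the worst case as an undetected systematic error of half the combined SD):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def max_abs_dev_95(bias, sd):
    """Smallest D with P(|X| <= D) = 0.95 for X ~ N(bias, sd), by bisection."""
    lo, hi = 0.0, abs(bias) + 10.0 * sd
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        p = norm_cdf((mid - bias) / sd) - norm_cdf((-mid - bias) / sd)
        lo, hi = (mid, hi) if p < 0.95 else (lo, mid)
    return hi

sigma_i = 1.0                          # intra-individual (biological) SD
sigma_a = 0.15 * sigma_i               # allowed analytical SD
sigma_c = math.hypot(sigma_i, sigma_a) # combined SD
worst_bias = 0.5 * sigma_c             # largest undetected systematic error

d_biology = max_abs_dev_95(0.0, sigma_i)      # deviation from biology alone
d_total = max_abs_dev_95(worst_bias, sigma_c) # with worst-case analytical error
print(d_total / d_biology - 1.0)  # ≈ 0.125, i.e. the ~12% margin quoted above
```

Under these assumptions the worst-case increase comes out near the "about 12%" margin stated in the abstract, with the undetected bias term contributing far more than the 0.15-fold widening of the random scatter.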

  7. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    NASA Astrophysics Data System (ADS)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

    Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as for biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation were almost identical, with average absolute percent deviations of less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities and performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for the densities of jatropha oil and its methyl esters are also proposed in this study.
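The Spencer–Danner modified Rackett estimate referred to above can be sketched as follows. The critical properties and Rackett parameter below are illustrative placeholders of roughly the right order of magnitude for a fatty acid methyl ester, not the fitted jatropha values from the paper:

```python
R = 8.314  # universal gas constant, J/(mol K)

def rackett_density(T, Tc, Pc, Z_RA, M):
    """Saturated-liquid density (kg/m^3) from the modified Rackett equation:
    V = (R*Tc/Pc) * Z_RA**(1 + (1 - T/Tc)**(2/7)), with V in m^3/mol."""
    Tr = T / Tc
    V = (R * Tc / Pc) * Z_RA ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0))
    return M / V

# Placeholder inputs: T, Tc in K; Pc in Pa; M in kg/mol (NOT fitted values)
rho = rackett_density(T=313.15, Tc=764.0, Pc=1.28e6, Z_RA=0.24, M=0.2965)
# rho lands near typical biodiesel densities, roughly 0.85 g/cm^3
```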

  8. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was first estimated using replica analysis by S. Ciliberti et al. [Eur. Phys. J. B 57, 175 (2007)]; however, that work did not provide an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  9. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was first estimated using replica analysis by S. Ciliberti et al. [Eur. Phys. J. B 57, 175 (2007)]; however, that work did not provide an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  10. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from among different portfolios containing a large number of securities. The past records of each security alone do not guarantee future returns. Because many uncertain factors directly or indirectly influence the stock market, and some newer stock markets do not have enough historical data, experts' expectations and experience must be combined with past records to build an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred because it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As the model reduces to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO), a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems, is used to solve the portfolio selection problem. Data from the BSE are used for illustration.
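The semi-absolute-deviation risk measure underlying such models can be evaluated directly from scenario returns. The sketch below uses hypothetical crisp returns (not the paper's fuzzy-random ones) and fixed weights; it is the piecewise-linear form of this measure that lets the selection problem be written as an LPP:

```python
# Hypothetical scenario returns for 3 securities over 5 periods
R = [
    [0.02, -0.01, 0.03, 0.00, 0.01],   # security 1
    [0.01, 0.02, -0.02, 0.03, 0.01],   # security 2
    [0.00, 0.01, 0.01, -0.01, 0.02],   # security 3
]
w = [0.5, 0.3, 0.2]  # portfolio weights (sum to 1)

T = len(R[0])
# portfolio return realised in each period
rp = [sum(w[i] * R[i][t] for i in range(len(w))) for t in range(T)]
mean_rp = sum(rp) / T
# semi-absolute deviation: average shortfall below the mean return
sad = sum(max(0.0, mean_rp - r) for r in rp) / T
# (for any return series, sad equals half the mean absolute deviation)
```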

  11. Absolute Magnitude Calibration for Dwarfs Based on the Colour-Magnitude Diagrams of Galactic Clusters

    NASA Astrophysics Data System (ADS)

    Karaali, S.; Gökçe, E. Yaz; Bilir, S.; Güçtekin, S. Tunçel

    2014-07-01

    We present two absolute magnitude calibrations for dwarfs based on colour-magnitude diagrams of Galactic clusters. The combination of the Mg absolute magnitudes of the dwarf fiducial sequences of the clusters M92, M13, M5, NGC 2420, M67, and NGC 6791 with the corresponding metallicities provides an absolute magnitude calibration for a given (g - r)0 colour. The calibration is defined in the colour interval 0.25 ≤ (g - r)0 ≤ 1.25 mag and covers the metallicity interval -2.15 ≤ [Fe/H] ≤ +0.37 dex. The absolute magnitude residuals obtained by applying the procedure to another set of Galactic clusters lie in the interval -0.15 ≤ ΔMg ≤ +0.12 mag. The mean and standard deviation of the residuals are <ΔMg> = -0.002 and σ = 0.065 mag, respectively. The calibration of the MJ absolute magnitude in terms of metallicity is carried out using the fiducial sequences of the clusters M92, M13, 47 Tuc, NGC 2158, and NGC 6791. It is defined in the colour interval 0.90 ≤ (V - J)0 ≤ 1.75 mag and covers the same metallicity interval as the Mg calibration. The absolute magnitude residuals obtained by applying the procedure to the cluster M5 ([Fe/H] = -1.40 dex) and to 46 near-solar-metallicity field stars (-0.45 ≤ [Fe/H] ≤ +0.35 dex) lie in the interval -0.29 to +0.35 mag. However, the range of 87% of them is rather shorter, -0.20 ≤ ΔMJ ≤ +0.20 mag. The mean and standard deviation of all residuals are <ΔMJ> = 0.05 and σ = 0.13 mag, respectively. The derived relations are applicable to stars older than 4 Gyr for the Mg calibration, and older than 2 Gyr for the MJ calibration. The cited limits are the ages of the youngest calibration clusters in the two systems.

  12. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. 
The direction of the displacement is perpendicular to the axis about which the mirror is slightly tilted. Hence, one can determine the amount and direction of tilt from the coordinates of the target image on the viewing plane.
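Numerically, recovering the tilt from the image coordinates reduces to two divisions. A minimal sketch under the small-angle approximation; the factor of 2 (reflection from the mirror doubles the beam deviation) is the standard autocollimator relation, assumed here rather than stated in the abstract:

```python
import math

def tilt_from_image(x_mm, y_mm, focal_mm):
    """Recover the mirror tilt from the target-image displacement on the
    viewing plane. Reflection doubles the beam deviation, so a small tilt
    theta displaces the image by about 2*f*theta."""
    theta_x = x_mm / (2.0 * focal_mm)   # tilt giving the x shift (rad)
    theta_y = y_mm / (2.0 * focal_mm)   # tilt giving the y shift (rad)
    magnitude = math.hypot(theta_x, theta_y)   # total tilt, rad
    direction = math.atan2(theta_y, theta_x)   # orientation of the tilt
    return theta_x, theta_y, magnitude, direction

# a 0.10 mm image shift with a 500 mm focal length -> 1e-4 rad (~20.6 arcsec)
tx, ty, mag, _ = tilt_from_image(0.10, 0.00, focal_mm=500.0)
```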

  13. Resting-State Oscillatory Activity in Children Born Small for Gestational Age: An MEG Study

    PubMed Central

    Boersma, Maria; de Bie, Henrica M. A.; Oostrom, Kim J.; van Dijk, Bob W.; Hillebrand, Arjan; van Wijk, Bernadette C. M.; Delemarre-van de Waal, Henriëtte A.; Stam, Cornelis J.

    2013-01-01

    Growth restriction in utero during a period that is critical for normal growth of the brain has previously been associated with deviations in cognitive abilities and with anatomical and functional changes in the brain. We measured magnetoencephalography (MEG) in 4- to 7-year-old children to test whether children born small for gestational age (SGA) show deviations in resting-state brain oscillatory activity. Children born SGA with postnatally spontaneous catch-up growth [SGA+; six boys, seven girls; mean age 6.3 year (SD = 0.9)] and children born appropriate for gestational age [AGA; seven boys, three girls; mean age 6.0 year (SD = 1.2)] participated in a resting-state MEG study. We calculated absolute and relative power spectra and used non-parametric statistics to test for group differences. SGA+ and AGA born children showed no significant differences in absolute and relative power, except for reduced absolute gamma band power in SGA+ children. At the time of the MEG investigation, SGA+ children showed significantly lower head circumference (HC) and a trend toward lower IQ; however, there was no association of HC or IQ with absolute or relative power. Except for reduced absolute gamma band power, our findings suggest normal brain activity patterns at school age in a group of children born SGA in whom spontaneous catch-up growth of bodily length occurred after birth. Although previous findings suggest that being born SGA alters brain oscillatory activity early in neonatal life, we show that these neonatal alterations do not persist at early school age when spontaneous postnatal catch-up growth occurs after birth. PMID:24068993

  14. A Review of Strategic Mobility Models and Analysis

    DTIC Science & Technology

    1991-01-01

    Logistics Directorate of the Joint Staff (JS/J-4), specifically by the Studies, Concepts, and Analysis Division (SCAD), which conducts long-range...their analysis objectives. This study was designed to assist the Logistics Directorate of the Joint Staff (JS/J-4) to understand and improve the...This study concentrated on resource planning, which is the type of planning performed by the Logistics Directorate's Studies, Concepts, and Analysis

  15. Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

    PubMed

    Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P

    2017-01-01

    Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, the problem is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms the other cutting-edge regularization methods, including SCAD and MC+, in simulations. When it is applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software L0ADRIDGE in MATLAB is available at https://github.com/liuzqx/L0adridge.
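The adaptive-ridge idea behind such L0 approximations can be sketched generically: ridge weights w_j = 1/(b_j² + δ²) make λ·w_j·b_j² approach λ·1{b_j ≠ 0}, i.e. an L0 penalty, as δ → 0, and each reweighted solve is a convex problem. This is an illustration of the technique for plain linear regression, not the authors' L0ADRIDGE software; λ, δ and the toy data are arbitrary:

```python
import numpy as np

def l0_adaptive_ridge(X, y, lam=0.1, delta=1e-3, n_iter=50):
    """L0 approximation by iteratively reweighted ridge regression:
    minimize ||y - X b||^2 + lam * sum_j w_j * b_j^2 with
    w_j = 1 / (b_j^2 + delta^2), refreshed after each ridge solve."""
    n, p = X.shape
    w = np.ones(p)                       # initial (plain ridge) weights
    b = np.zeros(p)
    for _ in range(n_iter):
        A = X.T @ X + lam * np.diag(w)   # positive definite by construction
        b = np.linalg.solve(A, X.T @ y)
        w = 1.0 / (b ** 2 + delta ** 2)  # small coefficients get crushed
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[:2] = [2.0, -1.5]              # only two truly active features
y = X @ beta_true + 0.1 * rng.standard_normal(100)
b = l0_adaptive_ridge(X, y)
# the two active coefficients survive; the other eight shrink to ~0
```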

  16. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the 'square error' calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the 'absolute error' functions (SAR and SARD) are superior to the 'square error' functions (SSR and SSRD) as objective functions for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
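The sensitivity difference between 'square error' and 'absolute error' is easy to see on a toy series with one extreme value (hypothetical numbers, not the case-study data): the extreme point contributes a far larger share of SSR than of SAR.

```python
obs = [1.0, 2.0, 3.0, 4.0, 100.0]   # last observation is an extreme value
sim = [1.5, 2.5, 2.5, 4.5, 98.0]    # model misses it by 2, the rest by 0.5

ssr = sum((o - s) ** 2 for o, s in zip(obs, sim))   # sum of square errors
sar = sum(abs(o - s) for o, s in zip(obs, sim))     # sum of absolute errors

# share of each objective contributed by the single extreme point
ssr_share = (100.0 - 98.0) ** 2 / ssr   # 4/5  = 0.8
sar_share = abs(100.0 - 98.0) / sar     # 2/4  = 0.5
```

Squaring makes the one extreme residual dominate 80% of SSR versus 50% of SAR, which is the sensitivity the abstract describes.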

  17. How well do static electronic dipole polarizabilities from gas-phase experiments compare with density functional and MP2 computations?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakkar, Ajit J., E-mail: ajit@unb.ca; Wu, Taozhe

    2015-10-14

    Static electronic dipole polarizabilities for 135 molecules are calculated using second-order Møller-Plesset perturbation theory and six density functionals recently recommended for polarizabilities. Comparison is made with the best gas-phase experimental data. The lowest mean absolute percent deviations from the best experimental values for all 135 molecules are 3.03% and 3.08% for the LC-τHCTH and M11 functionals, respectively. Excluding the eight extreme outliers for which the experimental values are almost certainly in error, the mean absolute percent deviation for the remaining 127 molecules drops to 2.42% and 2.48% for the LC-τHCTH and M11 functionals, respectively. Detailed comparison enables us to identify 32 molecules for which the discrepancy between the calculated and experimental values warrants further investigation.
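The comparison metric used above, the mean absolute percent deviation, is straightforward to compute; the values below are illustrative, not the paper's polarizability data:

```python
def mapd(calc, expt):
    """Mean absolute percent deviation of calculated vs. experimental values."""
    assert len(calc) == len(expt)
    return 100.0 * sum(abs(c - e) / abs(e) for c, e in zip(calc, expt)) / len(expt)

# toy values: individual deviations of 3%, 2% and 5% average to ~3.33%
m = mapd([10.3, 4.9, 2.1], [10.0, 5.0, 2.0])
```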

  18. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz Model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed and the investor is risk-averse. However, this model has not been used extensively in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed, and it is based on Linear Programming. Another alternative portfolio model is the Mean-Lower Semi Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the more appropriate solution for investors.

  19. Geometric Verification of Dynamic Wave Arc Delivery With the Vero System Using Orthogonal X-ray Fluoroscopic Imaging.

    PubMed

    Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Gevaert, Thierry; Depuydt, Tom; Tournel, Koen; Hung, Cecilia; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark

    2015-07-15

    The purpose of this study was to define an independent verification method based on on-board orthogonal fluoroscopy to determine the geometric accuracy of synchronized gantry-ring (G/R) rotations during dynamic wave arc (DWA) delivery available on the Vero system. A verification method for DWA was developed to calculate O-ring-gantry (G/R) positional information from ball-bearing positions retrieved from fluoroscopic images of a cubic phantom acquired during DWA delivery. Different noncoplanar trajectories were generated in order to investigate the influence of path complexity on delivery accuracy. The G/R positions detected from the fluoroscopy images (DetPositions) were benchmarked against the G/R angulations retrieved from the control points (CP) of the DWA RT plan and the DWA log files recorded by the treatment console during DWA delivery (LogActed). The G/R rotational accuracy was quantified as the mean absolute deviation ± standard deviation. The maximum G/R absolute deviation was calculated as the maximum 3-dimensional distance between the CP and the closest DetPositions. In the CP versus DetPositions comparison, an overall mean G/R deviation of 0.13°/0.16° ± 0.16°/0.16° was obtained, with a maximum G/R deviation of 0.6°/0.2°. For the LogActed versus DetPositions evaluation, the overall mean deviation was 0.08°/0.15° ± 0.10°/0.10° with a maximum G/R of 0.3°/0.4°. The largest decoupled deviations registered for gantry and ring were 0.6° and 0.4°, respectively. No directional dependence was observed between clockwise and counterclockwise rotations. Doubling the dose resulted in twice the number of detected points around each CP and a reduction in angular deviation in all cases. An independent geometric quality assurance approach was developed for DWA delivery verification and was successfully applied on diverse trajectories. Results showed that the Vero system is capable of following complex G/R trajectories with maximum deviations during DWA below 0.6°. 
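The deviation statistics reported above reduce to nearest-neighbour distances in (gantry, ring) angle space. A minimal sketch with hypothetical angles (not the study's data):

```python
import math

# (gantry, ring) angles in degrees: planned control points vs. detected positions
cp = [(0.0, 0.0), (10.0, 5.0), (20.0, 10.0)]
det = [(0.1, -0.1), (10.2, 5.1), (19.9, 10.2), (15.0, 7.5)]

def closest_dist(point, cloud):
    """Angular distance from a control point to the nearest detected position."""
    return min(math.hypot(point[0] - g, point[1] - r) for g, r in cloud)

dists = [closest_dist(p, det) for p in cp]
mean_dev = sum(dists) / len(dists)   # mean absolute deviation over the CPs
max_dev = max(dists)                 # maximum deviation, the reported worst case
```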

  20. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    PubMed

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x is measured for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) and the aim is to replicate the whole set in further trials or assays, scientists often obtain quite different measurements despite their efforts at a near-identical design. As a consequence, some systems' averages present standard deviations that are too large to yield statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions must be met: the inter-system variation of x matters while its absolute value does not, and a similar tendency in the values of x must be present across the different assays (in other words, the results of different assays must be highly linearly correlated). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
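One simple correction consistent with the two stated requirements is to subtract each assay's own mean, so that only the inter-system variation survives. This is a hypothetical illustration of the idea, not the authors' exact procedure, and the data are made up:

```python
# x[assay][system]: the same four systems measured in three assays whose
# overall levels differ, though the system-to-system pattern is shared
x = [
    [10.0, 12.0, 11.0, 14.0],
    [20.0, 22.5, 21.0, 24.5],
    [ 5.0,  7.0,  6.5,  9.0],
]

def center_each_assay(data):
    """Remove each assay's own mean, keeping only inter-system variation."""
    out = []
    for assay in data:
        m = sum(assay) / len(assay)
        out.append([v - m for v in assay])
    return out

def per_system_sd(data):
    """Sample standard deviation across assays for each system."""
    n = len(data)
    sds = []
    for j in range(len(data[0])):
        vals = [data[i][j] for i in range(n)]
        m = sum(vals) / n
        sds.append((sum((v - m) ** 2 for v in vals) / (n - 1)) ** 0.5)
    return sds

raw_sd = per_system_sd(x)
corrected_sd = per_system_sd(center_each_assay(x))
# corrected SDs are far smaller: the assay-level offsets are gone
```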

  1. Cohesive energy and structural parameters of binary oxides of groups IIA and IIIB from diffusion quantum Monte Carlo

    DOE PAGES

    Santana, Juan A.; Krogel, Jaron T.; Kent, Paul R. C.; ...

    2016-05-03

    We have applied the diffusion quantum Monte Carlo (DMC) method to calculate the cohesive energy and the structural parameters of the binary oxides CaO, SrO, BaO, Sc2O3, Y2O3 and La2O3. The aim of our calculations is to systematically quantify the accuracy of the DMC method for studying this type of metal oxide. The DMC results were compared with local and semi-local Density Functional Theory (DFT) approximations as well as with experimental measurements. The DMC method yields cohesive energies for these oxides with a mean absolute deviation from experimental measurements of 0.18(2) eV, while with local and semi-local DFT approximations the deviation is 3.06 and 0.94 eV, respectively. For lattice constants, the mean absolute deviations in DMC and in local and semi-local DFT approximations are 0.017(1), 0.07 and 0.05 Å, respectively. In conclusion, DMC is a highly accurate method, outperforming the local and semi-local DFT approximations in describing the cohesive energies and structural parameters of these binary oxides.

  2. A determination of the absolute radiant energy of a Robertson-Berger meter sunburn unit

    NASA Astrophysics Data System (ADS)

    DeLuisi, John J.; Harris, Joyce M.

    Data from a Robertson-Berger (RB) sunburn meter were compared with concurrent measurements obtained with an ultraviolet double monochromator (DM), and the absolute energy of one sunburn unit measured by the RB-meter was determined. It was found that at a solar zenith angle of 30° one sunburn unit (SU) is equivalent to 35 ± 4 mJ cm -2, and at a solar zenith angle of 69°, one SU is equivalent to 20 ± 2 mJ cm -2 (relative to a wavelength of 297 nm), where the rate of change is non-linear. The deviation is due to the different response functions of the RB-meter and the DM system used to simulate the response of human skin to the incident u.v. solar spectrum. The average growth rate of the deviation with increasing solar zenith angle was found to be 1.2% per degree between solar zenith angles 30 and 50° and 2.3% per degree between solar zenith angles 50 and 70°. The deviations of response with solar zenith angle were found to be consistent with reported RB-meter characteristics.

  3. A vibration-insensitive optical cavity and absolute determination of its ultrahigh stability.

    PubMed

    Zhao, Y N; Zhang, J; Stejskal, A; Liu, T; Elman, V; Lu, Z H; Wang, L J

    2009-05-25

    We use the three-cornered-hat method to evaluate the absolute frequency stabilities of three different ultrastable reference cavities, one of which has a vibration-insensitive design that does not even require vibration isolation. An Nd:YAG laser and a diode laser are implemented as light sources. We observe approximately 1 Hz beat note linewidths between all three cavities. The measurement demonstrates that the vibration-insensitive cavity has a good frequency stability over the entire measurement time from 100 ms to 200 s. An absolute, correlation-removed Allan deviation of 1.4 x 10(-15) at s of this cavity is obtained, giving a frequency uncertainty of only 0.44 Hz.
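The three-cornered-hat separation used above follows from the fact that the variance of each beat note is the sum of the variances of the two independent oscillators involved. A minimal sketch with hypothetical Allan variances (not the paper's data):

```python
import math

# Allan variances of the three pairwise beat notes (hypothetical values)
var_ab = 4.0e-30
var_ac = 3.6e-30
var_bc = 2.9e-30

# three-cornered-hat separation, e.g. var_A = (var_AB + var_AC - var_BC) / 2
var_a = 0.5 * (var_ab + var_ac - var_bc)
var_b = 0.5 * (var_ab + var_bc - var_ac)
var_c = 0.5 * (var_ac + var_bc - var_ab)

adev_a = math.sqrt(var_a)   # Allan deviation attributable to cavity A alone
```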

  4. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  5. Simulation and analysis of spectroscopic filter of rotational Raman lidar for absolute measurement of atmospheric temperature

    NASA Astrophysics Data System (ADS)

    Li, Qimeng; Li, Shichun; Hu, Xianglong; Zhao, Jing; Xin, Wenhui; Song, Yuehui; Hua, Dengxin

    2018-01-01

    The absolute measurement technique for atmospheric temperature avoids the calibration process and improves measurement accuracy. To achieve a rotational Raman temperature lidar with absolute measurement, a two-stage parallel multi-channel spectroscopic filter combining a first-order blazed grating with a fiber Bragg grating is designed and its performance tested. The parameters and the optical-path structure of the core cascaded device (a micron-level fiber array) are optimized, and the optical path of the primary spectroscope is simulated; the maximum centrifugal distortion of the rotational Raman spectrum is approximately 0.0031 nm, a centrifugal ratio of 0.69%. The experimental results show that the channel coefficients of the primary spectroscope are 0.67, 0.91, 0.67, 0.75, 0.82, 0.63, 0.87, 0.97, 0.89, 0.87 and 1, using the twelfth channel as a reference, and the average FWHM is about 0.44 nm. The maximum deviation between the experimental wavelength and the theoretical value is approximately 0.0398 nm, a deviation of 8.86%. The effective suppression of the elastic scattering signal is 30.6, 35.2, 37.1, 38.4, 36.8, 38.2, 41.0, 44.3, 44.0 and 46.7 dB for the respective channels; combined with the second spectroscope, the total suppression is at least 65 dB. Therefore single rotational Raman lines can be finely extracted to achieve the absolute measurement technique.

  6. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    PubMed

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
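The range extension works because beating two close wavelengths yields a much longer synthetic wavelength. A minimal sketch with hypothetical wavelengths (the abstract does not list the five actual wavelengths):

```python
# two closely spaced comb-referenced wavelengths (hypothetical, in metres)
lam1 = 1550.00e-9
lam2 = 1550.80e-9

# synthetic wavelength generated by beating the two single wavelengths
lam_synth = lam1 * lam2 / abs(lam2 - lam1)   # ~3.0 mm here

# interferometric phase repeats every half wavelength, so the
# non-ambiguous range of the synthetic wavelength is lam_synth / 2
nar = lam_synth / 2.0
```

A 0.8 nm split between 1550 nm wavelengths already stretches the non-ambiguous range from sub-micrometre to about 1.5 mm; cascading coarser synthetic wavelengths in the same way is what pushes the range to the metre scale.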

  7. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    PubMed Central

    Tan, Lilong; Yan, Shuhua

    2018-01-01

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions. PMID:29414897

  8. National Economic Development Procedures Manual - Recreation. Volume 2. A Guide for Using the Contingent Value Methodology in Recreation Studies,

    DTIC Science & Technology

    1986-03-01

    Directly from Sample Bid VI-16 Example 3 VI-16 Determining the Zero Price Quantity Demanded VI-26 Summary VI-31 CHAPTER VII, THE DETERMINATION OF NED...While the standard deviation and variance are absolute measures of dispersion, a relative measure of dispersion can also be computed. This measure is...refers to the closeness of fit between the estimates obtained from Zli e and the true population value. The only way of being absolutely i: o-.iat the

  9. Retrieval of Aerosol Optical Properties from Ground-Based Remote Sensing Measurements: Aerosol Asymmetry Factor and Single Scattering Albedo

    NASA Astrophysics Data System (ADS)

    Qie, L.; Li, Z.; Li, L.; Li, K.; Li, D.; Xu, H.

    2018-04-01

    The Devaux-Vermeulen-Li (DVL) method is a simple approach to retrieving aerosol optical parameters from Sun-sky radiance measurements. This study builds on previous work on retrieving the aerosol single scattering albedo (SSA) and scattering phase function; here the DVL method is modified to derive the aerosol asymmetry factor (g). To assess the algorithm's performance under various atmospheric aerosol conditions, retrievals from AERONET observations were performed and the results compared with official AERONET products. The comparison shows that both the DVL SSA and g correlate well with those of AERONET. The RMSD and the absolute value of the MBD between the SSAs are 0.025 and 0.015, respectively, well below the AERONET-declared SSA uncertainty of 0.03 for all wavelengths. For the asymmetry factor g, the RMSDs are smaller than 0.02 and the absolute values of the MBDs smaller than 0.01 at the 675, 870 and 1020 nm bands. Then, considering several factors that may affect retrieval quality (aerosol optical depth (AOD), solar zenith angle, sky residual error, sphericity proportion and Ångström exponent), the SSA and g deviations of the two algorithms were calculated over varying value intervals. Both the SSA and g deviations were found to decrease with AOD and solar zenith angle, and to increase with sky residual error. However, the deviations show no clear sensitivity to the sphericity proportion or Ångström exponent, indicating that the DVL algorithm is applicable to both large non-spherical particles and spherical particles. The DVL results are suitable for evaluating the aerosol direct radiative effects of different aerosol types.

  10. In the presence of fluoride, free Sc³⁺ is not a good predictor of Sc bioaccumulation by two unicellular algae: possible role of fluoro-complexes.

    PubMed

    Crémazy, Anne; Campbell, Peter G C; Fortin, Claude

    2014-08-19

    We investigated the effect of fluoride complexation on scandium accumulation by two unicellular algae, Chlamydomonas reinhardtii and Pseudokirchneriella subcapitata. This trivalent metal was selected for its chemical similarities with aluminum and for its convenient radioisotope (Sc-46), which can be used as a tracer in short-term bioaccumulation studies. Scandium surface-bound concentrations (Sc(ads)) and uptake fluxes (J(int)) were estimated in the two algae over short-term (<1 h) exposures at pH 5 and in the presence of 0 to 40 μM F(-). Although the computed proportion of dissolved Sc(3+) dropped from 20% to 0.01% over this [F(-)] range, Sc(ads) and J(int) values for both algae decreased only slightly, suggesting a participation of Sc fluoro-complexes in both processes. Surface adsorption and uptake of fluoride complexes with aluminum have been reported in the literature. These observations are not taken into account by current models for trace metal bioaccumulation (e.g., the biotic ligand model). Results from a previous study, where the effects of pH on Sc uptake were investigated, suggested that Sc hydroxo-complexes were internalized by C. reinhardtii. There is thus growing evidence that the free ion concentration may not be adequate to predict the accumulation of Sc (and potentially of other trivalent metals) in aquatic organisms.

  11. Inverse Monte Carlo in a multilayered tissue model: merging diffuse reflectance spectroscopy and laser Doppler flowmetry.

    PubMed

    Fredriksson, Ingemar; Burdakov, Oleg; Larsson, Marcus; Strömberg, Tomas

    2013-12-01

    The tissue fraction of red blood cells (RBCs) and their oxygenation and speed-resolved perfusion are estimated in absolute units by combining diffuse reflectance spectroscopy (DRS) and laser Doppler flowmetry (LDF). The DRS spectra (450 to 850 nm) are assessed at two source-detector separations (0.4 and 1.2 mm), allowing for a relative calibration routine, whereas LDF spectra are assessed at 1.2 mm in the same fiber-optic probe. Data are analyzed using nonlinear optimization in an inverse Monte Carlo technique by applying an adaptive multilayered tissue model based on geometrical, scattering, and absorbing properties, as well as RBC flow-speed information. Simulations of 250 tissue-like models including up to 2000 individual blood vessels were used to evaluate the method. The absolute root mean square (RMS) deviation between estimated and true oxygenation was 4.1 percentage units, whereas the relative RMS deviations for the RBC tissue fraction and perfusion were 19% and 23%, respectively. Examples of in vivo measurements on forearm and foot during common provocations are presented. The method offers several advantages such as simultaneous quantification of RBC tissue fraction and oxygenation and perfusion from the same, predictable, sampling volume. The perfusion estimate is speed resolved, absolute (% RBC×mm/s), and more accurate due to the combination with DRS.

  12. Inverse Monte Carlo in a multilayered tissue model: merging diffuse reflectance spectroscopy and laser Doppler flowmetry

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Burdakov, Oleg; Larsson, Marcus; Strömberg, Tomas

    2013-12-01

    The tissue fraction of red blood cells (RBCs) and their oxygenation and speed-resolved perfusion are estimated in absolute units by combining diffuse reflectance spectroscopy (DRS) and laser Doppler flowmetry (LDF). The DRS spectra (450 to 850 nm) are assessed at two source-detector separations (0.4 and 1.2 mm), allowing for a relative calibration routine, whereas LDF spectra are assessed at 1.2 mm in the same fiber-optic probe. Data are analyzed using nonlinear optimization in an inverse Monte Carlo technique by applying an adaptive multilayered tissue model based on geometrical, scattering, and absorbing properties, as well as RBC flow-speed information. Simulations of 250 tissue-like models including up to 2000 individual blood vessels were used to evaluate the method. The absolute root mean square (RMS) deviation between estimated and true oxygenation was 4.1 percentage units, whereas the relative RMS deviations for the RBC tissue fraction and perfusion were 19% and 23%, respectively. Examples of in vivo measurements on forearm and foot during common provocations are presented. The method offers several advantages such as simultaneous quantification of RBC tissue fraction and oxygenation and perfusion from the same, predictable, sampling volume. The perfusion estimate is speed resolved, absolute (% RBC×mm/s), and more accurate due to the combination with DRS.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models, the package provides a generic skill score and a percent-better measure. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
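
For illustration, the robust scale measures mentioned above (median absolute deviation and a robust standard deviation) can be sketched in a few lines of stdlib Python. This mirrors the standard definitions only; it is not the PyForecastTools API:

```python
import statistics

def median_absolute_deviation(xs):
    """MAD: median of absolute deviations from the sample median."""
    med = statistics.median(xs)
    return statistics.median(abs(x - med) for x in xs)

def robust_std(xs):
    """MAD scaled by 1.4826 so it estimates the standard deviation for
    normally distributed data while resisting outliers."""
    return 1.4826 * median_absolute_deviation(xs)

data = [2.0, 2.1, 1.9, 2.2, 9.0]  # one gross outlier
print(round(median_absolute_deviation(data), 3))  # 0.1
print(round(statistics.stdev(data), 3))           # classical stdev, inflated by the outlier
```

The contrast between the two printed values shows why robust scale estimators are useful for model-validation residuals that contain occasional gross errors.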

  14. Single-breath diffusing capacity for carbon monoxide instrument accuracy across 3 health systems.

    PubMed

    Hegewald, Matthew J; Markewitz, Boaz A; Wilson, Emily L; Gallo, Heather M; Jensen, Robert L

    2015-03-01

    Measuring diffusing capacity of the lung for carbon monoxide (DLCO) is complex and associated with wide intra- and inter-laboratory variability. Increased DLCO variability may have important clinical consequences. The objective of the study was to assess instrument performance across hospital pulmonary function testing laboratories using a DLCO simulator that produces precise and repeatable DLCO values. DLCO instruments were tested with CO gas concentrations representing medium- and high-range DLCO values. The absolute difference between the observed and target DLCO value was used to determine measurement accuracy; accuracy was defined as an average deviation from the target value of < 2.0 mL/min/mm Hg. The accuracy of the inspired volume measurement and gas sensors was also determined. Twenty-three instruments were tested across 3 healthcare systems. The mean absolute deviation from the target value was 1.80 mL/min/mm Hg (range 0.24-4.23), with 10 of 23 instruments (43%) being inaccurate. High-volume laboratories performed better than low-volume laboratories, although the difference was not significant. There was no significant difference among the instruments by manufacturer. Inspired volume was not accurate in 48% of devices; the mean absolute deviation from the target value was 3.7%. Instrument gas analyzers performed adequately in all instruments. DLCO instrument accuracy was unacceptable in 43% of devices. Instrument inaccuracy can be primarily attributed to errors in inspired volume measurement and not gas analyzer performance. DLCO instrument performance may be improved by regular testing with a simulator. Caution should be used when comparing DLCO results reported from different laboratories. Copyright © 2015 by Daedalus Enterprises.
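
The accuracy criterion used in the study is a plain mean absolute deviation against the simulator's known target value. A minimal sketch (the readings below are hypothetical, not from the study):

```python
def mean_absolute_deviation(measured, target):
    """Average absolute difference between repeated instrument readings
    and the simulator's known target value (mL/min/mm Hg)."""
    return sum(abs(m - target) for m in measured) / len(measured)

def is_accurate(measured, target, limit=2.0):
    """Study criterion: average deviation from the target < 2.0 mL/min/mm Hg."""
    return mean_absolute_deviation(measured, target) < limit

readings = [21.0, 19.0, 20.5]  # hypothetical repeated DLCO measurements
print(round(mean_absolute_deviation(readings, 20.0), 2))  # 0.83
print(is_accurate(readings, 20.0))                        # True
```

An instrument averaging more than 2.0 mL/min/mm Hg from the target would be flagged as inaccurate, as 10 of the 23 instruments were.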

  15. Improving Spleen Volume Estimation via Computer Assisted Segmentation on Clinically Acquired CT Scans

    PubMed Central

    Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.

    2016-01-01

    OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) was compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient 0.98, absolute deviation 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156

  16. Dual-Polarization Observations of Slowly Varying Solar Emissions from a Mobile X-Band Radar

    PubMed Central

    Gabella, Marco; Leuenberger, Andreas

    2017-01-01

    The radio noise that comes from the Sun has been reported in the literature as a reference signal to check the quality of dual-polarization weather radar receivers for the S-band and C-band. In most cases, the focus was on relative calibration: horizontal and vertical polarizations were evaluated versus the reference signal mainly in terms of standard deviation of the difference. This means that the investigated radar receivers were able to reproduce the slowly varying component of the microwave signal emitted by the Sun. A novel method, aimed at the absolute calibration of dual-polarization receivers, has recently been presented and applied for the C-band. This method requires the antenna beam axis to be pointed towards the center of the Sun for less than a minute. Standard deviations of the difference as low as 0.1 dB have been found for the Swiss radars. As far as the absolute calibration is concerned, the average differences were of the order of −0.6 dB (after noise subtraction). The method has been implemented on a mobile, X-band radar, and this paper presents the successful results that were obtained during the 2016 field campaign in Payerne (Switzerland). Despite a relatively poor Sun-to-Noise ratio, the “small” (~0.4 dB) amplitude of the slowly varying emission was captured and reproduced; the standard deviation of the difference between the radar and the reference was ~0.2 dB. The absolute calibration of the vertical and horizontal receivers was satisfactory. After the noise subtraction and atmospheric correction, the mean difference was close to 0 dB. PMID:28531164

  17. Dual-Polarization Observations of Slowly Varying Solar Emissions from a Mobile X-Band Radar.

    PubMed

    Gabella, Marco; Leuenberger, Andreas

    2017-05-22

    The radio noise that comes from the Sun has been reported in the literature as a reference signal to check the quality of dual-polarization weather radar receivers for the S-band and C-band. In most cases, the focus was on relative calibration: horizontal and vertical polarizations were evaluated versus the reference signal mainly in terms of standard deviation of the difference. This means that the investigated radar receivers were able to reproduce the slowly varying component of the microwave signal emitted by the Sun. A novel method, aimed at the absolute calibration of dual-polarization receivers, has recently been presented and applied for the C-band. This method requires the antenna beam axis to be pointed towards the center of the Sun for less than a minute. Standard deviations of the difference as low as 0.1 dB have been found for the Swiss radars. As far as the absolute calibration is concerned, the average differences were of the order of -0.6 dB (after noise subtraction). The method has been implemented on a mobile, X-band radar, and this paper presents the successful results that were obtained during the 2016 field campaign in Payerne (Switzerland). Despite a relatively poor Sun-to-Noise ratio, the "small" (~0.4 dB) amplitude of the slowly varying emission was captured and reproduced; the standard deviation of the difference between the radar and the reference was ~0.2 dB. The absolute calibration of the vertical and horizontal receivers was satisfactory. After the noise subtraction and atmospheric correction, the mean difference was close to 0 dB.

  18. Solvation free energies and partition coefficients with the coarse-grained and hybrid all-atom/coarse-grained MARTINI models.

    PubMed

    Genheden, Samuel

    2017-10-01

    We present the estimation of solvation free energies of small solutes in water, n-octanol and hexane using molecular dynamics simulations with two MARTINI models at different resolutions, viz. the coarse-grained (CG) and the hybrid all-atom/coarse-grained (AA/CG) models. From these estimates, we also calculate the water/hexane and water/octanol partition coefficients. More than 150 small, organic molecules were selected from the Minnesota solvation database and parameterized in a semi-automatic fashion. Using either the CG or hybrid AA/CG models, we find considerable deviations between the estimated and experimental solvation free energies in all solvents with mean absolute deviations larger than 10 kJ/mol, although the correlation coefficient is between 0.55 and 0.75 and significant. There is also no difference between the results when using the non-polarizable and polarizable water model, although we identify some improvements when using the polarizable model with the AA/CG solutes. In contrast to the estimated solvation energies, the estimated partition coefficients are generally excellent with both the CG and hybrid AA/CG models, giving mean absolute deviations between 0.67 and 0.90 log units and correlation coefficients larger than 0.85. We analyze the error distribution further and suggest avenues for improvements.

  19. Solvation free energies and partition coefficients with the coarse-grained and hybrid all-atom/coarse-grained MARTINI models

    NASA Astrophysics Data System (ADS)

    Genheden, Samuel

    2017-10-01

    We present the estimation of solvation free energies of small solutes in water, n-octanol and hexane using molecular dynamics simulations with two MARTINI models at different resolutions, viz. the coarse-grained (CG) and the hybrid all-atom/coarse-grained (AA/CG) models. From these estimates, we also calculate the water/hexane and water/octanol partition coefficients. More than 150 small, organic molecules were selected from the Minnesota solvation database and parameterized in a semi-automatic fashion. Using either the CG or hybrid AA/CG models, we find considerable deviations between the estimated and experimental solvation free energies in all solvents with mean absolute deviations larger than 10 kJ/mol, although the correlation coefficient is between 0.55 and 0.75 and significant. There is also no difference between the results when using the non-polarizable and polarizable water model, although we identify some improvements when using the polarizable model with the AA/CG solutes. In contrast to the estimated solvation energies, the estimated partition coefficients are generally excellent with both the CG and hybrid AA/CG models, giving mean absolute deviations between 0.67 and 0.90 log units and correlation coefficients larger than 0.85. We analyze the error distribution further and suggest avenues for improvements.

  20. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.

  1. Analytical gradients for subsystem density functional theory within the Slater-function-based Amsterdam Density Functional program.

    PubMed

    Schlüns, Danny; Franchini, Mirko; Götz, Andreas W; Neugebauer, Johannes; Jacob, Christoph R; Visscher, Lucas

    2017-02-05

    We present a new implementation of analytical gradients for subsystem density-functional theory (sDFT) and frozen-density embedding (FDE) in the Amsterdam Density Functional program (ADF). The underlying theory and necessary expressions for the implementation are derived and discussed in detail for various FDE and sDFT setups. The parallel implementation is numerically verified, and geometry optimizations with different functional combinations (LDA/TF and PW91/PW91k) are conducted and compared to reference data. Our results confirm that sDFT-LDA/TF yields good equilibrium distances for the systems studied here (mean absolute deviation: 0.09 Å) compared to reference wave-function theory results. However, sDFT-PW91/PW91k quite consistently yields smaller equilibrium distances (mean absolute deviation: 0.23 Å). The flexibility of our new implementation is demonstrated for an HCN-trimer test system, for which several different setups are applied. © 2016 Wiley Periodicals, Inc.

  2. Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.

    PubMed

    Jorge, Marco G; Brennand, Tracy A

    2017-01-01

    Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.
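
As a rough illustration of why the standard deviational ellipse is attractive for defining a longitudinal axis: its major-axis orientation follows directly from the second moments of the footprint's (x, y) coordinates. A stdlib-Python sketch under that assumption (the function name and angle convention are illustrative; GIS packages typically measure the angle clockwise from north instead):

```python
import math

def sde_orientation(points):
    """Angle (radians, counter-clockwise from the x-axis) of the major
    axis of the standard deviational ellipse of a set of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Central second moments of the point set
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)

# An elongated point cloud along the 45-degree diagonal
pts = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(round(math.degrees(sde_orientation(pts)), 1))  # 45.0
```

Because the moments average over the whole footprint, the result is relatively insensitive to small boundary irregularities, consistent with the shape invariance reported above.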

  3. Volume Quantification of Acute Infratentorial Hemorrhage with Computed Tomography: Validation of the Formula 1/2ABC and 2/3SH

    PubMed Central

    Zhang, Yunyun; Yan, Jing; Fu, Yi; Chen, Shengdi

    2013-01-01

    Objective To compare the accuracy of formula 1/2ABC with 2/3SH on volume estimation for hypertensive infratentorial hematoma. Methods One hundred and forty-seven CT scans diagnosed as hypertensive infratentorial hemorrhage were reviewed. Based on the shape, hematomas were categorized as regular or irregular. Multilobular was defined as a special shape of irregular. Hematoma volume was calculated employing computer-assisted volumetric analysis (CAVA), 1/2ABC and 2/3SH, respectively. Results The correlation coefficients between 1/2ABC (or 2/3SH) and CAVA were greater than 0.900 in all subgroups. There were neither significant differences in absolute values of volume deviation nor percentage deviation between 1/2ABC and 2/3SH for regular hemorrhage (P>0.05). While for cerebellar, brainstem and irregular hemorrhages, the absolute values of volume deviation and percentage deviation by formula 1/2ABC were greater than 2/3SH (P<0.05). 1/2ABC and 2/3SH underestimated hematoma volume each by 10% and 5% for cerebellar hemorrhage, 14% and 9% for brainstem hemorrhage, 19% and 16% for regular hemorrhage, 9% and 3% for irregular hemorrhage, respectively. In addition, for the multilobular hemorrhage, 1/2ABC underestimated the volume by 9% while 2/3SH overestimated it by 2%. Conclusions For regular hemorrhage volume calculation, the accuracy of 2/3SH is similar to 1/2ABC. While for cerebellar, brainstem or irregular hemorrhages (including multilobular), 2/3SH is more accurate than 1/2ABC. PMID:23638025
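
The two bedside formulas compared above are one-line computations. A sketch with hypothetical measurements (A, B = largest perpendicular diameters on the largest hematoma slice; C and H = hematoma height; S = area of the largest slice; the numbers are illustrative, not study data):

```python
def volume_half_abc(a_cm, b_cm, c_cm):
    """1/2ABC: ellipsoid approximation of hematoma volume (cm^3)."""
    return a_cm * b_cm * c_cm / 2.0

def volume_two_thirds_sh(s_cm2, h_cm):
    """2/3SH: largest-slice area S times height H, scaled by 2/3 (cm^3)."""
    return 2.0 * s_cm2 * h_cm / 3.0

# Hypothetical lesion: 4 x 3 cm diameters, 3 cm height, 9.4 cm^2 largest slice
print(volume_half_abc(4.0, 3.0, 3.0))             # 18.0
print(round(volume_two_thirds_sh(9.4, 3.0), 2))   # 18.8
```

For a true ellipsoid the two agree (S = π·A·B/4 makes 2/3·S·H equal A·B·C/2 when C = H); the study's point is that 2/3SH degrades less when the hematoma shape departs from an ellipsoid.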

  4. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines

    PubMed Central

    2014-01-01

    Background It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. Results We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231

  5. SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines.

    PubMed

    Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin

    2014-04-28

    It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.

  6. Validation of ozone intensities at 10 μm with THz spectrometry

    NASA Astrophysics Data System (ADS)

    Drouin, Brian J.; Crawford, Timothy J.; Yu, Shanshan

    2017-12-01

    This manuscript reports an effort to improve the absolute accuracy of ozone intensities in the 10 μm region via a transfer of the precision of the rotational dipole moment onto the infrared measurement. The approach determines the ozone mixing ratio through alternately measuring seven pure rotation ozone lines from 692 to 779 GHz. A multispectrum fitting technique was employed. The results determine the column with absolute accuracy of 1.5% and the intensities of infrared transitions measured at this accuracy reproduce the recommended values to within a standard deviation of 2.8%.

  7. Alpha-actinin-3 (ACTN3) R577X polymorphism influences knee extensor peak power response to strength training in older men and women.

    PubMed

    Delmonico, Matthew J; Kostek, Matthew C; Doldo, Neil A; Hand, Brian D; Walsh, Sean; Conway, Joan M; Carignan, Craig R; Roth, Stephen M; Hurley, Ben F

    2007-02-01

    The alpha-actinin-3 (ACTN3) R577X polymorphism has been associated with muscle power performance in cross-sectional studies. We examined baseline knee extensor concentric peak power (PP) and PP change with approximately 10 weeks of unilateral knee extensor strength training (ST) using air-powered resistance machines in 71 older men (65 [standard deviation = 8] years) and 86 older women (64 [standard deviation = 9] years). At baseline in women, the XX genotype group had an absolute (same resistance) PP that was higher than the RR (p =.005) and RX genotype groups (p =.02). The women XX group also had a relative (70% of one-repetition maximum [1-RM]) PP that was higher than that in the RR (p =.002) and RX groups (p =.008). No differences in baseline absolute or relative PP were observed between ACTN3 genotype groups in men. In men, absolute PP change with ST in the RR (n = 16) group approached a significantly higher value than in the XX group (n = 9; p =.07). In women, relative PP change with ST in the RR group (n = 16) was higher than in the XX group (n = 17; p =.02). The results indicate that the ACTN3 R577X polymorphism influences the response of quadriceps muscle power to ST in older adults.

  8. Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan

    2018-04-01

    This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated in terms of resolution, nonlinearity, and repeatability. The sensor could clearly resolve 25 nm linear and 0.001° angular displacements, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.

  9. Small-Volume Injections: Evaluation of Volume Administration Deviation From Intended Injection Volumes.

    PubMed

    Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B

    2017-10-01

    In the perioperative period, anesthesiologists and postanesthesia care unit (PACU) nurses routinely prepare and administer small-volume IV injections, yet the accuracy of delivered medication volumes in this setting has not been described. In this ex vivo study, we sought to characterize the degree to which small-volume injections (≤0.5 mL) deviated from the intended injection volumes among a group of pediatric anesthesiologists and pediatric postanesthesia care unit (PACU) nurses. We hypothesized that as the intended injection volumes decreased, the deviation from those intended injection volumes would increase. Ten attending pediatric anesthesiologists and 10 pediatric PACU nurses each performed a series of 10 injections into a simulated patient IV setup. Practitioners used separate 1-mL tuberculin syringes with removable 18-gauge needles (Becton-Dickinson & Company, Franklin Lakes, NJ) to aspirate 5 different volumes (0.025, 0.05, 0.1, 0.25, and 0.5 mL) of 0.25 mM Lucifer Yellow (LY) fluorescent dye constituted in saline (Sigma Aldrich, St. Louis, MO) from a rubber-stoppered vial. Each participant then injected the specified volume of LY fluorescent dye via a 3-way stopcock into IV tubing with free-flowing 0.9% sodium chloride (10 mL/min). The injected volume of LY fluorescent dye and 0.9% sodium chloride then drained into a collection vial for laboratory analysis. Microplate fluorescence wavelength detection (Infinite M1000; Tecan, Mannedorf, Switzerland) was used to measure the fluorescence of the collected fluid. 
Administered injection volumes were calculated based on the fluorescence of the collected fluid using a calibration curve of known LY volumes and associated fluorescence. To determine whether deviation of the administered volumes from the intended injection volumes increased at lower injection volumes, we compared the proportional injection volume error (loge [administered volume/intended volume]) for each of the 5 injection volumes using a linear regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P < .0018). The exploratory analysis revealed no significant difference in the standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, absolute bias was significantly higher for nurses (2-sided P = .03). Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
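The study's error metric is simple to state: the proportional error is the natural log of the ratio of administered to intended volume, and the ANOVA compares its absolute value across intended volumes. A minimal sketch (the volumes below are hypothetical, not study data):

```python
import math

def log_proportional_error(administered_ml, intended_ml):
    # Proportional injection-volume error as defined in the abstract:
    # log_e(administered volume / intended volume); 0 means a perfect dose.
    return math.log(administered_ml / intended_ml)

# Hypothetical example: a 0.025 mL intended dose delivered as 0.031 mL.
err = log_proportional_error(0.031, 0.025)
abs_err = abs(err)  # the absolute log proportional error compared across volumes
```

A positive value indicates over-delivery, a negative value under-delivery; the log scale makes a 2x over-dose and a 2x under-dose equally large in absolute terms.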

  10. Computational tests of quantum chemical models for excited and ionized states of molecules with phosphorus and sulfur atoms.

    PubMed

    Hahn, David K; RaghuVeer, Krishans; Ortiz, J V

    2014-05-15

    Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set are lowest in overall error, having a mean absolute deviation from experiment of 0.22 eV, or 0.23 eV over valence transitions and 0.21 eV over Rydberg transitions. Using a larger basis set, aug-pc3, improves accuracy over the valence transitions via hybrid functionals, but improved accuracy over the Rydberg transitions is only obtained via the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, yielded the best mean absolute errors for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.

  11. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

    Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and determining individual dietary adherence during weight change studies. PMID:24707319
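The paper's model is a one-dimensional differential equation driven by energy balance; its exact functional form is not given in the abstract, so the sketch below is a generic Euler integration of dW/dt = (intake - expenditure)/rho with an assumed weight-proportional expenditure. The constants k and rho are illustrative placeholders, not the paper's fitted values:

```python
def simulate_weight(w0_kg, intake_kcal, days, rho=7700.0, k=31.9):
    """Toy Euler integration of a one-dimensional energy-balance model:
    dW/dt = (intake - expenditure) / rho, where expenditure is assumed
    proportional to body weight (k kcal per kg per day) and rho is the
    energy density of tissue change (kcal/kg). Both constants are
    illustrative assumptions, not the paper's values."""
    w = w0_kg
    for _ in range(days):
        w += (intake_kcal - k * w) / rho  # daily weight increment in kg
    return w
```

At an intake matching expenditure the simulated weight is constant; a sustained deficit drives it down, which is the qualitative behavior such models are validated against.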

  12. ESTIMATION OF RADIOACTIVE CALCIUM-45 BY LIQUID SCINTILLATION COUNTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutwak, L.

    1959-03-01

    A liquid scintillation counting method was developed for determining radioactive calcium-45 in biological materials. The calcium-45 is extracted, concentrated, and dissolved in absolute ethyl alcohol, to which is added 0.4% diphenyloxazole in toluene. Counting efficiency is about 65 percent, with a standard deviation of 7.36 percent. (auth)

  13. A Robust Interpretation of Teaching Evaluation Ratings

    ERIC Educational Resources Information Center

    Bi, Henry H.

    2018-01-01

    There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X[overbar]-S charts), which are based on the theory…

  14. Effects of thermo-order-mechanical coupling on band structures in liquid crystal nematic elastomer porous phononic crystals.

    PubMed

    Yang, Shuai; Liu, Ying

    2018-08-01

    Liquid crystal nematic elastomers (NEs) are smart, anisotropic, viscoelastic solids that simultaneously combine the properties of rubbers and liquid crystals, and they are thermally sensitive. In this paper, the wave dispersion in a liquid crystal nematic elastomer porous phononic crystal subjected to an external thermal stimulus is theoretically investigated. Firstly, an energy function is proposed to determine the thermo-induced deformation in NE periodic structures. Based on this function, thermo-induced band variation in liquid crystal nematic elastomer porous phononic crystals is investigated in detail. The results show that when the liquid crystal elastomer changes from the nematic state to the isotropic state with varying temperature, the absolute band gaps at different bands are opened or closed. There exists a threshold temperature above which the absolute band gaps are opened or closed. A larger porosity benefits the opening of the absolute band gaps. Deviation of the director from the structural symmetry axis favors absolute band gap opening in the nematic state while constraining it in the isotropic state. The combined effect of temperature and director orientation provides an added degree of freedom in the intelligent tuning of the absolute band gaps in phononic crystals. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Cruise Summary of WHP P6, A10, I3 and I4 Revisits in 2003

    NASA Astrophysics Data System (ADS)

    Kawano, T.; Uchida, H.; Schneider, W.; Kumamoto, Y.; Nishina, A.; Aoyama, M.; Murata, A.; Sasaki, K.; Yoshikawa, Y.; Watanabe, S.; Fukasawa, M.

    2004-12-01

    Japan Agency for Marine-Earth Science and Technology (JAMSTEC) conducted a circumnavigating research cruise of the southern hemisphere aboard R/V Mirai. In this presentation, we introduce an outline of the cruise and the quality of the data obtained. The cruise started on Aug. 3, 2003 in Brisbane, Australia and sailed eastward until it reached Fremantle, Australia on Feb. 19, 2004. It comprised six legs; legs 1, 2, 4 and 5 were revisits of WOCE Hydrographic Program (WHP) sections P6W, P6E, A10 and I3/I4, respectively. The sections consisted of about 500 hydrographic stations in total. On each station, CTD profiles and up to 36 water samples by 12L Niskin-X bottles were taken from the surface to within 10 m of the bottom. Water samples were analyzed at every station for salinity, dissolved oxygen (DO), and nutrients and at alternate stations for concentration of freons, dissolved inorganic carbon (CT), total alkalinity (AT), pH, and so on. Approximately 17,000 samples were obtained for salinity. The standard seawater was measured repeatedly to estimate the uncertainty caused by the setting and stability of the salinometer. The standard deviation of 699 repeated runs of standard seawater was 0.0002 in salinity. Replicate samples, which are a pair of samples drawn from the same Niskin bottle into different sample bottles, were taken to evaluate the overall uncertainty. The standard deviation of absolute differences of 2,769 replicates was also 0.0002 in salinity. For DO, about 13,400 samples were obtained. The analysis was made by a photometric titration technique. The reproducibility estimated from the absolute standard deviation of 1,625 replicates was about 0.09 umol/kg. CTD temperature was calibrated against a deep ocean standards thermometer (SBE35) which was attached to the CTD, using a polynomial expression Tcal = T - (a + b*P + c*t), where Tcal is calibrated temperature, T is CTD temperature, P is CTD pressure and t is time. 
Calibration coefficients, a, b and c, were determined for each station by minimizing the sum of absolute deviations from the SBE35 temperature below 2,000 dbar. CTD salinity and DO were fitted to values obtained by sampled-water analysis using similar polynomials. These corrections yielded deviations of about 0.0002 K in temperature, 0.0003 in salinity and 0.6 umol/kg in DO. Nutrient analyses were performed on 16,000 samples using the reference material of nutrients in seawater (RMNS). To establish traceability and to obtain higher-quality data, 500 bottles of RMNS from the same lot and 150 sets of RMNSs were used. The precisions of phosphate, nitrate and silicate measurements were 0.18%, 0.17% and 0.16% in terms of the median over 493 stations, respectively. The nutrient concentrations could be reported with explicit uncertainties because of the repeated runs of RMNSs. All the analyses for the CO2-system parameters in water columns were finished onboard. Analytical precisions of CT, AT and pH were estimated to be ~1.0 umol/kg, ~2.0 umol/kg, and ~7x10^-4 pH unit, respectively. Approximately 6,300 samples were obtained for CFC-11 and CFC-12. The concentrations were determined with an electron capture detector-gas chromatograph (ECD-GC) attached to a purge-and-trap system. The reproducibility estimated from the absolute standard deviation of 365 replicates was less than 1% with respect to the surface concentrations.
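The station-by-station CTD calibration above is a small least-absolute-deviations fit: choose a, b, c so that Tcal = T - (a + b*P + c*t) best matches the SBE35 temperature. A sketch using iteratively reweighted least squares (IRLS), a standard way to approximate an L1 fit; the IRLS scheme is an illustrative choice, not necessarily the cruise's actual algorithm:

```python
import numpy as np

def fit_ctd_offset(T_ctd, P, t, T_sbe35, iters=50, eps=1e-6):
    """Fit a, b, c in Tcal = T - (a + b*P + c*t) by approximately
    minimizing sum |Tcal - T_sbe35| via IRLS. Inputs are arrays of CTD
    temperature, pressure, time, and reference (SBE35) temperature,
    already restricted to samples below 2,000 dbar."""
    X = np.column_stack([np.ones_like(P), P, t])  # design matrix for a + b*P + c*t
    y = T_ctd - T_sbe35                           # offset the polynomial must absorb
    w = np.ones_like(y)                           # row weights; first pass is OLS
    for _ in range(iters):
        # weighted least-squares step: minimize ||w * (y - X beta)||^2
        beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        r = y - X @ beta
        w = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))  # L1 reweighting
    return beta  # (a, b, c)
```

With the coefficients in hand, the calibrated profile is simply `T_ctd - (a + b*P + c*t)` applied to the full station.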

  16. Estimation of the lower flammability limit of organic compounds as a function of temperature.

    PubMed

    Rowley, J R; Rowley, R L; Wilding, W V

    2011-02-15

    A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.

    PubMed

    Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman

    2013-02-01

    This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L,±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
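The three severity levels of large sensor error reduce to a small classifier: relative thresholds at or above reference glucose 6 mmol/L, absolute thresholds below it. A sketch following the thresholds quoted in the abstract (the function name and return convention are ours):

```python
def sensor_error_level(cgm_mmol, ref_mmol):
    """Return the highest large-error level (1-3) a CGM reading triggers,
    or 0 for none. Thresholds per the abstract: absolute relative
    deviation >= 40/50/60 % at reference >= 6 mmol/L; absolute deviation
    >= 2.4/3.0/3.6 mmol/L at reference < 6 mmol/L."""
    if ref_mmol >= 6.0:
        dev = abs(cgm_mmol - ref_mmol) / ref_mmol * 100.0  # percent
        thresholds = (40.0, 50.0, 60.0)
    else:
        dev = abs(cgm_mmol - ref_mmol)  # mmol/L
        thresholds = (2.4, 3.0, 3.6)
    return sum(dev >= th for th in thresholds)
```

For example, a reading of 12 mmol/L against a reference of 8 mmol/L is a 50% relative deviation, so it triggers Levels 1 and 2 but not Level 3.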

  18. Light localization with low-contrast targets in a patient implanted with a suprachoroidal-transretinal stimulation retinal prosthesis.

    PubMed

    Endo, Takao; Fujikado, Takashi; Hirota, Masakazu; Kanda, Hiroyuki; Morimoto, Takeshi; Nishida, Kohji

    2018-04-20

    To evaluate the improvement in targeted reaching movements toward targets of various contrasts in a patient implanted with a suprachoroidal-transretinal stimulation (STS) retinal prosthesis. An STS retinal prosthesis was implanted in the right eye of a 42-year-old man with advanced Stargardt disease (visual acuity: right eye, light perception; left eye, hand motion). In localization tests during the 1-year follow-up period, the patient attempted to touch the center of a white square target (visual angle, 10°; contrast, 96, 85, or 74%) displayed at a random position on a monitor. The distance between the touched point and the center of the target (the absolute deviation) was averaged over 20 trials with the STS system on or off. With the left eye occluded, the absolute deviation was not consistently lower with the system on than off for high-contrast (96%) targets, but was consistently lower with the system on for low-contrast (74%) targets. With both eyes open, the absolute deviation was consistently lower with the system on than off for 85%-contrast targets. With the system on and 96%-contrast targets, we detected a shorter response time while covering the right eye, in which the STS was implanted, compared to covering the left eye (2.41 ± 2.52 vs 8.45 ± 3.78 s, p < 0.01). Performance of a reaching movement improved in a patient with an STS retinal prosthesis implanted in an eye with residual natural vision. Patients with a retinal prosthesis may be able to improve their visual performance by using both artificial vision and their residual natural vision. Beginning date of the trial: Feb. 20, 2014 Date of registration: Jan. 4, 2014 Trial registration number: UMIN000012754 Registration site: UMIN Clinical Trials Registry (UMIN-CTR) http://www.umin.ac.jp/ctr/index.htm.

  19. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    PubMed

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. 
Results should be analyzed and presented separately for clinically different categories, e.g., hypoglycemia, exercise, or night and day. © 2013 Diabetes Technology Society.
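MARD and PARD are both absolute-relative-deviation statistics; they differ in the comparator (reference blood glucose for MARD, a second identical sensor for PARD). A sketch under the common convention of normalizing PARD by the pairwise sensor mean; the abstract does not spell out the normalization, so treat that detail as an assumption:

```python
import numpy as np

def mard(cgm, ref):
    """Mean absolute relative deviation (%) of CGM readings vs. paired
    reference blood glucose values."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return float(np.mean(np.abs(cgm - ref) / ref) * 100.0)

def pard(cgm_a, cgm_b):
    """Precision absolute relative deviation (%) between two identical
    sensors worn in parallel, normalized by their pairwise mean
    (a common convention, assumed here)."""
    a, b = np.asarray(cgm_a, float), np.asarray(cgm_b, float)
    return float(np.mean(np.abs(a - b) / ((a + b) / 2.0)) * 100.0)
```

As the abstract argues, the two are complementary: a low MARD with a high PARD suggests the apparent accuracy is not reproducible between sensors.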

  20. The truly remarkable universality of half a standard deviation: confirmation through another look.

    PubMed

    Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W

    2004-10-01

    In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than 0.5. Nonetheless, despite their extensive wranglings (the exclusion of many articles that we included in our review, the inclusion of articles that we did not, and the recalculation of effect sizes using the absolute value of the mean differences), in our opinion, the results of the 'Another look' article confirm the same findings as the 'Remarkable' paper.

  1. A standardized model for predicting flap failure using indocyanine green dye

    NASA Astrophysics Data System (ADS)

    Zimmermann, Terence M.; Moore, Lindsay S.; Warram, Jason M.; Greene, Benjamin J.; Nakhmani, Arie; Korb, Melissa L.; Rosenthal, Eben L.

    2016-03-01

    Techniques that provide a non-invasive method for evaluation of intraoperative skin flap perfusion are currently available but underutilized. We hypothesize that intraoperative vascular imaging can be used to reliably assess skin flap perfusion and elucidate areas of future necrosis by means of a standardized critical perfusion threshold. Five animal groups (negative controls, n=4; positive controls, n=5; chemotherapy group, n=5; radiation group, n=5; chemoradiation group, n=5) received pre-flap treatments two weeks before undergoing random-pattern dorsal fasciocutaneous flap procedures with a length-to-width ratio of 2:1 (3 x 1.5 cm). Flap perfusion was assessed via laser-assisted indocyanine green dye angiography and compared to standard clinical assessment for predictive accuracy of flap necrosis. For estimating flap failure, clinical prediction achieved a sensitivity of 79.3% and a specificity of 90.5%. When average flap perfusion was more than three standard deviations below the average flap perfusion of the negative control group at the time of the flap procedure (144.3 +/- 17.05 absolute perfusion units), laser-assisted indocyanine green dye angiography achieved a sensitivity of 81.1% and a specificity of 97.3%. When absolute perfusion units were seven standard deviations below the negative-control average, the specificity of necrosis prediction was 100%. Quantitative absolute perfusion units can improve specificity for intraoperative prediction of viable tissue. Using this strategy, a positive predictive threshold of flap failure can be standardized for clinical use.
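The thresholding rule above (flag a flap whose perfusion falls more than a chosen number of standard deviations below the negative-control mean) reduces to a one-line test. A sketch using the control statistics quoted in the abstract; the function name and interface are ours:

```python
def necrosis_predicted(flap_apu, control_mean, control_sd, k=3.0):
    """Flag a flap as at risk of necrosis when its average perfusion
    (in absolute perfusion units, APU) falls more than k standard
    deviations below the negative-control mean. The study reports
    k=3 (sens. 81.1%, spec. 97.3%) and 100% specificity at k=7."""
    return flap_apu < control_mean - k * control_sd

# Control statistics from the abstract: mean 144.3 APU, SD 17.05 APU.
```

Raising k trades sensitivity for specificity, which is exactly the effect reported between the 3-SD and 7-SD thresholds.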

  2. Absolute Parameters for the F-type Eclipsing Binary BW Aquarii

    NASA Astrophysics Data System (ADS)

    Maxted, P. F. L.

    2018-05-01

    BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.

  3. Measurement of the cosmic microwave background spectrum by the COBE FIRAS instrument

    NASA Technical Reports Server (NTRS)

    Mather, J. C.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Fixsen, D. J.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Meyer, S. S.; Noerdlinger, P. D.

    1994-01-01

    The cosmic microwave background radiation (CMBR) has a blackbody spectrum within 3.4 x 10(exp -8) ergs/sq cm/s/sr cm over the frequency range from 2 to 20/cm (5-0.5 mm). These measurements, derived from the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite, imply stringent limits on energy release in the early universe after t approximately 1 year and redshift z approximately 3 x 10(exp 6). The deviations are less than 0.30% of the peak brightness, with an rms value of 0.01%, and the dimensionless cosmological distortion parameters are limited to the absolute value of y is less than 2.5 x 10(exp -5) and the absolute value of mu is less than 3.3 x 10(exp -4) (95% confidence level). The temperature of the CMBR is 2.726 +/- 0.010 K (95% confidence level systematic).

  4. A FORMULA FOR HUMAN PAROTID FLUID COLLECTED WITHOUT EXOGENOUS STIMULATION.

    DTIC Science & Technology

    Parotid fluid was collected from 4,589 systemically healthy males between 17 and 22 years of age. Collection devices were placed with an absolute...secretion of the parotid gland. For all 4,589 subjects from the 8 experiments the mean rate of flow was 0.040 ml./minute with an average standard deviation of

  5. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  6. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of diameters meter per meter m/m 1 b atomic oxygen-to-carbon ratio mole per mole mol/mol 1 C # number... error between a quantity and its reference e brake-specific emission or fuel consumption gram per... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...

  7. Clinical and Biological Insights from the University of California San Francisco Prospective and Longitudinal Cohort.

    PubMed

    Benn, Bryan S; Lehman, Zoe; Kidd, Sharon A; Ho, Melissa; Sun, Sara; Ramstein, Joris; Arger, Nicholas K; Nguyen, Christine P; Su, Robert; Gomez, Antonio; Gelfand, Jeffrey M; Koth, Laura L

    2017-10-01

    Sarcoidosis is a systemic inflammatory disease characterized by non-necrotizing granulomas in involved organs, most commonly the lung. Description of patient characteristics in the Western United States is limited. Furthermore, blood-based measures that relate to clinical sarcoidosis phenotypes are lacking. We present an analysis of a prospective, longitudinal sarcoidosis cohort at a Northern Californian academic medical center. We enrolled 126 sarcoidosis subjects and 64 healthy controls and recorded baseline demographic and clinical characteristics. We used regression models to identify factors independently associated with pulmonary physiology. We tested whether blood transcript levels at study entry could relate to longitudinal changes in pulmonary physiology. White non-Hispanics composed ~70% of subjects. Hispanics and Blacks had a diagnostic biopsy at an age ~7 years younger than whites. Obstructive, but not restrictive, physiology characterized Scadding Stage IV patients. Subjects reporting use of immunosuppression had worse FEV1%p, FVC%p, and DLCO%p compared to subjects never treated, regardless of Scadding stage. We defined sarcoidosis disease activity by a drop in pulmonary function over 36 months and found that subjects meeting this definition had significant repression of blood gene transcripts related to T cell receptor signaling pathways, referred to as the "TCR factor." Obstructive pulmonary physiology defined Stage IV patients, who were mostly white non-Hispanics. Genes comprising the composite gene expression score, TCR factor, may represent a blood-derived measure of T-cell activity and an indirect measure of active sarcoidosis inflammation. Validation of this measure could translate into individualized treatment for sarcoidosis patients.

  8. Quantitative determination of fatty acids in marine fish and shellfish from warm water of Straits of Malacca for nutraceutical purposes.

    PubMed

    Abd Aziz, Nurnadia; Azlan, Azrina; Ismail, Amin; Mohd Alinafiah, Suryati; Razman, Muhammad Rizal

    2013-01-01

    This study was conducted to quantitatively determine the fatty acid contents of 20 species of marine fish and four species of shellfish from Straits of Malacca. Most samples contained fairly high amounts of polyunsaturated fatty acids (PUFAs), especially alpha-linolenic acid (ALA, C18:3 n3), eicosapentaenoic acid (EPA, C20:5 n3), and docosahexaenoic acid (DHA, C22:6 n3). Longtail shad, yellowstripe scad, and moonfish contained significantly higher (P < 0.05) amounts of eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), and alpha-linolenic acid (ALA), respectively. Meanwhile, fringescale sardinella, malabar red snapper, black pomfret, Japanese threadfin bream, giant seaperch, and sixbar grouper showed considerably high content (537.2-944.1 mg/100 g wet sample) of desirable omega-3 fatty acids. The polyunsaturated-fatty-acids/saturated-fatty-acids (P/S) ratios for most samples were higher than that of Menhaden oil (P/S = 0.58), a recommended PUFA supplement which may help to lower blood pressure. Yellowstripe scad (highest DHA, ω - 3/ω - 6 = 6.4, P/S = 1.7), moonfish (highest ALA, ω - 3/ω - 6 = 1.9, P/S = 1.0), and longtail shad (highest EPA, ω - 3/ω - 6 = 0.8, P/S = 0.4) were the samples with an outstandingly desirable overall composition of fatty acids. Overall, the marine fish and shellfish from the area contained good composition of fatty acids which offer health benefits and may be used for nutraceutical purposes in the future.

  10. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, indicating a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.

  11. New calibrators for the Cepheid period-luminosity relation

    NASA Technical Reports Server (NTRS)

    Evans, Nancy R.

    1992-01-01

IUE spectra of six Cepheids have been used to determine their absolute magnitudes from the spectral types of their binary companions. The stars observed are U Aql, V659 Cen, Y Lac, S Nor, V350 Sgr, and V636 Sco. The absolute magnitude for V659 Cen is more uncertain than for the others because its reddening is poorly determined and the spectral type is hotter than those of the others. In addition, a reddening law with extra absorption in the 2200 Å region is necessary, although this has a negligible effect on the absolute magnitude. For the other Cepheids, and also Eta Aql and W Sgr, the standard deviation from the Feast and Walker period-luminosity-color (PLC) relation is 0.37 mag, confirming the previously estimated internal uncertainty. The absolute magnitudes for S Nor from the binary companion and from cluster membership are very similar. The preliminary PLC zero point is less than 2 sigma (+0.21 mag) different from that of Feast and Walker. The same narrowing of the instability strip at low luminosities found by Fernie is seen.

  12. Role of dispersion corrected hybrid GGA class in accurately calculating the bond dissociation energy of carbon halogen bond: A benchmark study

    NASA Astrophysics Data System (ADS)

    Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid

    2017-12-01

A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon-halogen (C-X) bond. The BDE of the C-X bond plays a vital role in chemical reactions, particularly for kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C-X bonds used for the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental C-X bond dissociation energies are compared with theoretical results. The statistical analysis tools used for the comparison are root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE). Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class, along with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets, performed best for the bond dissociation energy calculation of the C-X bond. ωB97XD shows the best performance, with the smallest deviations (RMSD, SD), the smallest mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.

  13. Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA

    NASA Astrophysics Data System (ADS)

    Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.

    2018-04-01

This study was carried out to forecast the water consumption expenditure of a Malaysian university, specifically Universiti Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter's and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 until 2014. The two models were compared using the Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD) as performance measures. It was found that the ARIMA model gave the more accurate forecasts, with lower MAPE and MAD values. The analysis showed that the ARIMA (2,1,4) model provides a reasonable forecasting tool for university campus water usage.
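The two accuracy measures named above can be sketched as follows; the expenditure series is invented for illustration and is not UTHM data:

```python
# Minimal sketch of MAPE and MAD for comparing forecasts against actuals.
# The series below are invented placeholder numbers.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean absolute deviation of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [120.0, 135.0, 150.0, 160.0]     # invented monthly expenditure
forecast = [118.0, 140.0, 147.0, 164.0]   # invented model forecasts

print(round(mape(actual, forecast), 3))   # 2.468
print(round(mad(actual, forecast), 3))    # 3.5
```

The model with the lower values on both measures is preferred, which is how the ARIMA model was selected in the study.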

  14. MetaMQAP: a meta-server for the quality assessment of protein models.

    PubMed

    Pawlowski, Marcin; Gajda, Michal J; Matlak, Ryszard; Bujnicki, Janusz M

    2008-09-29

Computational models of protein structure are usually inaccurate and exhibit significant deviations from the true structure. The utility of models depends on the degree of these deviations. A number of predictive methods have been developed to discriminate between globally incorrect and approximately correct models. However, only a few methods predict the correctness of different parts of computational models. Several Model Quality Assessment Programs (MQAPs) have been developed to detect local inaccuracies in unrefined crystallographic models, but it is not known whether they are useful for computational models, which usually exhibit different and much more severe errors. The ability to identify local errors in models was tested for eight MQAPs: VERIFY3D, PROSA, BALA, ANOLEA, PROVE, TUNE, REFINER and PROQRES, on 8251 models from the CASP-5 and CASP-6 experiments, by calculating Spearman's rank correlation coefficients between the per-residue scores of these methods and the local deviations between C-alpha atoms in the models vs. the experimental structures. As a reference, we calculated the correlation between the local deviations and trivial features that can be calculated for each residue directly from the models, i.e. solvent accessibility, depth in the structure, and the number of local and non-local neighbours. We found that the absolute correlations between the scores returned by the MQAPs and the local deviations were poor for all methods. In addition, the scores of PROQRES and several other MQAPs strongly correlate with these 'trivial' features. Therefore, we developed MetaMQAP, a meta-predictor based on a multivariate regression model, which uses the scores of the above-mentioned methods but controls for the trivial parameters. MetaMQAP predicts the absolute deviation (in Ångström) of individual C-alpha atoms between the model and the unknown true structure, as well as global deviations (expressed as root mean square deviation and GDT_TS scores). Local model accuracy predicted by MetaMQAP shows an impressive correlation coefficient of 0.7 with true deviations from native structures, a significant improvement over all constituent primary MQAP scores. The global MetaMQAP score is correlated with model GDT_TS at the level of 0.89. Finally, we compared our method with the MQAPs that scored best in the 7th edition of CASP, using CASP7 server models (not included in the MetaMQAP training set) as test data. In our benchmark, MetaMQAP is outperformed only by PCONS6 and the method QA_556, both of which require comparison of multiple alternative models and score each model according to its similarity to the others. MetaMQAP is, however, the best among methods capable of evaluating single models. We implemented MetaMQAP as a web server, freely available to all academic users at https://genesilico.pl/toolkit/
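The per-residue evaluation described above reduces to Spearman's rank correlation between MQAP scores and local C-alpha deviations, i.e. the Pearson correlation of the two rank vectors. A self-contained sketch on toy data (not the MetaMQAP implementation):

```python
# Illustrative Spearman's rank correlation via Pearson correlation of ranks,
# with average ranks for ties. Scores and deviations below are toy values.

def ranks(xs):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

scores = [0.9, 0.7, 0.8, 0.2, 0.4]       # per-residue quality scores (toy)
devs = [0.5, 1.2, 0.9, 3.1, 2.2]         # C-alpha deviations in Angstrom (toy)
print(round(spearman(scores, devs), 3))  # -1.0: ranks perfectly anti-correlated
```

A strongly negative rho is what a good MQAP should produce: high scores where the model deviates least from the true structure.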

  15. Diffuse ultraviolet erythemal irradiance on inclined planes: a comparison of experimental and modeled data.

    PubMed

    Utrillas, María P; Marín, María J; Esteve, Anna R; Estellés, Victor; Tena, Fernando; Cañada, Javier; Martínez-Lozano, José A

    2009-01-01

Values of measured and modeled diffuse UV erythemal irradiance (UVER) for all sky conditions are compared on planes inclined at 40 degrees and oriented north, south, east and west. The models used for simulating diffuse UVER are of the geometric type, mainly the Isotropic, Klucher, Hay, Muneer, Reindl and Schauberger models. To analyze the precision of the models, statistical estimators such as the root mean square deviation, mean absolute deviation and mean bias deviation were used. All the analyzed models reproduce the diffuse UVER adequately on the south-facing plane, with greater discrepancies for the other inclined planes. When the models are applied to cloud-free conditions, the errors obtained are higher because the anisotropy of the sky dome becomes more important and the models do not estimate the diffuse UVER accurately.
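The three statistical estimators named above can be sketched as follows, using invented measured/modeled UVER values rather than the study's data:

```python
# Minimal sketch of RMSD, MAD and MBD for model-vs-measurement comparison.
# The irradiance values below are invented for illustration.

def rmsd(measured, modeled):
    """Root mean square deviation."""
    n = len(measured)
    return (sum((m - o) ** 2 for o, m in zip(measured, modeled)) / n) ** 0.5

def mad(measured, modeled):
    """Mean absolute deviation."""
    return sum(abs(m - o) for o, m in zip(measured, modeled)) / len(measured)

def mbd(measured, modeled):
    """Mean bias deviation: positive when the model overestimates on average."""
    return sum(m - o for o, m in zip(measured, modeled)) / len(measured)

measured = [10.0, 12.0, 14.0, 16.0]   # invented diffuse UVER values
modeled = [11.0, 11.5, 15.0, 15.5]    # invented model output

print(round(rmsd(measured, modeled), 3))  # 0.791
print(round(mad(measured, modeled), 3))   # 0.75
print(round(mbd(measured, modeled), 3))   # 0.25
```

RMSD and MAD measure scatter, while MBD isolates systematic over- or under-estimation, which is why the three are usually reported together.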

  16. [Influence of human personal features on acoustic correlates of speech emotional intonation characteristics].

    PubMed

    Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M

    2009-01-01

A comparative study of the acoustic correlates of emotional intonation was conducted on two types of speech material: meaningful speech utterances and short meaningless words. The corpus of speech signals with different emotional intonations (happy, angry, frightened, sad and neutral) was created using the actor's method of simulating emotions. Native Russian speakers aged 20-70 years (both professional actors and non-actors) participated in the study. In the corpus, the following characteristics were analyzed: mean values and standard deviations of the power, the fundamental frequency, the frequencies of the first and second formants, and the utterance duration. Comparison of each emotional intonation with "neutral" utterances showed the greatest deviations in the fundamental frequency and the frequency of the first formant. The direction of these deviations was independent of the semantic content of the speech utterance and its duration, and of the speaker's age, gender, and actor or non-actor status, although the personal features of the speakers affected the absolute values of these frequencies.

  17. Vapor-liquid equilibria for an R134a/lubricant mixture: Measurements and equation-of-state modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, M.L.; Holcomb, C.D.; Outcalt, S.L.

    2000-07-01

The authors measured bubble point pressures and coexisting liquid densities for two mixtures of R-134a and a polyolester (POE) lubricant. The mass fraction of the lubricant was approximately 9% and 12%, and the temperature ranged from 280 K to 355 K. The authors used the Elliott, Suresh, and Donohue (ESD) equation of state to model the bubble point pressure data. The bubble point pressures were represented with an average absolute deviation of 2.5%. A binary interaction parameter reduced the deviation to 1.4%. The authors also applied the ESD model to other R-134a/POE lubricant data in the literature. As the concentration of the lubricant increased, the performance of the model deteriorated markedly. However, the use of a single binary interaction parameter reduced the deviations significantly.

  18. A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center

    DTIC Science & Technology

    2009-06-01

use just the previous month, moving average 2-month period (MA2) uses the average from the previous two months, moving average 3-month period (MA3...ED prior to discharge by provider) MA2/MA3/MA4 - moving averages of 2-4 months in length; MAD - mean absolute deviation (measure of accuracy for

  19. Observations on the method of determining the velocity of airships

    NASA Technical Reports Server (NTRS)

    Volterra, Vito

    1921-01-01

    To obtain the absolute velocity of an airship by knowing the speed at which two routes are covered, we have only to determine the geographical direction of the routes which we locate from a map, and the angles of routes as given by the compass, after correcting for the variation (the algebraical sum of the local magnetic declination and the deviation).

  20. File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering

    DTIC Science & Technology

    2013-03-21

consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then...the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change

  1. Development of a Dual-Pump CARS System for Measurements in a Supersonic Combusting Free Jet

    NASA Technical Reports Server (NTRS)

    Magnotti, Gaetano; Cutler, Andrew D.; Danehy, Paul

    2012-01-01

This work describes the development of a dual-pump CARS system for simultaneous measurements of temperature and the absolute mole fractions of N2, O2 and H2 in a laboratory-scale supersonic combusting free jet. Changes to the experimental set-up and the data analysis that improve the quality of the measurements in this turbulent, high-temperature reacting flow are described. The accuracy and precision of the instrument were determined using data collected in a Hencken burner flame. For temperatures above 800 K, errors in absolute mole fraction are within 1.5, 0.5, and 1% of the total composition for N2, O2 and H2, respectively. Estimated standard deviations based on 500 single shots are between 10 and 65 K for the temperature, between 0.5 and 1.7% of the total composition for O2, and between 1.5 and 3.4% for N2. The standard deviation for H2 is 10% of the average measured mole fraction. Results obtained in the jet with and without combustion are illustrated, and the capabilities and limitations of the dual-pump CARS instrument are discussed.

  2. Modelling PET radionuclide production in tissue and external targets using Geant4

    NASA Astrophysics Data System (ADS)

    Amin, T.; Infantino, A.; Lindsay, C.; Barlow, R.; Hoehr, C.

    2017-07-01

The Proton Therapy Facility at TRIUMF provides 74 MeV protons extracted from a 500 MeV H- cyclotron for ocular melanoma treatments. During treatment, positron-emitting radionuclides such as 11C, 15O and 13N are produced in patient tissue. Using PET scanners, the isotopic activity distribution can be measured for in-vivo range verification. A second cyclotron, the TR13, delivers 13 MeV protons onto liquid targets for the production of PET radionuclides such as 18F, 13N or 68Ga for medical applications. The aim of this work was to validate Geant4 against FLUKA and experimental measurements for the production of the above-mentioned isotopes with the two cyclotrons. The results show variable degrees of agreement. For proton therapy, the proton-range agreement was within 2 mm for 11C activity, whereas 13N disagreed. For liquid targets at the TR13, the average absolute deviation ratio between FLUKA and experiment was 1.9±2.7, whereas the average absolute deviation ratio between Geant4 and experiment was 0.6±0.4. This is due to the uncertainties present in experimentally determined reaction cross sections.

  3. Patient with oligodontia treated with a miniscrew for unilateral mesial movement of the maxillary molars and alignment of an impacted third molar.

    PubMed

    Maeda, Aya; Sakoguchi, Yoko; Miyawaki, Shouichi

    2013-09-01

    This report describes the treatment of a 20-year-old woman with a dental midline deviation and 7 congenitally missing premolars. She had retained a maxillary right deciduous canine and 4 deciduous second molars, and she had an impacted maxillary right third molar. The maxillary right deciduous second molar was extracted, and the space was nearly closed by mesial movement of the maxillary right molars using an edgewise appliance and a miniscrew for absolute anchorage. The miniscrew was removed, and the extraction space of the maxillary right deciduous canine was closed, correcting the dental midline deviation. After the mesial movement of the maxillary right molars, the impacted right third molar was aligned. To prevent root resorption, the retained left deciduous second molars were not aligned by the edgewise appliance. The occlusal contact area and the maximum occlusal force increased over the 2 years of retention. The miniscrew was useful for absolute anchorage for unilateral mesial movement of the maxillary molars and for the creation of eruption space and alignment of the impacted third molar in a patient with oligodontia. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  4. Laser frequency stabilization using a commercial wavelength meter

    NASA Astrophysics Data System (ADS)

    Couturier, Luc; Nosske, Ingo; Hu, Fachao; Tan, Canzhu; Qiao, Chang; Jiang, Y. H.; Chen, Peng; Weidemüller, Matthias

    2018-04-01

We present the characterization of a laser frequency stabilization scheme using a state-of-the-art wavelength meter based on solid Fizeau interferometers. For a frequency-doubled Ti:sapphire laser operated at 461 nm, an absolute Allan deviation below 10⁻⁹ with a standard deviation of 1 MHz over 10 h is achieved. Using this laser for cooling and trapping of strontium atoms, the wavemeter scheme provides excellent stability in single-channel operation. Multi-channel operation with a multimode fiber switch results in fluctuations of the atomic fluorescence correlated to residual frequency excursions of the laser. The wavemeter-based frequency stabilization scheme can be applied to a wide range of atoms and molecules for laser spectroscopy, cooling, and trapping.
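The Allan deviation quoted above characterizes frequency stability as a function of averaging time. A minimal sketch of the non-overlapping estimator on synthetic fractional-frequency data (not the authors' analysis code):

```python
# Non-overlapping Allan deviation at averaging factor m (tau = m * sample
# interval), as commonly used for laser-frequency stability. The alternating
# synthetic data below stand in for wavemeter fractional-frequency readings.

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency data y."""
    # Average consecutive blocks of m samples, dropping any remainder.
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - len(y) % m, m)]
    # Mean squared difference of adjacent block averages, halved, then rooted.
    diffs = [(blocks[k + 1] - blocks[k]) ** 2 for k in range(len(blocks) - 1)]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5

y = [1e-9, -1e-9] * 50   # synthetic alternating fractional-frequency offsets
print(allan_deviation(y, m=1))  # ~1.414e-9, i.e. sqrt(2) * 1e-9
print(allan_deviation(y, m=2))  # 0.0: averaging over 2 samples cancels the noise
```

For white frequency noise the Allan deviation falls with increasing tau, which is the behaviour such stability plots are read for.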

  5. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set up as a large, weakly non-linear inverse problem that ultimately provides estimates of the baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily-to-weekly measurement frequency recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory there are very localised signals, spanning a large range of temporal frequencies, that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and timescales of less than a day.

  6. Comparison of absolute gain photometric calibration between Planck/HFI and Herschel/SPIRE at 545 and 857 GHz

    NASA Astrophysics Data System (ADS)

    Bertincourt, B.; Lagache, G.; Martin, P. G.; Schulz, B.; Conversi, L.; Dassas, K.; Maurin, L.; Abergel, A.; Beelen, A.; Bernard, J.-P.; Crill, B. P.; Dole, H.; Eales, S.; Gudmundsson, J. E.; Lellouch, E.; Moreno, R.; Perdereau, O.

    2016-04-01

    We compare the absolute gain photometric calibration of the Planck/HFI and Herschel/SPIRE instruments on diffuse emission. The absolute calibration of HFI and SPIRE each relies on planet flux measurements and comparison with theoretical far-infrared emission models of planetary atmospheres. We measure the photometric cross calibration between the instruments at two overlapping bands, 545 GHz/500 μm and 857 GHz/350 μm. The SPIRE maps used have been processed in the Herschel Interactive Processing Environment (Version 12) and the HFI data are from the 2015 Public Data Release 2. For our study we used 15 large fields observed with SPIRE, which cover a total of about 120 deg2. We have selected these fields carefully to provide high signal-to-noise ratio, avoid residual systematics in the SPIRE maps, and span a wide range of surface brightness. The HFI maps are bandpass-corrected to match the emission observed by the SPIRE bandpasses. The SPIRE maps are convolved to match the HFI beam and put on a common pixel grid. We measure the cross-calibration relative gain between the instruments using two methods in each field, pixel-to-pixel correlation and angular power spectrum measurements. The SPIRE/HFI relative gains are 1.047 (±0.0069) and 1.003 (±0.0080) at 545 and 857 GHz, respectively, indicating very good agreement between the instruments. These relative gains deviate from unity by much less than the uncertainty of the absolute extended emission calibration, which is about 6.4% and 9.5% for HFI and SPIRE, respectively, but the deviations are comparable to the values 1.4% and 5.5% for HFI and SPIRE if the uncertainty from models of the common calibrator can be discounted. Of the 5.5% uncertainty for SPIRE, 4% arises from the uncertainty of the effective beam solid angle, which impacts the adopted SPIRE point source to extended source unit conversion factor, highlighting that as a focus for refinement.

  7. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Central limit theorem (CLT) methods producing Gaussian approximations are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail-distribution events, where the CLT is especially bad. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that large deviations are more reliable than Gaussian approximations, both in absolute values and in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
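The abstract's central point, that Gaussian (CLT) approximations degrade badly in the tail, can be illustrated with a deliberately simplified independent-letters model: a binomial pattern count rather than a Markov chain. This is only a sketch of the phenomenon, not the LD-SPatt method:

```python
# Exact binomial tail probability vs. the CLT normal approximation for an
# over-represented pattern count. Toy model: pattern occurrences are i.i.d.
# Bernoulli(p) over n positions (no Markov dependence).
import math

def binom_tail(n, p, k):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def normal_tail(n, p, k):
    """CLT approximation of P(X >= k) via the normal survival function."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

n, p, k = 1000, 0.01, 30   # pattern expected ~10 times, observed 30
print(binom_tail(n, p, k))   # exact tail, orders of magnitude larger
print(normal_tail(n, p, k))  # Gaussian approximation, far too small here
```

Deep in the tail, the normal approximation underestimates the true p-value by orders of magnitude, which is exactly the regime where large-deviations (or compound Poisson) techniques are needed.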

  8. Linking Comparisons of Absolute Gravimeters: A Proof of Concept for a new Global Absolute Gravity Reference System.

    NASA Astrophysics Data System (ADS)

    Wziontek, H.; Palinkas, V.; Falk, R.; Vaľko, M.

    2016-12-01

For decades, absolute gravimeters have been compared on a regular basis at the international level, starting at the International Bureau of Weights and Measures (BIPM) in 1981. Usually, these comparisons are based on constant reference values deduced from all accepted measurements acquired during the comparison period; temporal changes between comparison epochs are usually not considered. Resolution No. 2, adopted by the IAG during the IUGG General Assembly in Prague in 2015, initiates the establishment of a Global Absolute Gravity Reference System based on key comparisons of absolute gravimeters (AG) under the International Committee for Weights and Measures (CIPM), in order to establish a common level in the microGal range. A stable and unique reference frame can only be achieved if different AGs take part in different kinds of comparisons. Systematic deviations between the respective comparison reference values can be detected if the AGs can be considered stable over time. The continuous operation of superconducting gravimeters (SG) at selected stations further supports the temporal link of comparison reference values by establishing a reference function over time. By homogeneously reprocessing different comparison epochs and including AG and SG time series at selected stations, links between several comparisons will be established and temporal comparison reference functions will be derived. In this way, comparisons at a regional level can be traced back to the level of key comparisons, providing a reference for other absolute gravimeters. It will be demonstrated and discussed how such a concept can be used to support the future absolute gravity reference system.

  9. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecasts when the Box-Cox transformation was used as data preprocessing.
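The Box-Cox preprocessing mentioned above can be sketched as follows, assuming a fixed transformation parameter lambda rather than one estimated from the data; the price series is invented for illustration:

```python
# Box-Cox transform and its inverse, as used to preprocess a series before
# model training and to map predictions back to the original scale.
# A fixed lambda is assumed here; in practice it is usually estimated.
import math

def boxcox(x, lam):
    """Box-Cox transform of a positive value x."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

def boxcox_inv(y, lam):
    """Inverse Box-Cox transform, back to the original scale."""
    if lam == 0:
        return math.exp(y)
    return (lam * y + 1) ** (1 / lam)

prices = [1200.0, 1250.0, 1310.0]   # invented gold-price series
lam = 0.5
transformed = [boxcox(p, lam) for p in prices]       # train the model on these
recovered = [boxcox_inv(t, lam) for t in transformed]  # invert the predictions
print(all(abs(a - b) < 1e-6 for a, b in zip(prices, recovered)))  # True
```

The transform stabilizes variance and reduces skew, which is the usual reason such preprocessing improves neural-network forecasts.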

  10. Regional comparison of absolute gravimeters SIM.M.G-K1 key comparison

    NASA Astrophysics Data System (ADS)

    Newell, D. B.; van Westrum, D.; Francis, O.; Kanney, J.; Liard, J.; Ramirez, A. E.; Lucero, B.; Ellis, B.; Greco, F.; Pistorio, A.; Reudink, R.; Iacovone, D.; Baccaro, F.; Silliker, J.; Wheeler, R. D.; Falk, R.; Ruelke, A.

    2017-01-01

Twelve absolute gravimeters were compared during the regional key comparison SIM.M.G-K1 of absolute gravimeters. Four of the gravimeters were from different NMIs and DIs. The comparison was linked to the CCM.G-K2 through EURAMET.M.G-K2 via the DI gravimeter FG5X-216. Overall, the results and uncertainties indicate excellent agreement among the gravimeters, with a standard deviation of the gravimeters' degrees of equivalence (DoEs) better than 1.3 μGal. In the official solution, all the gravimeters are in equivalence well within the declared uncertainties. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  11. Absolute distance measurement with correction of air refractive index by using two-color dispersive interferometry.

    PubMed

    Wu, Hanzhong; Zhang, Fumin; Liu, Tingyang; Li, Jianshuang; Qu, Xinghua

    2016-10-17

Two-color interferometry is powerful for the correction of the air refractive index, especially in turbulent air over long distances, since the empirical equations can introduce considerable measurement uncertainty if the environmental parameters cannot be measured with sufficient precision. In this paper, we demonstrate a method for absolute distance measurement with high-accuracy correction of the air refractive index using two-color dispersive interferometry. The distances corresponding to the two wavelengths can be measured via the spectrograms captured by a CCD camera pair in real time. In the long-term experiment on the correction of the air refractive index, the experimental results show a standard deviation of 3.3 × 10⁻⁸ for 12 h of continuous measurement without precise knowledge of the environmental conditions, while the variation of the air refractive index is about 2 × 10⁻⁶. In the case of absolute distance measurement, the comparison with a fringe-counting interferometer shows agreement within 2.5 μm over a 12 m range.

  12. 3D shape measurements with a single interferometric sensor for in-situ lathe monitoring

    NASA Astrophysics Data System (ADS)

    Kuschmierz, R.; Huang, Y.; Czarske, J.; Metschke, S.; Löffler, F.; Fischer, A.

    2015-05-01

Temperature drifts, tool deterioration, unknown vibrations and spindle play are major effects that decrease the achievable precision of computerized numerically controlled (CNC) lathes and lead to shape deviations between the processed workpieces. Since no measurement system currently exists for fast, precise, in-situ 3D shape monitoring with keyhole access, much effort has to be made to simulate and compensate for these effects. We therefore introduce an optical interferometric sensor for absolute 3D shape measurements, which was integrated into a working lathe. Matched to the spindle rotational speed, a measurement rate of 2,500 Hz was achieved. In-situ absolute shape, surface profile and vibration measurements are presented. While thermal drifts of the sensor led to errors of several µm in the absolute shape, reference measurements with a coordinate measuring machine show that the surface profile could be measured with an uncertainty below one micron. Additionally, the spindle play of 0.8 µm was measured with the sensor.

  13. Rectangularization of the survival curve in The Netherlands, 1950-1992.

    PubMed

    Nusselder, W J; Mackenbach, J P

    1996-12-01

    In this article we determine whether rectangularization of the survival curve occurred in the Netherlands in the period 1950-1992. Rectangularization is defined as a trend toward a more rectangular shape of the survival curve due to increased survival and concentration of deaths around the mean age at death. We distinguish between absolute and relative rectangularization, depending on whether an increase in life expectancy is accompanied by concentration of deaths into a smaller age interval or into a smaller proportion of total life expectancy. We used measures of variability based on Keyfitz' H and the standard deviation, both life table-based. Our results show that absolute and relative rectangularization of the entire survival curve occurred in both sexes and over the complete period (except for the years 1955-1959 and 1965-1969 in men). At older ages, results differ between sexes, periods, and an absolute versus a relative definition of rectangularization. Above age 60 1/2, relative rectangularization occurred in women over the complete period and in men since 1975-1979 only, whereas absolute rectangularization occurred in both sexes since the period of 1980-1984. The implications of the recent rectangularization at older ages for achieving compression of morbidity are discussed.

  14. Mammographic Density Phenotypes and Risk of Breast Cancer: A Meta-analysis

    PubMed Central

    Graff, Rebecca E.; Ursin, Giske; dos Santos Silva, Isabel; McCormack, Valerie; Baglietto, Laura; Vachon, Celine; Bakker, Marije F.; Giles, Graham G.; Chia, Kee Seng; Czene, Kamila; Eriksson, Louise; Hall, Per; Hartman, Mikael; Warren, Ruth M. L.; Hislop, Greg; Chiarelli, Anna M.; Hopper, John L.; Krishnan, Kavitha; Li, Jingmei; Li, Qing; Pagano, Ian; Rosner, Bernard A.; Wong, Chia Siong; Scott, Christopher; Stone, Jennifer; Maskarinec, Gertraud; Boyd, Norman F.; van Gils, Carla H.

    2014-01-01

    Background Fibroglandular breast tissue appears dense on mammogram, whereas fat appears nondense. It is unclear whether absolute or percentage dense area more strongly predicts breast cancer risk and whether absolute nondense area is independently associated with risk. Methods We conducted a meta-analysis of 13 case–control studies providing results from logistic regressions for associations between one standard deviation (SD) increments in mammographic density phenotypes and breast cancer risk. We used random-effects models to calculate pooled odds ratios and 95% confidence intervals (CIs). All tests were two-sided with P less than .05 considered to be statistically significant. Results Among premenopausal women (n = 1776 case patients; n = 2834 control subjects), summary odds ratios were 1.37 (95% CI = 1.29 to 1.47) for absolute dense area, 0.78 (95% CI = 0.71 to 0.86) for absolute nondense area, and 1.52 (95% CI = 1.39 to 1.66) for percentage dense area when pooling estimates adjusted for age, body mass index, and parity. Corresponding odds ratios among postmenopausal women (n = 6643 case patients; n = 11187 control subjects) were 1.38 (95% CI = 1.31 to 1.44), 0.79 (95% CI = 0.73 to 0.85), and 1.53 (95% CI = 1.44 to 1.64). After additional adjustment for absolute dense area, associations between absolute nondense area and breast cancer became attenuated or null in several studies and summary odds ratios became 0.82 (95% CI = 0.71 to 0.94; P heterogeneity = .02) for premenopausal and 0.85 (95% CI = 0.75 to 0.96; P heterogeneity < .01) for postmenopausal women. Conclusions The results suggest that percentage dense area is a stronger breast cancer risk factor than absolute dense area. Absolute nondense area was inversely associated with breast cancer risk, but it is unclear whether the association is independent of absolute dense area. PMID:24816206
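The pooling step described above (random-effects combination of per-study estimates) can be sketched as follows. This is a generic DerSimonian-Laird inverse-variance computation on invented log odds ratios and standard errors, not the study's actual data or code.

```python
# Hedged sketch of inverse-variance random-effects pooling (DerSimonian-Laird),
# the kind of model used to combine per-study odds ratios; the log odds ratios
# and standard errors below are invented for illustration.
import math

log_or = [math.log(x) for x in (1.30, 1.45, 1.38, 1.25, 1.52)]  # per-study ln(OR)
se     = [0.08, 0.10, 0.07, 0.12, 0.09]                         # their standard errors

w = [1 / s ** 2 for s in se]                     # fixed-effect weights
mu_fe = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)

# DerSimonian-Laird between-study variance tau^2 (truncated at zero)
q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, log_or))
df = len(log_or) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and 95% confidence interval
w_re = [1 / (s ** 2 + tau2) for s in se]
mu = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se_mu = math.sqrt(1 / sum(w_re))
lo, hi = math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)
print(f"pooled OR = {math.exp(mu):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```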

  15. Directly relating gas-phase cluster measurements to solution-phase hydrolysis, the absolute standard hydrogen electrode potential, and the absolute proton solvation energy.

    PubMed

    Donald, William A; Leib, Ryan D; O'Brien, Jeremy T; Williams, Evan R

    2009-06-08

Solution-phase, half-cell potentials are measured relative to other half-cell potentials, resulting in a thermochemical ladder that is anchored to the standard hydrogen electrode (SHE), which is assigned an arbitrary value of 0 V. A new method for measuring the absolute SHE potential is demonstrated in which gaseous nanodrops containing divalent alkaline-earth or transition-metal ions are reduced by thermally generated electrons. Energies for the reactions 1) M(H2O)24^2+(g) + e−(g) → M(H2O)24^+(g) and 2) M(H2O)24^2+(g) + e−(g) → MOH(H2O)23^+(g) + H(g), and the hydrogen atom affinities of MOH(H2O)23^+(g), are obtained from the number of water molecules lost through each pathway. From these measurements on clusters containing nine different metal ions and known thermochemical values that include solution hydrolysis energies, an average absolute SHE potential of +4.29 V vs. e−(g) (standard deviation of 0.02 V) and a real proton solvation free energy of −265 kcal mol−1 are obtained. With this method, the absolute SHE potential can be obtained from a one-electron reduction of nanodrops containing divalent ions that are not observed to undergo one-electron reduction in aqueous solution.

  17. Intensity stabilisation of optical pulse sequences for coherent control of laser-driven qubits

    NASA Astrophysics Data System (ADS)

    Thom, Joseph; Yuen, Ben; Wilpers, Guido; Riis, Erling; Sinclair, Alastair G.

    2018-05-01

We demonstrate a system for intensity stabilisation of optical pulse sequences used in laser-driven quantum control of trapped ions. Intensity instability is minimised by active stabilisation of the power (over a dynamic range of > 10^4) and position of the focused beam at the ion. The fractional Allan deviations in power were found to be < 2.2 × 10^-4 for averaging times from 1 to 16,384 s. Over similar times, the absolute Allan deviation of the beam position is < 0.1 μm for a 45 μm beam diameter. Using these residual power and position instabilities, we estimate the associated contributions to infidelity in example qubit logic gates to be below 10^-6 per gate.

  18. Phytoremediation of palm oil mill secondary effluent (POMSE) by Chrysopogon zizanioides (L.) using artificial neural networks.

    PubMed

    Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin

    2017-05-04

Artificial neural networks (ANNs) have been widely used to solve problems because of their reliable, robust, and salient ability to capture nonlinear relationships between variables in complex systems. In this study, an ANN was applied to model Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, namely POMSE concentration, vetiver slips density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slips density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
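The three model-comparison metrics mentioned in the abstract (root mean squared error, absolute average deviation, and coefficient of determination) can be sketched numerically. The observed and predicted removal percentages below are invented for illustration, not the study's data.

```python
# Minimal sketch of the three model-selection metrics named in the abstract:
# root mean squared error (RMSE), absolute average deviation (AAD), and
# coefficient of determination (R^2). All values are illustrative.
import math

observed  = [62.0, 71.5, 80.2, 85.0, 90.3, 94.1]   # e.g. % COD removal (invented)
predicted = [60.8, 72.9, 79.0, 86.2, 89.5, 94.8]

n = len(observed)
rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# Absolute average deviation, expressed relative to the observed values (%)
aad = 100.0 / n * sum(abs(o - p) / o for o, p in zip(observed, predicted))

# Coefficient of determination: 1 - SS_res / SS_tot
mean_obs = sum(observed) / n
ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
ss_tot = sum((o - mean_obs) ** 2 for o in observed)
r2 = 1.0 - ss_res / ss_tot

print(f"RMSE = {rmse:.3f}, AAD = {aad:.2f}%, R^2 = {r2:.4f}")
```

Selecting the topology that minimizes RMSE and AAD while maximizing R^2 is the comparison procedure the abstract describes.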

  19. Who's biased? A meta-analysis of buyer-seller differences in the pricing of lotteries.

    PubMed

    Yechiam, Eldad; Ashby, Nathaniel J S; Pachur, Thorsten

    2017-05-01

    A large body of empirical research has examined the impact of trading perspective on pricing of consumer products, with the typical finding being that selling prices exceed buying prices (i.e., the endowment effect). Using a meta-analytic approach, we examine to what extent the endowment effect also emerges in the pricing of monetary lotteries. As monetary lotteries have a clearly defined normative value, we also assess whether one trading perspective is more biased than the other. We consider several indicators of bias: absolute deviation from expected values, rank correlation with expected values, overall variance, and per-unit variance. The meta-analysis, which includes 35 articles, indicates that selling prices considerably exceed buying prices (Cohen's d = 0.58). Importantly, we also find that selling prices deviate less from the lotteries' expected values than buying prices, both in absolute and in relative terms. Selling prices also exhibit lower variance per unit. Hierarchical Bayesian modeling with cumulative prospect theory indicates that buyers have lower probability sensitivity and a more pronounced response bias. The finding that selling prices are more in line with normative standards than buying prices challenges the prominent account whereby sellers' valuations are upward biased due to loss aversion, and supports alternative theoretical accounts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
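One of the bias indicators used above, absolute deviation of prices from the lotteries' expected values, can be sketched in a few lines. The lotteries and the buying/selling prices below are invented for illustration, not the meta-analytic data.

```python
# Hedged sketch of one bias indicator from the abstract: mean absolute
# deviation of stated prices from lottery expected values. All data invented.
def expected_value(prizes_probs):
    return sum(x * p for x, p in prizes_probs)

# Five hypothetical lotteries as (prize, probability) pairs
lotteries = [
    [(10, 0.5), (0, 0.5)],
    [(20, 0.25), (0, 0.75)],
    [(8, 0.8), (0, 0.2)],
    [(50, 0.1), (0, 0.9)],
    [(12, 0.4), (0, 0.6)],
]
evs = [expected_value(l) for l in lotteries]   # approx [5.0, 5.0, 6.4, 5.0, 4.8]

selling = [6.0, 5.5, 6.8, 7.0, 5.2]   # sellers' stated prices (invented)
buying  = [3.5, 3.0, 5.0, 2.5, 3.8]   # buyers' stated prices (invented)

def mean_abs_dev(prices, evs):
    """Average absolute deviation of prices from the normative expected values."""
    return sum(abs(p - e) for p, e in zip(prices, evs)) / len(evs)

print("selling MAD from EV:", mean_abs_dev(selling, evs))
print("buying  MAD from EV:", mean_abs_dev(buying, evs))
```

In this toy example the selling prices sit closer to the expected values than the buying prices, mirroring the direction of the meta-analytic finding.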

  20. Inter-laboratory validation of bioaccessibility testing for metals.

    PubMed

    Henderson, Rayetta G; Verougstraete, Violaine; Anderson, Kim; Arbildua, José J; Brock, Thomas O; Brouwers, Tony; Cappellini, Danielle; Delbeke, Katrien; Herting, Gunilla; Hixon, Greg; Odnevall Wallinder, Inger; Rodriguez, Patricio H; Van Assche, Frank; Wilrich, Peter; Oller, Adriana R

    2014-10-01

    Bioelution assays are fast, simple alternatives to in vivo testing. In this study, the intra- and inter-laboratory variability in bioaccessibility data generated by bioelution tests were evaluated in synthetic fluids relevant to oral, inhalation, and dermal exposure. Using one defined protocol, five laboratories measured metal release from cobalt oxide, cobalt powder, copper concentrate, Inconel alloy, leaded brass alloy, and nickel sulfate hexahydrate. Standard deviations of repeatability (sr) and reproducibility (sR) were used to evaluate the intra- and inter-laboratory variability, respectively. Examination of the sR:sr ratios demonstrated that, while gastric and lysosomal fluids had reasonably good reproducibility, other fluids did not show as good concordance between laboratories. Relative standard deviation (RSD) analysis showed more favorable reproducibility outcomes for some data sets; overall results varied more between- than within-laboratories. RSD analysis of sr showed good within-laboratory variability for all conditions except some metals in interstitial fluid. In general, these findings indicate that absolute bioaccessibility results in some biological fluids may vary between different laboratories. However, for most applications, measures of relative bioaccessibility are needed, diminishing the requirement for high inter-laboratory reproducibility in absolute metal releases. The inter-laboratory exercise suggests that the degrees of freedom within the protocol need to be addressed. Copyright © 2014 Elsevier Inc. All rights reserved.
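The repeatability/reproducibility decomposition used above can be sketched as a one-way analysis of variance across laboratories, in the spirit of ISO 5725. The replicate measurements per laboratory below are invented for illustration.

```python
# Hedged sketch of repeatability (sr) and reproducibility (sR) standard
# deviations from an inter-laboratory exercise; the released-metal values
# per laboratory are invented.
import math

# Replicate measurements (e.g. % metal released) from five hypothetical labs
labs = [
    [10.2, 10.5, 10.1],
    [11.0, 10.8, 11.3],
    [ 9.8, 10.0,  9.7],
    [10.6, 10.4, 10.9],
    [10.1, 10.3, 10.0],
]

p = len(labs)              # number of laboratories
n = len(labs[0])           # replicates per laboratory
grand = sum(sum(lab) for lab in labs) / (p * n)

# Within-laboratory (repeatability) variance: pooled variance of replicates
s_within = sum(
    sum((x - sum(lab) / n) ** 2 for x in lab) for lab in labs
) / (p * (n - 1))

# Between-laboratory variance component, from the laboratory means
lab_means = [sum(lab) / n for lab in labs]
s_means = sum((m - grand) ** 2 for m in lab_means) / (p - 1)
s_between = max(0.0, s_means - s_within / n)

sr = math.sqrt(s_within)                 # repeatability standard deviation
sR = math.sqrt(s_within + s_between)     # reproducibility standard deviation

print(f"sr = {sr:.3f}, sR = {sR:.3f}, sR/sr = {sR / sr:.2f}")
```

An sR:sr ratio close to 1 indicates that between-laboratory scatter adds little beyond within-laboratory noise, which is the concordance criterion the study examines.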

  1. The laminar organization of the motor cortex in monodactylous mammals: a comparative assessment based on horse, chimpanzee, and macaque.

    PubMed

    Cozzi, Bruno; De Giorgio, Andrea; Peruffo, A; Montelli, S; Panin, M; Bombardi, C; Grandis, A; Pirone, A; Zambenedetti, P; Corain, L; Granato, Alberto

    2017-08-01

The architecture of the neocortex classically consists of six layers, based on cytological criteria and on the layout of intra/interlaminar connections. Yet, the comparison of cortical cytoarchitectonic features across different species proves overwhelmingly difficult, due to the lack of a reliable model to analyze the connection patterns of the neuronal ensembles forming the different layers. We first defined a set of suitable morphometric cell features, obtained in digitized Nissl-stained sections of the motor cortex of the horse, chimpanzee, and crab-eating macaque. We then modeled them using a quite general non-parametric data representation model, showing that the assessment of neuronal cell complexity (i.e., how a given cell differs from its neighbors) can be performed using a suitable measure of statistical dispersion such as the mean absolute deviation (MAD). Along with the non-parametric combination and permutation methodology, application of the MAD allowed us not only to estimate, but also to compare and rank, motor cortical complexity across different species. As to the instances presented in this paper, we show that the pyramidal layers of the motor cortex of the horse are far more irregular than those of primates. This feature could be related to the different organization of the motor system in monodactylous mammals.
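The dispersion measure at the core of this record, the mean absolute deviation about the mean, is a one-liner. A minimal sketch, with invented "cell feature" values standing in for the morphometric measurements:

```python
# Minimal sketch of the mean absolute deviation (MAD) about the mean,
# the dispersion measure used in the abstract; feature values are invented.
def mean_absolute_deviation(values):
    """MAD about the mean: average of |x - mean| over the sample."""
    m = sum(values) / len(values)
    return sum(abs(x - m) for x in values) / len(values)

# Two hypothetical layers: same mean cell size, different irregularity
layer_a = [12.0, 12.5, 11.8, 12.2, 11.5]   # fairly regular
layer_b = [ 8.0, 16.0, 10.0, 14.0, 12.0]   # far more dispersed

print(mean_absolute_deviation(layer_a))
print(mean_absolute_deviation(layer_b))
```

Both toy layers share the same mean, but the second has a much larger MAD, which is the sense in which a "more irregular" layer differs from a regular one even when averages coincide.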

  2. Isobaric vapor-liquid equilibria for binary systems α-phenylethylamine + toluene and α-phenylethylamine + cyclohexane at 100 kPa

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoru; Gao, Yingyu; Ban, Chunlan; Huang, Qiang

    2016-09-01

In this paper the results of a vapor-liquid equilibria study at 100 kPa are presented for two binary systems: α-phenylethylamine(1) + toluene(2) and α-phenylethylamine(1) + cyclohexane(2). The binary VLE data of the two systems were correlated by the Wilson, NRTL, and UNIQUAC models. For each binary system the deviations between the results of the correlations and the experimental data have been calculated. For both binary systems the average relative deviations in temperature for the three models were lower than 0.99%. The average absolute deviations in vapour phase composition (mole fractions) and in temperature T were lower than 0.0271 and 1.93 K, respectively. Thermodynamic consistency has been tested for all vapor-liquid equilibrium data by the Herington method. The values calculated by the Wilson and NRTL equations satisfied the thermodynamic consistency test for both systems, while the values calculated by the UNIQUAC equation did not.

  3. Allan deviation analysis of financial return series

    NASA Astrophysics Data System (ADS)

    Hernández-Pérez, R.

    2012-05-01

We perform a scaling analysis of the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been demonstrated to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are opening-price daily series for assets from different markets over a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost every asset, after which the ADEV deviates from scaling; this suggests that clustering, long-range dependence, and non-stationarity signatures in the series drive the results for large observation intervals.
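The quantity driving this analysis, the Allan deviation at averaging length m, can be sketched in a few lines. The sketch below uses the non-overlapping definition on a synthetic white-noise "return" series (invented, not the market data), for which ADEV should fall roughly as m^(-1/2).

```python
# Hedged sketch of the non-overlapping Allan deviation applied to a return
# series; the Gaussian "returns" are synthetic stand-ins for market data.
import math, random

random.seed(1)
returns = [random.gauss(0.0, 1.0) for _ in range(4096)]

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging length m.

    sigma^2(m) = 1/2 * mean of (ybar_{k+1} - ybar_k)^2, where ybar_k are
    consecutive block averages of length m.
    """
    n_blocks = len(y) // m
    bars = [sum(y[k * m:(k + 1) * m]) / m for k in range(n_blocks)]
    diffs = [(bars[k + 1] - bars[k]) ** 2 for k in range(n_blocks - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# For white (uncorrelated) noise, ADEV decreases roughly as m^(-1/2)
for m in (1, 4, 16, 64):
    print(m, allan_deviation(returns, m))
```

Departures from this m^(-1/2) behavior at large m are exactly the signatures of clustering and long-range dependence the paper reports for absolute returns.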

  4. Simulation and Analysis of Topographic Effect on Land Surface Albedo over Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Hao, D.; Wen, J.; Xiao, Q.

    2017-12-01

Land surface albedo is one of the significant geophysical variables affecting the Earth's climate and controlling the surface radiation budget. Topography leads to the formation of shadows and the redistribution of incident radiation, which complicates the modeling and estimation of the land surface albedo. Some studies show that neglecting the topographic effect may lead to significant bias in estimating the land surface albedo for sloping terrain. However, for composite sloping terrain, the topographic effects on the albedo remain unclear. Accurately estimating the sub-topographic effect on the land surface albedo over composite sloping terrain presents a challenge for remote sensing modeling and applications. In our study, we focus on the development of a simplified estimation method for land surface albedo, including black-sky albedo (BSA) and white-sky albedo (WSA), of composite sloping terrain at a kilometer scale based on a fine-scale DEM (30 m), and we quantitatively investigate the topographic effects on the albedo. The albedo is affected by various factors such as solar zenith angle (SZA), solar azimuth angle (SAA), shadows, terrain occlusion, and the slope and aspect distribution of the micro-slopes. When SZA is 30°, the absolute and relative deviations between the BSA of flat terrain and that of rugged terrain reach 0.12 and 50%, respectively. When the mean slope of the terrain is 30.63° and SZA = 30°, the absolute deviation of BSA caused by SAA can reach 0.04. The maximal absolute and relative deviations between the WSA of flat terrain and that of rugged terrain reach 0.08 and 50%. These results demonstrate that the topographic effect has to be taken into account in albedo estimation.

  5. Gödel, Tarski, Turing and the Conundrum of Free Will

    NASA Astrophysics Data System (ADS)

    Nayakar, Chetan S. Mandayam; Srikanth, R.

    2014-07-01

    The problem of defining and locating free will (FW) in physics is studied. On the basis of logical paradoxes, we argue that FW has a metatheoretic character, like the concept of truth in Tarski's undefinability theorem. Free will exists relative to a base theory if there is freedom to deviate from the deterministic or indeterministic dynamics in the theory, with the deviations caused by parameters (representing will) in the meta-theory. By contrast, determinism and indeterminism do not require meta-theoretic considerations in their formalization, making FW a fundamentally new causal primitive. FW exists relative to the meta-theory if there is freedom for deviation, due to higher-order causes. Absolute free will, which corresponds to our intuitive introspective notion of free will, exists if this meta-theoretic hierarchy is infinite. We argue that this hierarchy corresponds to higher levels of uncomputability. In other words, at any finitely high order in the hierarchy, there are uncomputable deviations from the law at that order. Applied to the human condition, the hierarchy corresponds to deeper levels of the subconscious or unconscious mind. Possible ramifications of our model for physics, neuroscience and artificial intelligence (AI) are briefly considered.

  6. Decomposition Analyses Applied to a Complex Ultradian Biorhythm: The Oscillating NADH Oxidase Activity of Plasma Membranes Having a Potential Time-Keeping (Clock) Function

    PubMed Central

    Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James

    2003-01-01

Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measures of accuracy were used: the mean absolute percentage error (MAPE), a measure of the periodic oscillation; the mean absolute deviation (MAD), a measure of the absolute average deviation from the fitted values; and the mean squared deviation (MSD), a measure of the squared deviation from the fitted values, together with R-squared and the Henriksson-Merton p value. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
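The three accuracy measures can be computed directly from observed and fitted series. A minimal sketch, using their common time-series-decomposition definitions (mean absolute percentage error, mean absolute deviation, mean squared deviation) on invented activity values:

```python
# Minimal sketch of the three decomposition accuracy measures named above
# (MAPE, MAD, MSD); the observed/fitted activity values are invented.
observed = [0.81, 0.95, 1.10, 0.98, 0.84, 0.92, 1.08, 1.00]
fitted   = [0.80, 0.97, 1.07, 1.00, 0.86, 0.90, 1.05, 1.02]

n = len(observed)
errors = [o - f for o, f in zip(observed, fitted)]

mape = 100.0 / n * sum(abs(e) / abs(o) for e, o in zip(errors, observed))
mad  = sum(abs(e) for e in errors) / n          # mean absolute deviation
msd  = sum(e * e for e in errors) / n           # mean squared deviation

print(f"MAPE = {mape:.2f}%  MAD = {mad:.4f}  MSD = {msd:.6f}")
```

Smaller values of all three indicate a tighter decomposition fit; MSD penalizes large residuals more heavily than MAD.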

  7. Results of the first North American comparison of absolute gravimeters, NACAG-2010

    USGS Publications Warehouse

Schmerge, David; Francis, Olivier; Henton, J.; Ingles, D.; Jones, D.; Kennedy, Jeffrey R.; Krauterbluth, K.; Liard, J.; Newell, D.; Sands, R.; Schiel, J.; Silliker, J.; van Westrum, D.

    2012-01-01

The first North American Comparison of absolute gravimeters (NACAG-2010) was hosted by the National Oceanic and Atmospheric Administration at its newly renovated Table Mountain Geophysical Observatory (TMGO) north of Boulder, Colorado, in October 2010. NACAG-2010 and the renovation of TMGO are part of NGS's GRAV-D project (Gravity for the Redefinition of the American Vertical Datum). Nine absolute gravimeters from three countries participated in the comparison. Before the comparison, the gravimeter operators agreed to a protocol describing the strategy for measuring, calculating, and presenting the results. Nine sites were used to measure the free-fall acceleration g. Each gravimeter measured the value of g at a subset of three of the sites, for a total of 27 g-values for the comparison. The absolute gravimeters agree with one another with a standard deviation of 1.6 µGal (1 Gal = 1 cm s−2). The minimum and maximum offsets are −2.8 and 2.7 µGal. This is excellent agreement and can be attributed to multiple factors, including gravimeters in good working order, experienced operators, a quiet observatory, and the short duration of the experiment. These results can be used to standardize gravity surveys internationally.

  8. Effect of helicity on the correlation time of large scales in turbulent flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2017-11-01

Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^(-1/2) k^(-1) scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ∼ H^(-1/2) k^(-1/2). This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ∼ E^(-1/2) k^(-1) scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute-equilibrium solutions.

  9. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

[Garbled excerpt of the section's table of symbols; recoverable entries include: β, ratio of diameters (m/m); β, atomic oxygen-to-carbon ratio (mol/mol); specific fuel consumption (g/(kW·hr)); F, F-test statistic; f, frequency (Hz); S, Sutherland constant (K); SEE, standard estimate of error; standard deviation; and T, absolute temperature.]

  10. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

[Garbled excerpt of the section's table of symbols; recoverable entries include: β, ratio of diameters (m/m); β, atomic oxygen-to-carbon ratio (mol/mol); specific fuel consumption (g/(kW·hr)); F, F-test statistic; f, frequency (Hz); S, Sutherland constant (K); SEE, standard estimate of error; standard deviation; and T, absolute temperature.]

  11. High School Students' Accuracy in Estimating the Cost of College: A Proposed Methodological Approach and Differences among Racial/Ethnic Groups and College Financial-Related Factors

    ERIC Educational Resources Information Center

    Nienhusser, H. Kenny; Oshio, Toko

    2017-01-01

    High school students' accuracy in estimating the cost of college (AECC) was examined by utilizing a new methodological approach, the absolute-deviation-continuous construct. This study used the High School Longitudinal Study of 2009 (HSLS:09) data and examined 10,530 11th grade students in order to measure their AECC for 4-year public and private…

  12. [Noninvasive total hemoglobin monitoring based on multiwave spectrophotometry in obstetrics and gynecology].

    PubMed

    Pyregov, A V; Ovechkin, A Iu; Petrov, S V

    2012-01-01

The results of a prospective randomized comparative study of two total hemoglobin estimation methods are presented: laboratory testing and a continuous noninvasive technique using multiwave spectrophotometry on the Masimo Rainbow SET. The research was carried out in two stages: the first stage (gynecology) included 67 patients, and the second stage (obstetrics) included 44 patients during and after Cesarean section. The standard deviation of the noninvasive total hemoglobin estimate from the absolute (invasive) values was 7.2 and 4.1%, and the standard deviation within a sample was 5.2 and 2.7%, in gynecologic operations and surgical delivery respectively, confirming the absence of reliable differences between the indicators. The method of continuous noninvasive total hemoglobin estimation with multiwave spectrophotometry based on the Masimo Rainbow SET technology can be recommended for use in obstetrics and gynecology.

  13. Measuring the rate of change of voice fundamental frequency in fluent speech during mental depression.

    PubMed

    Nilsonne, A; Sundberg, J; Ternström, S; Askenfelt, A

    1988-02-01

    A method of measuring the rate of change of fundamental frequency has been developed in an effort to find acoustic voice parameters that could be useful in psychiatric research. A minicomputer program was used to extract seven parameters from the fundamental frequency contour of tape-recorded speech samples: (1) the average rate of change of the fundamental frequency and (2) its standard deviation, (3) the absolute rate of fundamental frequency change, (4) the total reading time, (5) the percent pause time of the total reading time, (6) the mean, and (7) the standard deviation of the fundamental frequency distribution. The method is demonstrated on (a) a material consisting of synthetic speech and (b) voice recordings of depressed patients who were examined during depression and after improvement.

  14. Aircraft noise-induced awakenings are more reasonably predicted from relative than from absolute sound exposure levels.

    PubMed

    Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda

    2013-11-01

    Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening.

  15. Ground State Structure Search of Fluoroperovskites through Lattice Instability

    NASA Astrophysics Data System (ADS)

    Mei, W. N.; Hatch, D. M.; Stokes, H. T.; Boyer, L. L.

    2002-03-01

Many fluoroperovskites are capable of a ferroelectric transition from a cubic to a tetragonal or even lower-symmetry structure. In this work, we systematically studied the structural phase transitions of several fluoroperovskites ABF3, where A = Na, K and B = Ca, Sr. Combining the Self-Consistent Atomic Deformation (SCAD) method -- a density-functional method using localized densities -- with the frozen-phonon method, which utilizes the isotropy subgroup operations, we calculate the phonon energies and find instabilities which lower the symmetry of the crystal. Following this scheme, we work down to lower-symmetry structures until we no longer find instabilities. The final results are compared with those obtained from molecular dynamics based on Gordon-Kim potentials.

  16. Absolute extinction and the influence of environment - Dark cloud sight lines toward VCT 10, 30, and Walker 67

    NASA Technical Reports Server (NTRS)

    Cardelli, Jason A.; Clayton, Geoffrey C.

    1991-01-01

    The range of validity of the average absolute extinction law (AAEL) proposed by Cardelli et al. (1988 and 1989) is investigated, combining published visible and NIR data with IUE UV observations for three lines of sight through dense dark cloud environments with high values of total-to-selective extinction. The characteristics of the data sets and the reduction and parameterization methods applied are described in detail, and the results are presented in extensive tables and graphs. Good agreement with the AAEL is demonstrated for wavelengths from 3.4 microns to 250 nm, but significant deviations are found at shorter wavelengths (where previous studies of lines of sight through bright nebulosity found good agreement with the AAEL). These differences are attributed to the effects of coatings on small-bump and FUV grains.

  17. Dust-Corrected Star Formation Rates in Galaxies with Outer Rings

    NASA Astrophysics Data System (ADS)

    Kostiuk, I.; Silchenko, O.

    2018-03-01

The star formation rates SFR, as well as the SFR surface densities ΣSFR and absolute stellar magnitudes MAB, are determined and corrected for intrinsic dust absorption for 34 disk galaxies of early morphological types with an outer ring structure and ultraviolet emission from the ring. These characteristics are determined both for the outer ring structures and for the galaxies as a whole. Data from the space telescopes GALEX (in the NUV and FUV ultraviolet ranges) and WISE (in the W4 22 μm infrared band) are used. The average relative deviation between the corrected SFR and ΣSFR derived from the NUV and FUV bands is only 19.0%, so their averaged values are used for statistical consideration. The relations between the dust-corrected SFR characteristics, UV colours, galaxy morphological type, and absolute magnitude are illustrated.

  18. Photon scattering cross sections of H2 and He measured with synchrotron radiation

    NASA Technical Reports Server (NTRS)

    Ice, G. E.

    1977-01-01

    Total (elastic + inelastic) differential photon scattering cross sections have been measured for H2 gas and He, using an X-ray beam. Absolute measured cross sections agree with theory within the probable errors. Relative cross sections (normalized to theory at large S) agree to better than one percent with theoretical values calculated from wave functions that include the effect of electron-electron Coulomb correlation, but the data deviate significantly from theoretical independent-particle (e.g., Hartree-Fock) results. The ratios of measured absolute He cross sections to those of H2, at any given S, also agree to better than one percent with theoretical He-to-H2 cross-section ratios computed from correlated wave functions. It appears that photon scattering constitutes a very promising tool for probing electron correlation in light atoms and molecules.

  19. The use of fractional orders in the determination of birefringence of highly dispersive materials by the channelled spectrum method

    NASA Astrophysics Data System (ADS)

    Nagarajan, K.; Shashidharan Nair, C. K.

    2007-07-01

    The channelled spectrum employing polarized light interference is a very convenient method for studying the dispersion of birefringence. However, with this method the absolute order of the polarized light interference fringes cannot be determined easily, so approximate methods are used to estimate the order. One common approximation is that the dispersion of birefringence across neighbouring integer-order fringes is negligible. In this paper, we show how this approximation can cause errors. A modification is reported whereby the error in the determination of the absolute fringe order can be reduced by using fractional orders instead of integer orders. The theoretical background for this method, supported by computer simulation, is presented. An experimental arrangement implementing these modifications is described; it uses a Constant Deviation Spectrometer (CDS) and a Soleil-Babinet Compensator (SBC).

  20. Comparison of the temperature accuracy between smart phone based and high-end thermal cameras using a temperature gradient phantom

    NASA Astrophysics Data System (ADS)

    Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.

    2017-03-01

    Recently, low-cost smart phone based thermal cameras have been considered for use in a clinical setting for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom was developed based on thermistor heating at both ends of a black-coated metal strip to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with five software-controlled PT-1000 sensors using lookup tables. In this study, three FLIR-ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 up to 6 degrees between the cameras and the phantom. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smart phone based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements appropriate to the research question, provided regular calibration checks are performed for quality control.

  1. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate data predictions from different contributors to the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by examining the time series of differences between them and the subsequent IERS pole coordinate data. The mean absolute errors, standard deviations, skewness, and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase, computed by combining a Fourier transform band-pass filter with the Hilbert transform from pole coordinate data as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
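The error statistics used in this record are straightforward to compute. Below is a minimal sketch (not the EOPCPPP analysis code) of the mean absolute error, standard deviation, skewness, and excess kurtosis of a series of prediction-minus-observation differences; negative excess kurtosis indicates a platykurtic, positive a leptokurtic distribution:

```python
import numpy as np

def error_statistics(diffs):
    """Summary statistics for a series of prediction-minus-observation
    differences: mean absolute error, standard deviation, skewness,
    and excess kurtosis (0 for a normal distribution)."""
    d = np.asarray(diffs, dtype=float)
    mae = np.mean(np.abs(d))
    sd = np.std(d, ddof=1)
    z = (d - d.mean()) / d.std()   # standardized values (population sd)
    skew = np.mean(z**3)
    kurt = np.mean(z**4) - 3.0     # excess kurtosis
    return mae, sd, skew, kurt

# For a large normal sample, skewness and excess kurtosis are both near 0.
rng = np.random.default_rng(0)
mae, sd, skew, kurt = error_statistics(rng.normal(0.0, 1.0, 100_000))
```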

  2. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook

    2015-03-07

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set demonstrates that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes induced by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward the complete basis set limit by simply decreasing the scaling factor, regardless of the system.

  3. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as the wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework utilizing the area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
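The two fusion rules described in this record can be sketched in a few lines of NumPy. This toy operates on already-decomposed subbands and omits the dual-tree Contourlet transform itself; the 3 × 3 window size is an assumption, not taken from the paper:

```python
import numpy as np

def local_std(band, win=3):
    """Area-based standard deviation: std of each pixel's win x win
    neighbourhood (edges handled by reflection padding)."""
    pad = win // 2
    p = np.pad(band, pad, mode="reflect")
    # Stack all shifted views of the window and take the std over them.
    views = [p[i:i + band.shape[0], j:j + band.shape[1]]
             for i in range(win) for j in range(win)]
    return np.std(np.stack(views), axis=0)

def fuse_lowpass(a, b, win=3):
    """Weighted average of two low-pass bands, weights proportional to
    the local (area-based) standard deviation of each band."""
    sa, sb = local_std(a, win), local_std(b, win)
    w = sa / (sa + sb + 1e-12)
    return w * a + (1.0 - w) * b

def fuse_highpass(a, b):
    """'Max-absolute' rule: keep the coefficient of larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```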

  4. A QSPR model for prediction of diffusion coefficient of non-electrolyte organic compounds in air at ambient condition.

    PubMed

    Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi

    2012-03-01

    Evaluation of the diffusion coefficients of pure compounds in air is of great interest for many industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. A total of 4579 organic compounds from a broad spectrum of chemical families were investigated to propose a comprehensive and predictive model. The final model was derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results relative to existing experimental values, quantified by the following statistics: squared correlation coefficient = 0.9723, standard deviation error = 0.003, and average absolute relative deviation = 0.3%.

  5. Imaging of Spontaneous and Traumatic Cervical Artery Dissection : Comparison of Typical CT Angiographic Features.

    PubMed

    Sporns, Peter B; Niederstadt, Thomas; Heindel, Walter; Raschke, Michael J; Hartensuer, René; Dittrich, Ralf; Hanning, Uta

    2018-01-26

    Cervical artery dissection (CAD) is an important etiology of ischemic stroke, and early recognition is vital to protect patients from the major complication of cerebral embolization through administration of anticoagulants. The etiology of arterial dissections differs: they can be either spontaneous or traumatic. Even though the historical gold standard is still catheter angiography, recent studies suggest good performance of computed tomography angiography (CTA) for detection of CAD. We conducted this study to evaluate the variety and frequency of possible imaging signs of spontaneous and traumatic CAD and to guide neuroradiologists' decision making. We retrospectively reviewed the database of multiply injured patients admitted to the Department of Trauma, Hand, and Reconstructive Surgery of the University Hospital Münster in Germany (a level 1 trauma center) for patients with traumatic CAD (tCAD), and our stroke database (2008-2015) for patients with spontaneous CAD (sCAD) and CT/CTA in the initial clinical work-up. All images were evaluated for specific and sensitive radiological features of dissection by two experienced neuroradiologists, and imaging features were compared between the two etiologies. This study included 145 patients (99 male, 46 female; 45 ± 18.8 years of age), with 126 dissected arteries of traumatic and 43 of spontaneous etiology. Intimal flaps were more frequently observed after traumatic etiology (58.1% of tCADs, 6.9% of sCADs; p < 0.001); additionally, multivessel dissections were much more frequent in trauma patients (3 sCADs, 21 tCADs), and fewer than half (42%) of the patients with traumatic dissections showed cervical spine fractures. Neuroradiologists should be aware that intimal flaps and multivessel dissections are more common after a traumatic etiology. In addition, it seems important to conduct a CTA in a trauma setting even if no cervical spine fracture is detected.

  6. The Assessment of Cough in a Sarcoidosis Clinic Using a Validated instrument and a Visual Analog Scale.

    PubMed

    Judson, Marc A; Chopra, Amit; Conuel, Edward; Koutroumpakis, Efstratios; Schafer, Christopher; Austin, Adam; Zhang, Robert; Cao, Kerry; Berry, Rani; Khan, Malik M H S; Modi, Aakash; Modi, Ritu; Jou, Stephanie; Ilyas, Furqan; Yucel, Recai M

    2017-10-01

    Cough is a common symptom of pulmonary sarcoidosis. We analyzed the severity of cough and factors associated with cough in a university sarcoidosis clinic cohort. Consecutive patients completed the Leicester Cough Questionnaire (LCQ) and a cough visual analog scale (VAS), and clinical and demographic data were collected. Means of the LCQ were analyzed in patients who had multiple visits in terms of constant variables (e.g., race, sex). A total of 355 patients completed the LCQ and VAS at 874 visits. Cough was significantly worse in blacks than whites as determined by the LCQ-mean (16.5 ± 2.6 vs. 17.8 ± 3.0, p < 0.001) and VAS-mean (3.8 ± 3.0 vs. 2.0 ± 2.6, p < 0.0001). Cough was worse in women than men as measured by the VAS-mean (2.7 ± 2.9 vs. 2.2 ± 2.7, p = 0.002) and one of the LCQ domains (LCQ-Social-mean 5.4 ± 0.9 vs. 5.2 ± 1.0, p = 0.03), but not by the total LCQ-mean score. Cough was not significantly different by either measure in terms of smoking status, age, or spirometric parameters (FVC % predicted, FEV1 % predicted, FEV1/FVC). In a multivariable linear regression analysis, cough was significantly worse in blacks than whites and in pulmonary than non-pulmonary sarcoidosis with both cough measures, worse in women than men for the VAS only, and not significantly associated with spirometric parameters, Scadding stage, or age. The LCQ and VAS were strongly correlated. In a large university outpatient sarcoidosis cohort, cough was worse in blacks than whites. Cough was not statistically significantly different in terms of age, spirometric measures, Scadding stage, or smoking status. The LCQ correlated strongly with a visual analog scale for cough.

  7. Functional Linear Model with Zero-value Coefficient Function at Sub-regions.

    PubMed

    Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin

    2013-01-01

    We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking its values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. A one-stage procedure would require a large number of knots to precisely identify the zero-coefficient region, but the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity, and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the oracle property: it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator does when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
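For reference, the SCAD penalty recurring throughout these records is Fan and Li's smoothly clipped absolute deviation. A minimal sketch of the penalty and its closed-form univariate thresholding rule, using the conventional default a = 3.7 (illustrative only, not the authors' group-SCAD implementation):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty: linear (lasso-like) up to lam, quadratic blend up
    to a*lam, then constant, so large coefficients are not penalized."""
    b = np.abs(np.asarray(beta, dtype=float))
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            -(b**2 - 2 * a * lam * b + lam**2) / (2 * (a - 1)),
            (a + 1) * lam**2 / 2,
        ),
    )

def scad_threshold(z, lam, a=3.7):
    """Closed-form minimizer of 0.5*(z - beta)**2 + scad_penalty(beta)."""
    z = np.asarray(z, dtype=float)
    az = np.abs(z)
    soft = np.sign(z) * np.maximum(az - lam, 0.0)         # |z| <= 2*lam
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)  # up to a*lam
    return np.where(az <= 2 * lam, soft, np.where(az <= a * lam, mid, z))
```

Unlike the lasso, SCAD leaves large coefficients unshrunk (`scad_threshold(z) == z` for `|z| > a*lam`), which is the near-unbiasedness property these records rely on.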

  8. Age-specific absolute and relative organ weight distributions for B6C3F1 mice.

    PubMed

    Marino, Dale J

    2012-01-01

    The B6C3F1 mouse is the standard mouse strain used in toxicology studies conducted by the National Cancer Institute (NCI) and the National Toxicology Program (NTP). While numerous reports have been published on growth, survival, and tumor incidence, no overall compilation of organ weight data is available. Importantly, organ weight change is an endpoint used by regulatory agencies to develop toxicity reference values (TRVs) for use in human health risk assessments. Furthermore, physiologically based pharmacokinetic (PBPK) models, which utilize relative organ weights, are increasingly being used to develop TRVs. Therefore, all available absolute and relative organ weight data for untreated control B6C3F1 mice were collected from NCI/NTP studies in order to develop age-specific distributions. Results show that organ weights were collected more frequently in NCI/NTP studies at 2-wk (60 studies), 3-mo (147 studies), and 15-mo (40 studies) intervals than at other intervals, and more frequently from feeding and inhalation than drinking water studies. Liver, right kidney, lung, heart, thymus, and brain weights were most frequently collected. From the collected data, the mean and standard deviation for absolute and relative organ weights were calculated. Results show age-related increases in absolute liver, right kidney, lung, and heart weights and relatively stable brain and right testis weights. The results suggest a general variability trend in absolute organ weights of brain < right testis < right kidney < heart < liver < lung < spleen < thymus. This report describes the results of this effort.

  9. Solid-gas phase equilibria and thermodynamic properties of cadmium selenide.

    NASA Technical Reports Server (NTRS)

    Sigai, A. G.; Wiedemeier, H.

    1972-01-01

    Accurate vapor pressures are determined through direct weight loss measurements using the Knudsen effusion technique. The experimental data are evaluated by establishing the mode of vaporization and determining the heat capacity of cadmium selenide at elevated temperatures. Additional information is obtained through a second- and third-law evaluation of data, namely, the heat of formation and the absolute entropy of cadmium selenide. A preferential loss of selenium during the initial heating of CdSe is observed, which leads to a deviation in stoichiometry.

  10. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model, including uncertainty sets and fuzzy factors. Because of the fuzzy factors these problems are not well defined, and it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals, and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean absolute deviation and performing equivalent transformations.

  11. Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes

    NASA Technical Reports Server (NTRS)

    Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.

    1986-01-01

    Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.

  13. Checking ozone amounts by measurements of UV-irradiances

    NASA Technical Reports Server (NTRS)

    Seckmeyer, Gunther; Kettner, Christiane; Thiel, Stephen

    1994-01-01

    Absolute measurements of UV-irradiances in Germany and New Zealand are used to determine the total amounts of ozone. UV-irradiances measured and calculated for clear skies and for solar zenith angles less than 60 deg generally show good agreement. The UVB-irradiances, however, show that the actual Dobson values are about 5 percent higher in Germany and about 3 percent higher in New Zealand than those obtained by our method. Possible reasons for these deviations are discussed.

  14. Sex Ratios at Birth and Environmental Temperatures

    NASA Astrophysics Data System (ADS)

    Lerchl, Alexander

    The relationship between average monthly air temperature and sex ratios at birth (SRB) was analyzed for children born in Germany during the period 1946-1995. Both the absolute temperature and, more markedly, the monthly temperature deviations from the overall mean were significantly positively correlated with the SRB (P < 0.01) when temperatures were time-lagged against the SRB data by -10 or -11 months. It is concluded that the sex of the offspring is partially determined by environmental temperatures prior to conception.

  15. [Systemic approach to radiobiological studies].

    PubMed

    Bulanova, K Ia; Lobanok, L M

    2004-01-01

    The principles of information theory were applied to the analysis of radiobiological effects. Perceiving ionizing radiation as a signal enables a living organism to discern its benefit or harm, to react to absolute and relatively small deviations, to preserve the logic and chronology of events, to draw on past experience when reacting in the present, and to forecast consequences. A systemic analysis of the organism's response to ionizing radiation makes it possible to explain the peculiarities of the effects of different absorbed doses, hormesis, apoptosis, remote consequences, and other post-radiation effects.

  16. Measurement of the aerothermodynamic state in a high enthalpy plasma wind-tunnel flow

    NASA Astrophysics Data System (ADS)

    Hermann, Tobias; Löhle, Stefan; Zander, Fabian; Fasoulas, Stefanos

    2017-11-01

    This paper presents spatially resolved measurements of the absolute particle densities of N2, N2+, N, O, N+, O+, and e-, and of the excitation temperatures of the electronic, rotational, and vibrational modes of an air plasma free stream. All results are based on optical emission spectroscopy data. The measured parameters are combined to determine the local mass-specific enthalpy of the free stream. The analysis of the radiative transport, relative and absolute intensities, and spectral shape is used to determine various thermochemical parameters, and the model uncertainty of each analysis method is assessed. The plasma flow is shown to be close to equilibrium; the strongest deviations from equilibrium occur for the N, N+, and N2+ number densities in the free stream. Additional measurements of the local mass-specific enthalpy are conducted using a mass injection probe as well as a heat flux and total pressure probe. The agreement between all methods of enthalpy determination is good.

  17. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
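The evaluation measures named in this record are simple to compute. A minimal sketch (hypothetical data, not the study's models; the deseasonalization step is omitted) of the three error metrics and of one-step-ahead simple exponential smoothing:

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Mean squared error, mean absolute deviation, and mean absolute
    percentage error -- the three accuracy measures used to evaluate
    forecasting models."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    e = a - f
    mse = np.mean(e**2)
    mad = np.mean(np.abs(e))
    mape = 100.0 * np.mean(np.abs(e / a))  # assumes no zero actuals
    return mse, mad, mape

def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecasts: F[t] = alpha*A[t-1] + (1-alpha)*F[t-1],
    initialized with the first observed value."""
    f = [series[0]]
    for x in series[:-1]:
        f.append(alpha * x + (1 - alpha) * f[-1])
    return np.array(f)
```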

  18. ORBITAL SOLUTIONS AND ABSOLUTE ELEMENTS OF THE ECLIPSING BINARY EE AQUARII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wronka, Marissa Diehl; Gold, Caitlin; Sowell, James R.

    2010-04-15

    EE Aqr is a 7.9 mag Algol variable with a 12 hr orbital period. The Wilson-Devinney program is used to simultaneously solve 11 previously published light curves together with two existing radial velocity curves. The resulting masses are M1 = 2.24 ± 0.13 M_sun and M2 = 0.72 ± 0.04 M_sun, and the radii are R1 = 1.76 ± 0.03 R_sun and R2 = 1.10 ± 0.02 R_sun. The system has the lower-mass component completely filling its Roche lobe. Its distance from Hipparcos observations is 112 ± 10 pc. An improved ephemeris is derived, and no deviations in the period over time were seen. Light and velocity curve parameters, orbital elements, and absolute dimensions are presented, plus a comparison is made with previous solutions.

  19. Self-mixing instrument for simultaneous distance and speed measurement

    NASA Astrophysics Data System (ADS)

    Norgia, Michele; Melchionni, Dario; Pesatori, Alessandro

    2017-12-01

    A novel instrument based on self-mixing interferometry is proposed to simultaneously measure absolute distance and velocity. The measurement method is designed to work directly on any kind of surface in an industrial environment, also overcoming problems due to the speckle pattern effect. The laser pump current is modulated at a fairly high frequency (40 kHz), and the estimation of the induced fringe frequency allows an almost instantaneous measurement (measurement time of 25 μs). Real-time digital processing handles the measurement data and discards unreliable measurements. The simultaneous measurement reaches a relative standard deviation of about 4·10⁻⁴ in absolute distance and 5·10⁻³ in velocity. Three different laser sources are tested and compared. The instrument also shows good performance in harsh environments, for example when measuring the movement of an opaque iron tube rotating under a running water flow.

  20. Early results from the Far Infrared Absolute Spectrophotometer (FIRAS)

    NASA Technical Reports Server (NTRS)

    Mather, J. C.; Cheng, E. S.; Shafer, R. A.; Eplee, R. E.; Isaacman, R. B.; Fixsen, D. J.; Read, S. M.; Meyer, S. S.; Weiss, R.; Wright, E. L.

    1991-01-01

    The Far Infrared Absolute Spectrophotometer (FIRAS) on the Cosmic Background Explorer (COBE) mapped 98 percent of the sky, 60 percent of it twice, before the liquid helium coolant was exhausted. The FIRAS covers the frequency region from 1 to 100/cm with a 7 deg angular resolution. The spectral resolution is 0.2/cm for frequencies less than 20/cm and 0.8/cm for higher frequencies. Preliminary results include: a limit on deviations from a Planck curve of 1 percent of the peak brightness from 1 to 20/cm, a temperature of 2.735 ± 0.06 K, a limit on the Comptonization parameter y of 0.001 and on the chemical potential parameter mu of 0.01, a strong limit on the existence of a hot smooth intergalactic medium, and a confirmation that the dipole anisotropy spectrum is that of a Doppler-shifted blackbody.

  1. Real-Gas Correction Factors for Hypersonic Flow Parameters in Helium

    NASA Technical Reports Server (NTRS)

    Erickson, Wayne D.

    1960-01-01

    The real-gas hypersonic flow parameters for helium have been calculated for stagnation temperatures from 0 F to 600 F and stagnation pressures up to 6,000 pounds per square inch absolute. The results of these calculations are presented in the form of simple correction factors which must be applied to the tabulated ideal-gas parameters. It has been shown that the deviations from the ideal-gas law which exist at high pressures may cause a correspondingly significant error in the hypersonic flow parameters when calculated as an ideal gas. For example, the ratio of the free-stream static to stagnation pressure as calculated from the thermodynamic properties of helium for a stagnation temperature of 80 F and a pressure of 4,000 pounds per square inch absolute was found to be approximately 13 percent greater than that determined from the ideal-gas tabulation with a specific heat ratio of 5/3.

  2. The NIST Detector-Based Luminous Intensity Scale

    PubMed Central

    Cromer, C. L.; Eppeldauer, G.; Hardis, J. E.; Larason, T. C.; Ohno, Y.; Parr, A. C.

    1996-01-01

    The Système International des Unités (SI) base unit for photometry, the candela, has been realized by using absolute detectors rather than absolute sources. This change in method permits luminous intensity calibrations of standard lamps to be carried out with a relative expanded uncertainty (coverage factor k = 2, and thus a 2 standard deviation estimate) of 0.46 %, almost a factor-of-two improvement. A group of eight reference photometers has been constructed with silicon photodiodes, matched with filters to mimic the spectral luminous efficiency function for photopic vision. The wide dynamic range of the photometers aids in their calibration. The components of the photometers were carefully measured and selected to reduce the sources of error and to provide baseline data for aging studies. Periodic remeasurement of the photometers indicates that a yearly recalibration is required. The design, characterization, calibration, evaluation, and application of the photometers are discussed. PMID:27805119

  3. Absolute S- and P-plane polarization efficiencies for high frequency holographic gratings in the VUV

    NASA Technical Reports Server (NTRS)

    Caruso, A. J.; Woodgate, B. E.; Mount, G. H.

    1981-01-01

    High frequency plane gratings (3500 and 3600 gr/mm) have been holographically ruled and blazed for the VUV spectral region. All gratings were coated with 70 nm Al + 25 nm MgF2. Absolute unpolarized and S- and P-plane polarization efficiencies have been measured for the first and second orders in the 120- to 450-nm spectral region at 18.5 and 30 deg angles of deviation. For deep grooves, anomalous features are more pronounced for the P-plane polarization efficiency than for the S-plane polarization efficiency. Holographic gratings can be tailored to produce high polarization or low polarization in the VUV. For comparison, efficiencies and polarization of the best conventional high frequency gratings were also determined. Measurements show that scattered light is significantly lower for holographic gratings in the VUV when compared with the conventional gratings.

  4. Position of the station Borowiec in the Doppler observation campaign WEDOC 80

    NASA Astrophysics Data System (ADS)

    Pachelski, W.

    The position of the Doppler antenna located at the Borowiec Observatory, Poland, is analyzed based on data gathered during the WEDOC 80 study and an earlier study in 1977. Among other findings, it is determined that biases of the reference system origin can be partially eliminated by transforming absolute coordinates of two or more stations into station-to-station vector components, and by determining the vector length while the system scale remains affected by broadcast ephemerides. The standard deviations of absolute coordinates are shown to represent only the internal accuracy of the solution, and are found to depend on the geometrical configuration between the station position and the satellite passes. It is shown that significant correlations between station coordinates in translocation or multilocation are due to the poor conditioning of design matrices with respect to the origin and orientation of the reference system.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Lu, S; Qin, Y

    Purpose: To evaluate the dosimetric uncertainty associated with Gafchromic (EBT3) films and establish an absolute dosimetry protocol for Stereotactic Radiosurgery (SRS) and Stereotactic Body Radiotherapy (SBRT). Methods: EBT3 films were irradiated at each of seven different dose levels between 1 and 15 Gy with open fields, and standard deviations of dose maps were calculated at each color channel for evaluation. A scanner non-uniform response correction map was built by registering and comparing film doses to the reference diode array-based dose map delivered with the same doses. To determine the temporal dependence of EBT3 films, the average correction factors of different dose levels as a function of time were evaluated up to four days after irradiation. An integrated film dosimetry protocol was developed for dose calibration, calibration curve fitting, dose mapping, and profile/gamma analysis. Patient specific quality assurance (PSQA) was performed for 93 SRS/SBRT treatment plans. Results: The scanner response varied within 1% for field sizes less than 5 × 5 cm{sup 2}, and up to 5% for field sizes of 10 × 10 cm{sup 2}. The scanner correction method was able to remove the visually evident, irregular detector responses found for larger field sizes. The dose response of the film changed rapidly (∼10%) in the first two hours and plateaued afterwards, with a ∼3% change between 2 and 24 hours. The mean uncertainties (mean of the standard deviations) were <0.5% over the dose range 1∼15 Gy for all color channels for the OD response curves. The percentage of points passing the 3%/1mm gamma criteria based on absolute dose analysis, averaged over all tests, was 95.0 ± 4.2. Conclusion: We have developed an absolute film dosimetry protocol using EBT3 films. The overall uncertainty has been established to be approximately 1% for SRS and SBRT PSQA. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.

  6. Absolute backscatter coefficient estimates of tissue-mimicking phantoms in the 5–50 MHz frequency range

    PubMed Central

    McCormick, Matthew M.; Madsen, Ernest L.; Deaner, Meagan E.; Varghese, Tomy

    2011-01-01

    Absolute backscatter coefficients in tissue-mimicking phantoms were experimentally determined in the 5–50 MHz frequency range using a broadband technique. A focused broadband transducer from a commercial research system, the VisualSonics Vevo 770, was used with two tissue-mimicking phantoms. The phantoms differed regarding the thin layers covering their surfaces to prevent desiccation and regarding glass bead concentrations and diameter distributions. Ultrasound scanning of these phantoms was performed through the thin layer. To avoid signal saturation, the power spectra obtained from the backscattered radio frequency signals were calibrated by using the signal from a liquid planar reflector, a water-brominated hydrocarbon interface with acoustic impedance close to that of water. Experimental values of absolute backscatter coefficients were compared with those predicted by the Faran scattering model over the frequency range 5–50 MHz. The mean percent difference and standard deviation were 54% ± 45% for the phantom with a mean glass bead diameter of 5.40 μm and 47% ± 28% for the phantom with 5.16 μm mean diameter beads. PMID:21877789

  7. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, and exact-fit rate, and the lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field, so that our findings can be easily generalized to real settings. Applications of the methodology are demonstrated by empirical analyses on data from a well-known alcohol study. PMID:21563207
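
    The SCAD penalty that recurs throughout these records has a standard closed form (due to Fan and Li): linear like the lasso near zero, quadratically tapering in a transition region, and constant beyond aλ, so large coefficients are not over-shrunk. A direct transcription of that textbook formula with the conventional default a = 3.7 (a sketch, not code from this study):

```python
import numpy as np

def scad_penalty(beta, lam=1.0, a=3.7):
    """SCAD penalty of Fan and Li: linear (lasso-like) for |beta| <= lam,
    quadratically tapering for lam < |beta| <= a*lam, and constant beyond
    a*lam, so large effects are not over-shrunk."""
    b = np.abs(np.asarray(beta, dtype=float))
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
                    lam ** 2 * (a + 1) / 2))
```

    The three branches join continuously at |beta| = lam and |beta| = a*lam, which is what makes the penalty "smoothly clipped".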

  8. High levels of ultraviolet radiation observed by ground-based instruments below the 2011 Arctic ozone hole

    NASA Astrophysics Data System (ADS)

    Bernhard, G.; Dahlback, A.; Fioletov, V.; Heikkilä, A.; Johnsen, B.; Koskela, T.; Lakkala, K.; Svendby, T.

    2013-11-01

    Greatly increased levels of ultraviolet (UV) radiation were observed at thirteen Arctic and sub-Arctic ground stations in the spring of 2011, when the ozone abundance in the Arctic stratosphere dropped to the lowest amounts on record. Measurements of the noontime UV Index (UVI) during the low-ozone episode exceeded the climatological mean by up to 77% at locations in the western Arctic (Alaska, Canada, Greenland) and by up to 161% in Scandinavia. The UVI measured at the end of March at the Scandinavian sites was comparable to that typically observed 15-60 days later in the year when solar elevations are much higher. The cumulative UV dose measured during the period of the ozone anomaly exceeded the climatological mean by more than two standard deviations at 11 sites. Enhancements beyond three standard deviations were observed at seven sites and increases beyond four standard deviations at two sites. At the western sites, the episode occurred in March, when the Sun was still low in the sky, limiting absolute UVI anomalies to less than 0.5 UVI units. At the Scandinavian sites, absolute UVI anomalies ranged between 1.0 and 2.2 UVI units. For example, at Finse, Norway, the noontime UVI on 30 March was 4.7, while the climatological UVI is 2.5. Although a UVI of 4.7 is still considered moderate, UV levels of this amount can lead to sunburn and photokeratitis during outdoor activity when radiation is reflected upward by snow towards the face of a person or animal. At the western sites, UV anomalies can be well explained with ozone anomalies of up to 41% below the climatological mean. At the Scandinavian sites, low ozone can only explain a UVI increase of 50-60%. The remaining enhancement was mainly caused by the absence of clouds during the low-ozone period.

  10. Comparison of the Thermal Expansion Behavior of Several Intermetallic Silicide Alloys Between 293 and 1523 K

    NASA Technical Reports Server (NTRS)

    Raj, Sai V.

    2014-01-01

    Thermal expansion measurements were conducted on hot-pressed CrSi(sub 2), TiSi(sub 2), WSi(sub 2) and a two-phase Cr-Mo-Si intermetallic alloy between 293 and 1523 K during three heat-cool cycles. The corrected thermal expansion, (deltaL/L(sub 0))(sub thermal), varied with the absolute temperature, T, as (deltaL/L(sub 0))(sub thermal) = A(T-293)(sup 3) + B(T-293)(sup 2) + C(T-293) + D, where A, B, C and D are regression constants. Excellent reproducibility was observed for most of the materials after the first heat-up cycle. In some cases, the data from the first heat-up cycle deviated from those determined in the subsequent cycles. This deviation was attributed to the presence of residual stresses developed during processing, which are relieved after the first heat-up cycle.
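
    The cubic regression above is a routine least-squares polynomial fit. A minimal sketch with synthetic data (the coefficients below are illustrative placeholders, not the paper's measured constants):

```python
import numpy as np

# Synthetic strain data following the cubic form
# (deltaL/L0)_thermal = A*(T-293)^3 + B*(T-293)^2 + C*(T-293) + D;
# A_true..D_true are illustrative placeholders, not measured values.
A_true, B_true, C_true, D_true = 1e-12, 2e-9, 8e-6, 0.0
T = np.linspace(293.0, 1523.0, 50)     # absolute temperature, K
x = T - 293.0
strain = A_true * x**3 + B_true * x**2 + C_true * x + D_true

# Least-squares cubic regression recovers the four constants.
A, B, C, D = np.polyfit(x, strain, deg=3)
```

    Regressing on (T - 293) rather than T directly keeps the constants in the same form reported in the abstract.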

  11. Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching

    PubMed Central

    2012-01-01

    Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
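
    The core of the approach can be sketched as fitting a spectrum by a non-negative combination of templates and thresholding the fitted amplitudes. The toy Gaussian "templates" and synthetic spectrum below stand in for the isotope-pattern templates of the IPPD package; this is not the package's actual implementation:

```python
import numpy as np
from scipy.optimize import nnls

def gaussian(x, mu, sigma=0.02):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic m/z axis and two peak templates at known positions; in IPPD the
# templates imitate full isotope patterns rather than single Gaussians.
mz = np.linspace(1000.0, 1002.0, 400)
templates = np.column_stack([gaussian(mz, 1000.5), gaussian(mz, 1001.0)])

# Observed spectrum: an overlap of both templates plus a little noise.
rng = np.random.default_rng(0)
spectrum = (3.0 * templates[:, 0] + 1.5 * templates[:, 1]
            + 0.01 * rng.standard_normal(mz.size))

# Non-negative least squares recovers the template amplitudes; thresholding
# the fitted amplitudes then performs feature selection without regularization.
amplitudes, residual_norm = nnls(templates, spectrum)
picked = amplitudes > 0.1
```

    The non-negativity constraint alone disentangles the overlap here, which mirrors the paper's finding that explicit regularization is unnecessary.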

  12. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections.

    PubMed

    Zhang, You; Yin, Fang-Fang; Ren, Lei

    2015-08-01

    Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. 
The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
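
    The 3%/1 mm gamma analysis cited above combines a dose-difference criterion with a distance-to-agreement criterion. A toy one-dimensional version of the metric (real gamma analysis runs on 2-D/3-D dose grids with interpolation; this simplified sketch only illustrates the computation):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dd=0.03, dta=1.0):
    """Fraction of reference points passing a 1-D gamma test.

    dd  -- dose-difference criterion as a fraction of the reference maximum
    dta -- distance-to-agreement criterion in mm
    """
    d_max = dose_ref.max()
    passed = 0
    for r, d in zip(positions, dose_ref):
        # Gamma at this point: minimum combined dose/distance metric
        # over every point of the evaluated distribution.
        dose_term = ((dose_eval - d) / (dd * d_max)) ** 2
        dist_term = ((positions - r) / dta) ** 2
        gamma = np.sqrt(dose_term + dist_term).min()
        passed += gamma <= 1.0
    return passed / dose_ref.size

x = np.linspace(0.0, 50.0, 101)              # positions in mm
profile = np.exp(-((x - 25.0) / 8.0) ** 2)   # Gaussian-like dose profile
rate = gamma_pass_rate(profile, profile, x)  # identical profiles pass everywhere
```

    A point passes when some nearby evaluated point agrees in dose, in position, or in a weighted combination of both.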

  13. Approximations to complete basis set-extrapolated, highly correlated non-covalent interaction energies.

    PubMed

    Mackie, Iain D; DiLabio, Gino A

    2011-10-07

    The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster single double, coupled-cluster single double (triple) (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)∕aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)∕aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. 
    With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Applying this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of the naphthalene dimer is predicted by the composite approach to be 9% greater than that previously reported in the literature. The ability of some recently developed dispersion-corrected density-functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics.

  14. CAD/CAM produces dentures with improved fit.

    PubMed

    Steinmassl, Otto; Dumfahrt, Herbert; Grunert, Ingrid; Steinmassl, Patricia-Anca

    2018-02-22

    Resin polymerisation shrinkage reduces the congruence of the denture base with the denture-bearing tissues and thereby decreases the retention of conventionally fabricated dentures. CAD/CAM denture manufacturing is a subtractive process, so polymerisation shrinkage is no longer an issue. CAD/CAM dentures are therefore assumed to show higher denture base congruence than conventionally fabricated dentures. The aim of this study was to test this hypothesis. CAD/CAM dentures provided by four different manufacturers (AvaDent, Merz Dental, Whole You, Wieland/Ivoclar) were generated from ten different master casts. Ten conventional dentures (pack and press, long-term heat polymerisation) made from the same master casts served as the control group. The master casts and all denture bases were scanned and matched digitally. The absolute incongruences were measured using a 2-mm mesh. Conventionally fabricated dentures showed a mean deviation from the master cast of 0.105 mm (SD = 0.019). All CAD/CAM dentures showed lower mean incongruences. Of the CAD/CAM dentures, AvaDent Digital Dentures showed the highest congruence with the master cast surface, with a mean deviation of 0.058 mm (SD = 0.005). Wieland Digital Dentures showed a mean deviation of 0.068 mm (SD = 0.005), Whole You Nexteeth prostheses a mean deviation of 0.074 mm (SD = 0.011), and Baltic Denture System prostheses a mean deviation of 0.086 mm (SD = 0.012). CAD/CAM produces dentures with a better fit than conventional dentures. This finding explains the clinically observed enhanced retention and lower frequency of traumatic ulcers with CAD/CAM dentures.

  15. Cone-Beam Computed Tomography Assessment of Lower Facial Asymmetry in Unilateral Cleft Lip and Palate and Non-Cleft Patients with Class III Skeletal Relationship.

    PubMed

    Lin, Yifan; Chen, Gui; Fu, Zhen; Ma, Lian; Li, Weiran

    2015-01-01

    To evaluate, using cone-beam computed tomography (CBCT), both the condylar-fossa relationships and the mandibular and condylar asymmetries between unilateral cleft lip and palate (UCLP) patients and non-cleft patients with class III skeletal relationship, and to investigate the factors of asymmetry contributing to chin deviation. The UCLP and non-cleft groups consisted of 30 and 40 subjects, respectively, in mixed dentition with class III skeletal relationships. Condylar-fossa relationships and the dimensional and positional asymmetries of the condyles and mandibles were examined using CBCT. Intra-group differences were compared between two sides in both groups using a paired t-test. Furthermore, correlations between each measurement and chin deviation were assessed. It was observed that 90% of UCLP and 67.5% of non-cleft subjects had both condyles centered, and no significant asymmetry was found. The axial angle and the condylar center distances to the midsagittal plane were significantly greater on the cleft side than on the non-cleft side (P=0.001 and P=0.028, respectively) and were positively correlated with chin deviation in the UCLP group. Except for a larger gonial angle on the cleft side, the two groups presented with consistent asymmetries showing shorter mandibular bodies and total mandibular lengths on the cleft (deviated) side. The average chin deviation was 1.63 mm to the cleft side, and the average absolute chin deviation was significantly greater in the UCLP group than in the non-cleft group (P=0.037). Compared with non-cleft subjects with similar class III skeletal relationships, the subjects with UCLP showed more severe lower facial asymmetry. The subjects with UCLP presented with more asymmetrical positions and rotations of the condyles on axial slices, which were positively correlated with chin deviation.

  16. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI estimated by nonparametric quantile regression to that of the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform, at quantile levels of 0.8 and 0.9, for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of the nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
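
    At its simplest, the nonparametric TDI is an empirical quantile of the absolute paired differences (quantile regression reduces to this when there are no covariates). A minimal illustration with synthetic rater data (not the study's simulation design):

```python
import numpy as np

def tdi_nonparametric(x, y, p=0.9):
    """p-th empirical quantile of |x - y|: with probability ~p, two paired
    measurements disagree by no more than this amount."""
    return np.quantile(np.abs(np.asarray(x) - np.asarray(y)), p)

# Synthetic paired readings from two "raters" of the same quantity.
rng = np.random.default_rng(42)
rater_a = rng.normal(10.0, 1.0, size=500)
rater_b = rater_a + rng.normal(0.0, 0.5, size=500)   # differences ~ N(0, 0.5)

tdi_80 = tdi_nonparametric(rater_a, rater_b, p=0.8)
tdi_90 = tdi_nonparametric(rater_a, rater_b, p=0.9)
```

    Under normality the parametric TDI of Lin (2000) would use the quantile of a folded normal instead of the empirical quantile; the empirical version needs no distributional assumption.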

  17. Comparing airborne and satellite retrievals of cloud optical thickness and particle effective radius using a spectral radiance ratio technique: two case studies for cirrus and deep convective clouds

    NASA Astrophysics Data System (ADS)

    Krisna, Trismono C.; Wendisch, Manfred; Ehrlich, André; Jäkel, Evelyn; Werner, Frank; Weigel, Ralf; Borrmann, Stephan; Mahnke, Christoph; Pöschl, Ulrich; Andreae, Meinrat O.; Voigt, Christiane; Machado, Luiz A. T.

    2018-04-01

    Solar radiation reflected by cirrus and deep convective clouds (DCCs) was measured by the Spectral Modular Airborne Radiation Measurement System (SMART) installed on the German High Altitude and Long Range Research Aircraft (HALO) during the Mid-Latitude Cirrus (ML-CIRRUS) and the Aerosol, Cloud, Precipitation, and Radiation Interaction and Dynamic of Convective Clouds System - Cloud Processes of the Main Precipitation Systems in Brazil: A Contribution to Cloud Resolving Modelling and to the Global Precipitation Measurement (ACRIDICON-CHUVA) campaigns. On particular flights, HALO performed measurements closely collocated with overpasses of the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite. A cirrus cloud located above liquid water clouds and a DCC topped by an anvil cirrus are analyzed in this paper. Based on the nadir spectral upward radiance measured above the two clouds, the optical thickness τ and particle effective radius reff of the cirrus and DCC are retrieved using a radiance ratio technique, which considers the cloud thermodynamic phase, the vertical profile of cloud microphysical properties, the presence of multilayer clouds, and the heterogeneity of the surface albedo. For the cirrus case, the comparison of τ and reff retrieved on the basis of SMART and MODIS measurements yields a normalized mean absolute deviation of up to 1.2 % for τ and 2.1 % for reff. For the DCC case, deviations of up to 3.6 % for τ and 6.2 % for reff are obtained. The larger deviations in the DCC case are mainly attributed to the fast cloud evolution and three-dimensional (3-D) radiative effects. Measurements of spectral upward radiance at near-infrared wavelengths are employed to investigate the vertical profile of reff in the cirrus. The retrieved values of reff are compared with corresponding in situ measurements using a vertical weighting method. 
    Compared to the MODIS observations, measurements of SMART provide more information on the vertical distribution of particle sizes, allowing the profile of reff to be reconstructed close to the cloud top. The comparison between retrieved and in situ reff yields a normalized mean absolute deviation ranging between 1.5 % and 10.3 % and a robust correlation coefficient of 0.82.

  18. The influence of cooling forearm/hand and gender on estimation of handgrip strength.

    PubMed

    Cheng, Chih-Chan; Shih, Yuh-Chuan; Tsai, Yue-Jin; Chi, Chia-Fen

    2014-01-01

    Handgrip strength is essential in manual operations and activities of daily life, but the influence of forearm/hand skin temperature on estimation of handgrip strength is not well documented. Therefore, the present study investigated the effect of local cooling of the forearm/hand on estimation of handgrip strength at various target force levels (TFLs, in percentage of MVC) for both genders. A cold pressor test was used to lower and maintain the hand skin temperature at 14°C for comparison with the uncooled condition. A total of 10 male and 10 female participants were recruited. The results indicated that females had greater absolute estimation deviations. In addition, both genders had greater absolute deviations in the middle range of TFLs. Cooling caused an underestimation of grip strength. Furthermore, a power function is recommended for establishing the relationship between actual and estimated handgrip force. Statement of relevance: Manipulation with grip strength is essential in daily life and the workplace, so it is important to understand the influence of lowering the forearm/hand skin temperature on grip-strength estimation. Females and the middle range of TFLs had greater deviations. Cooling the forearm/hand tended to cause underestimation, and a power function is recommended for establishing the relationship between actual and estimated handgrip force. Practitioner Summary: It is important to understand the effect of lowering the forearm/hand skin temperature on grip-strength estimation. A cold pressor was used to cool the hand. The cooling caused underestimation, and a power function is recommended for establishing the relationship between actual and estimated handgrip force.
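
    The recommended power function, estimated = a·(actual)^b, can be fitted by ordinary least squares after a log-log transform. A sketch with synthetic, noise-free data (the coefficient and exponent are illustrative placeholders, not values from the study):

```python
import numpy as np

# Synthetic actual vs. estimated grip forces following estimated = a * actual**b
# (a_true and b_true are illustrative placeholders, not values from the study).
a_true, b_true = 1.8, 0.85
actual = np.linspace(50.0, 400.0, 30)     # actual grip force, in newtons
estimated = a_true * actual ** b_true

# Linearize: log(estimated) = log(a) + b*log(actual), then fit a line.
b_fit, log_a_fit = np.polyfit(np.log(actual), np.log(estimated), deg=1)
a_fit = np.exp(log_a_fit)
```

    An exponent b < 1 corresponds to the reported pattern of underestimation growing with force magnitude.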

  19. SNPP VIIRS RSB Earth View Reflectance Uncertainty

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Twedt, Kevin; McIntire, Jeff; Xiong, Xiaoxiong

    2017-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite uses its 14 reflective solar bands to passively collect solar radiant energy reflected off the Earth. The Level 1 product is the geolocated and radiometrically calibrated top-of-the-atmosphere solar reflectance. The absolute radiometric uncertainty associated with this product includes contributions from the noise associated with measured detector digital counts and the radiometric calibration bias. Here, we provide a detailed algorithm for calculating the estimated standard deviation of the retrieved top-of-the-atmosphere spectral solar radiation reflectance.

  20. High-speed rangefinder for industrial application

    NASA Astrophysics Data System (ADS)

    Cavedo, Federico; Pesatori, Alessandro; Norgia, Michele

    2016-06-01

    The proposed work aims to improve one of the most widely used telemetry techniques for absolute distance measurement: time-of-flight telemetry. The main limitations of low-cost implementations of this technique are low accuracy (a few millimetres) and low measurement rate (a few measurements per second). To overcome these limits, we modified the typical rangefinder setup by exploiting low-cost telecommunication transceivers and radiofrequency synthesizers. The obtained performance is very encouraging, reaching a standard deviation of a few micrometers over a range of some meters.

  1. Hyperfine-resolved transition frequency list of fundamental vibration bands of H35Cl and H37Cl

    NASA Astrophysics Data System (ADS)

    Iwakuni, Kana; Sera, Hideyuki; Abe, Masashi; Sasada, Hiroyuki

    2014-12-01

    Sub-Doppler resolution spectroscopy of the fundamental vibration bands of H35Cl and H37Cl has been carried out from 87.1 to 89.9 THz. We have determined the absolute transition frequencies of the hyperfine-resolved R(0) to R(4) transitions with a typical uncertainty of 10 kHz. We have also determined six molecular constants for each isotopomer in the vibrational excited state, which reproduce the determined frequencies with a standard deviation of about 10 kHz.

  2. Determination of carrier concentration and compensation microprofiles in GaAs

    NASA Technical Reports Server (NTRS)

    Jastrzebski, L.; Lagowski, J.; Walukiewicz, W.; Gatos, H. C.

    1980-01-01

    Simultaneous microprofiling of semiconductor free carrier, donor, and acceptor concentrations was achieved for the first time from the absolute value of the free carrier absorption coefficient and its wavelength dependence determined by IR absorption in a scanning mode. Employing Ge- and Si-doped melt-grown GaAs, striking differences were found between the variations of electron concentration and those of ionized impurity concentrations. These results showed clearly that the electronic characteristics of this material are controlled by amphoteric doping and deviations from stoichiometry rather than by impurity segregation.

  3. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    PubMed

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies for different wavelengths of light, and thus the same incident light is refracted differently at different wavelengths. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.

  4. The European Comparison of Absolute Gravimeters 2011 (ECAG-2011) in Walferdange, Luxembourg: results and recommendations

    NASA Astrophysics Data System (ADS)

    Francis, Olivier; Baumann, Henri; Volarik, Tomas; Rothleitner, Christian; Klein, Gilbert; Seil, Marc; Dando, Nicolas; Tracey, Ray; Ullrich, Christian; Castelein, Stefaan; Hua, Hu; Kang, Wu; Chongyang, Shen; Songbo, Xuan; Hongbo, Tan; Zhengyuan, Li; Pálinkás, Vojtech; Kostelecký, Jakub; Mäkinen, Jaakko; Näränen, Jyri; Merlet, Sébastien; Farah, Tristan; Guerlin, Christine; Pereira Dos Santos, Franck; Le Moigne, Nicolas; Champollion, Cédric; Deville, Sabrina; Timmen, Ludger; Falk, Reinhard; Wilmes, Herbert; Iacovone, Domenico; Baccaro, Francesco; Germak, Alessandro; Biolcati, Emanuele; Krynski, Jan; Sekowski, Marcin; Olszak, Tomasz; Pachuta, Andrzej; Agren, Jonas; Engfeldt, Andreas; Reudink, René; Inacio, Pedro; McLaughlin, Daniel; Shannon, Geoff; Eckl, Marc; Wilkins, Tim; van Westrum, Derek; Billson, Ryan

    2013-06-01

    We present the results of the third European Comparison of Absolute Gravimeters held in Walferdange, Grand Duchy of Luxembourg, in November 2011. Twenty-two gravimeters from both metrological and non-metrological institutes are compared. For the first time, corrections for the laser beam diffraction and the self-attraction of the gravimeters are implemented. The gravity observations are also corrected for geophysical gravity changes that occurred during the comparison using the observations of a superconducting gravimeter. We show that these corrections improve the degree of equivalence between the gravimeters. We present the results for two different combinations of data. In the first one, we use only the observations from the metrological institutes. In the second solution, we include all the data from both metrological and non-metrological institutes. Those solutions are then compared with the official result of the comparison published previously and based on the observations of the metrological institutes and the gravity differences at the different sites as measured by non-metrological institutes. Overall, the absolute gravity meters agree with one another with a standard deviation of 3.1 µGal. Finally, the results of this comparison are linked to previous ones. We conclude with some important recommendations for future comparisons.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X.; Senftleben, A.; Pflueger, T.

    Absolutely normalized (e,2e) measurements for H₂ and He covering the full solid angle of one ejected electron are presented for 16 eV sum energy of both final-state continuum electrons. For both targets, rich cross-section structures in addition to the binary and recoil lobes are identified and studied as a function of the fixed electron's emission angle and the energy sharing among both electrons. For H₂ their behavior is consistent with multiple scattering of the projectile as discussed before [Al-Hagan et al., Nature Phys. 5, 59 (2009)]. For He the binary and recoil lobes are significantly larger than for H₂ and partly cover the multiple-scattering structures. To highlight these patterns we propose an alternative representation of the triply differential cross section. Nonperturbative calculations are in good agreement with the He results and show discrepancies for H₂ in the recoil-peak region. For H₂ a perturbative approach reasonably reproduces the cross-section shape but deviates in absolute magnitude.

  6. Frequency comb calibrated frequency-sweeping interferometry for absolute group refractive index measurement of air.

    PubMed

    Yang, Lijun; Wu, Xuejian; Wei, Haoyun; Li, Yan

    2017-04-10

    The absolute group refractive index of air at 194061.02 GHz is measured in real time using frequency-sweeping interferometry calibrated by an optical frequency comb. The group refractive index of air is calculated from the calibration peaks of the laser frequency variation and the interference signal of the two beams passing through the inner and outer regions of a vacuum cell when the frequency of a tunable external-cavity diode laser is scanned. We continuously measure the refractive index of air for 2 h, which shows that the difference between the measured results and Ciddor's equation is less than 9.6×10⁻⁸, and the standard deviation of that difference is 5.9×10⁻⁸. The relative uncertainty of the measured refractive index of air is estimated to be 8.6×10⁻⁸. The data update rate is 0.2 Hz, making the method applicable under conditions in which the air refractive index fluctuates rapidly.

  7. Corsica: A Multi-Mission Absolute Calibration Site

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.

    2013-09-01

    In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to a smaller time series, the standard error is about double (~7 mm). In this paper, we present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.

  8. Predicting Stability Constants for Uranyl Complexes Using Density Functional Theory

    DOE PAGES

    Vukovic, Sinisa; Hay, Benjamin P.; Bryantsev, Vyacheslav S.

    2015-04-02

    The ability to predict the equilibrium constants for the formation of 1:1 uranyl:ligand complexes (log K₁ values) provides the essential foundation for the rational design of ligands with enhanced uranyl affinity and selectivity. We use density functional theory (B3LYP) and the IEFPCM continuum solvation model to compute aqueous stability constants for UO₂²⁺ complexes with 18 donor ligands. Theoretical calculations permit reasonably good estimates of relative binding strengths, while the absolute log K₁ values are significantly overestimated. Accurate predictions of the absolute log K₁ values (root-mean-square deviation from experiment < 1.0 for log K₁ values ranging from 0 to 16.8) can be obtained by fitting the experimental data for two groups of mono- and divalent negative oxygen donor ligands. The utility of the correlations is demonstrated for amidoxime and imide dioxime ligands, providing a useful means of screening for new ligands with strong chelating capability toward uranyl.
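    The fitting step described above can be sketched as a simple linear recalibration of computed values against experiment; the numbers below are invented for illustration and are not the paper's data.

    ```python
    # Hypothetical sketch: DFT-computed log K1 values overestimate experiment,
    # so fit a linear map from computed to experimental values and report the
    # root-mean-square deviation (RMSD) of the corrected predictions.
    import numpy as np

    def fit_linear_correction(computed, experimental):
        """Least-squares fit experimental ~ a*computed + b; return (a, b, rmsd)."""
        computed = np.asarray(computed, dtype=float)
        experimental = np.asarray(experimental, dtype=float)
        a, b = np.polyfit(computed, experimental, 1)
        corrected = a * computed + b
        rmsd = float(np.sqrt(np.mean((corrected - experimental) ** 2)))
        return float(a), float(b), rmsd

    # Invented computed vs. experimental log K1 pairs, for illustration only.
    a, b, rmsd = fit_linear_correction([5.0, 12.0, 20.0, 28.0],
                                       [2.1, 6.0, 10.2, 14.1])
    ```

    When the computed values track experiment linearly, the RMSD of the corrected predictions is small even though the raw values are systematically overestimated.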

  9. Estimation of Local Bone Loads for the Volume of Interest.

    PubMed

    Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun

    2016-07-01

    Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational cost of the latter. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models has a severe limitation in a reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOIs in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in the SED distribution due to its own greatest deviation, whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.

  10. Regional variations in cancer survival: Impact of tumour stage, socioeconomic status, comorbidity and type of treatment in Norway.

    PubMed

    Skyrud, Katrine Damgaard; Bray, Freddie; Eriksen, Morten Tandberg; Nilssen, Yngvar; Møller, Bjørn

    2016-05-01

    Cancer survival varies by place of residence, but it remains uncertain whether this reflects differences in tumour, patient and treatment characteristics (including tumour stage, indicators of socioeconomic status (SES), comorbidity and information on received surgery and radiotherapy) or possibly regional differences in the quality of delivered health care. National population-based data from the Cancer Registry of Norway were used to identify cancer patients diagnosed in 2002-2011 (n = 258,675). We investigated survival from any type of cancer (all cancer sites combined), as well as for the six most common cancers. The effect of adjusting for prognostic factors on regional variations in cancer survival was examined by calculating the mean deviation, defined by the mean absolute deviation of the relative excess risks across health services regions. For prostate cancer, the mean deviation across regions was 1.78 when adjusting for age and sex only, but decreased to 1.27 after further adjustment for tumour stage. For breast cancer, the corresponding mean deviations were 1.34 and 1.27. Additional adjustment for other prognostic factors did not materially change the regional variation in any of the other sites. Adjustment for tumour stage explained most of the regional variations in prostate cancer survival, but had little impact for other sites. Unexplained regional variations after adjusting for tumour stage, SES indicators, comorbidity and type of treatment in Norway may be related to regional inequalities in the quality of cancer care. © 2015 UICC.
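    As a minimal sketch of the dispersion measure used here, the mean deviation is the mean absolute deviation of region-level relative excess risks from their mean; the four region values below are invented.

    ```python
    # Mean absolute deviation of per-region relative excess risks from their
    # mean, as a summary of regional variation (illustrative values only).
    def mean_absolute_deviation(values):
        m = sum(values) / len(values)
        return sum(abs(v - m) for v in values) / len(values)

    regional_excess_risks = [1.05, 0.92, 1.10, 0.98]  # hypothetical regions
    dispersion = mean_absolute_deviation(regional_excess_risks)
    ```

    A drop in this quantity after adding a covariate such as tumour stage is what the study reads as that covariate "explaining" part of the regional variation.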

  11. The impact of water temperature on the measurement of absolute dose

    NASA Astrophysics Data System (ADS)

    Islam, Naveed Mehdi

    To standardize reference dosimetry in radiation therapy, Task Group 51 (TG 51) of the American Association of Physicists in Medicine (AAPM) recommends that dose calibration measurements be made in a water tank at a depth of 10 cm and at a reference geometry. Methodologies are provided for calculating various correction factors to be applied in calculating the absolute dose. However, the protocol does not specify the water temperature to be used. In practice, the temperature of water during dosimetry may vary considerably between independent sessions and different centers. In this work the effect of water temperature on absolute dosimetry has been investigated. The density of water varies with temperature, which in turn may affect the beam attenuation and scatter properties. Furthermore, due to thermal expansion or contraction, the air volume inside the chamber may change. All of these effects can alter the measurement. Dosimetric measurements were made using a Farmer-type ion chamber on a Varian linear accelerator for 6 MV and 23 MV photon energies at temperatures ranging from 10 to 40 °C. Thermal insulation was designed for the water tank in order to maintain a relatively stable temperature over the duration of the experiment. Doses measured at higher temperatures were found to be consistently higher by a very small magnitude. Although the differences in dose were less than the uncertainty in each measurement, a linear regression of the data suggests that the trend is statistically significant, with p-values of 0.002 and 0.013 for the 6 and 23 MV beams respectively. For a 10-degree difference in water phantom temperature, which is a realistic deviation across clinics, the final calculated reference dose can differ by 0.24% or more. To address this effect, a reference temperature (e.g., 22 °C) can first be set as the standard; subsequently, a correction factor can be implemented for deviations from this reference. Such a correction factor is expected to be of similar magnitude as existing TG 51 recommended correction factors.
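    The proposed correction could be sketched as follows, assuming (hypothetically) a linear dose-temperature trend fitted by regression; the readings below are invented, and the paper's fitted coefficients are not reproduced.

    ```python
    # Sketch: regress measured dose on water temperature, then form a
    # multiplicative factor that normalises a reading taken at temperature t
    # to a chosen reference temperature (22 degrees C here). Readings invented.
    import numpy as np

    def temperature_correction(temps_c, doses, t_ref=22.0):
        slope, intercept = np.polyfit(temps_c, doses, 1)
        dose_at_ref = slope * t_ref + intercept
        return lambda t: dose_at_ref / (slope * t + intercept)

    correct = temperature_correction([10.0, 20.0, 30.0, 40.0],
                                     [0.9976, 1.0000, 1.0024, 1.0048])
    ```

    A reading taken at the reference temperature is left unchanged (factor 1), while readings from cooler water are scaled up slightly and warmer-water readings scaled down.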

  12. Impact of the radiotherapy technique on the correlation between dose-volume histograms of the bladder wall defined on MRI imaging and dose-volume/surface histograms in prostate cancer patients

    NASA Astrophysics Data System (ADS)

    Maggio, Angelo; Carillo, Viviana; Cozzarini, Cesare; Perna, Lucia; Rancati, Tiziana; Valdagni, Riccardo; Gabriele, Pietro; Fiorino, Claudio

    2013-04-01

    The aim of this study was to evaluate the correlation between the ‘true’ absolute and relative dose-volume histograms (DVHs) of the bladder wall, the dose-wall histogram (DWH) defined on MRI imaging, and other surrogates of bladder dosimetry in prostate cancer patients, planned with both 3D-conformal and intensity-modulated radiation therapy (IMRT) techniques. For 17 prostate cancer patients, previously treated with radical intent, CT and MRI scans were acquired and matched. The contours of the bladder walls were drawn using MRI images. External bladder surfaces were then used to generate artificial bladder walls by performing automatic contractions of 5, 7 and 10 mm. For each patient a 3D conformal radiotherapy (3DCRT) and an IMRT treatment plan were generated with a prescription dose of 77.4 Gy (1.8 Gy/fr), and the DVHs of the whole bladder and of the artificial walls (DVH-5/10) and dose-surface histograms (DSHs) were calculated and compared against the DWH in absolute and relative value, for both treatment planning techniques. Dedicated software (VODCA v. 4.4.0, MSS Inc.) was used to calculate the dose-volume/surface histograms. Correlation was quantified for selected dose-volume/surface parameters by the Spearman correlation coefficient. The agreement between %DWH and DVH5, DVH7 and DVH10 was found to be very good (maximum average deviations below 2%, SD < 5%): DVH5 showed the best agreement. The correlation was slightly better for absolute (R = 0.80-0.94) compared to relative (R = 0.66-0.92) histograms. The DSH was also found to be highly correlated with the DWH, although slightly higher deviations were generally found. The DVH was not a good surrogate of the DWH (R < 0.7 for most parameters). When comparing the two treatment techniques, more pronounced differences between relative histograms were seen for IMRT with respect to 3DCRT (p < 0.0001).

  13. Reassessment of carotid intima-media thickness by standard deviation score in children and adolescents after Kawasaki disease.

    PubMed

    Noto, Nobutaka; Kato, Masataka; Abe, Yuriko; Kamiyama, Hiroshi; Karasawa, Kensuke; Ayusawa, Mamoru; Takahashi, Shori

    2015-01-01

    Previous studies that used carotid ultrasound have been largely conflicting with regard to whether or not patients after Kawasaki disease (KD) have a greater carotid intima-media thickness (CIMT) than controls. To test the hypothesis that there are significant differences between the values of CIMT expressed as absolute values and as standard deviation scores (SDS) in children and adolescents after KD and controls, we reviewed 12 published articles regarding CIMT in KD patients and controls. The mean ± SD of absolute CIMT (mm) in the KD patients and controls obtained from each article was transformed to SDS (CIMT-SDS) using age-specific reference values established by Jourdan et al. (J: n = 247) and our own data (N: n = 175), and the results among these 12 articles were compared between the two groups and against the references for comparison of racial disparities. There were no significant differences in mean absolute CIMT and mean CIMT-SDS for J between KD patients and controls (0.46 ± 0.06 mm vs. 0.44 ± 0.04 mm, p = 0.133, and 1.80 ± 0.84 vs. 1.25 ± 0.12, p = 0.159, respectively). However, there were significant differences in mean CIMT-SDS for N between KD patients and controls (0.60 ± 0.71 vs. 0.01 ± 0.65, p = 0.042). When we assessed the nine articles on Asian subjects, the difference in CIMT-SDS between the two groups was invariably significant only for N (p = 0.015). Compared with the reference values, CIMT-SDS of controls was within the normal range at a rate of 41.6% for J and 91.6% for N. These results indicate that age- and race-specific reference values for CIMT are mandatory for accurate assessment of vascular status in healthy children and adolescents, particularly in those after KD, who are considered at increased long-term cardiovascular risk.
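    The SDS transformation the study relies on is simply a z score against age-specific reference data; the reference mean and SD below are invented placeholders, not the values of Jourdan et al. or the authors' own series.

    ```python
    # z score (standard deviation score) of an absolute CIMT measurement
    # against reference data; reference mean/SD here are placeholders.
    def cimt_sds(value_mm, ref_mean_mm, ref_sd_mm):
        return (value_mm - ref_mean_mm) / ref_sd_mm

    # A CIMT of 0.46 mm against a hypothetical reference of 0.42 +/- 0.04 mm.
    score = cimt_sds(0.46, 0.42, 0.04)
    ```

    Because the transform divides by the reference SD, the same absolute CIMT can yield quite different SDS values under different reference populations, which is the racial-disparity point the abstract makes.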

  14. Reproducibility of visual acuity assessment in normal and low visual acuity.

    PubMed

    Becker, Ralph; Teichler, Gunnar; Gräf, Michael

    2007-01-01

    To assess the reproducibility of visual acuity measurements in both the upper and lower range of visual acuity, the retroilluminated ETDRS 1 and ETDRS 2 charts (Precision Vision) were used. Both charts use the same letters. The sequence of the charts followed a pseudorandomized protocol. The examination distance was 4.0 m. When visual acuity was below 0.16 or 0.03, the examination distance was reduced to 1 m or 0.4 m, respectively, using an appropriate near correction. Visual acuity measurements obtained during the same session with both charts were compared. A total of 100 patients (age 8-90 years; median 60.5) with various eye disorders, including 39 with amblyopia due to strabismus, were tested in addition to 13 healthy volunteers (age 18-33 years; median 24). At least 3 out of 5 optotypes per line had to be correctly identified to pass that line. Wrong answers were monitored. The interpolated logMAR score was calculated. In the patients, the eye with the lower visual acuity was assessed, and in the healthy subjects the right eye. Differences between ETDRS 1 and ETDRS 2 acuity were compared. The mean logMAR values for ETDRS 1 and ETDRS 2 were -0.17 and -0.14 in the healthy eyes and 0.55 and 0.57 in the entire group. The absolute difference between ETDRS 1 and ETDRS 2 was (mean +/- standard deviation) 0.051 +/- 0.04 for the healthy eyes and 0.063 +/- 0.05 in the entire group. In the acuity range below 0.1 (logMAR > 1.0), the absolute difference (mean +/- standard deviation) between ETDRS 1 and ETDRS 2 of 0.072 +/- 0.04 did not significantly exceed the mean absolute difference in healthy eyes (p = 0.17). Regression analysis (|ETDRS 1 - ETDRS 2| vs. ETDRS 1) showed a slight increase of the difference between the two values with lower visual acuity (p = 0.0505; r = 0.18). Assuming correct measurement, the reproducibility of visual acuity measurements in the lower acuity range is not significantly worse than in normals.

  15. Current Global Absolute Plate Velocities Inferred from the Trends of Hotspot Tracks: Implications for Motion between Groups of Hotspots and Comparison and Combination with Absolute Velocities Inferred from the Orientation of Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Wang, C.; Gordon, R. G.; Zheng, L.

    2016-12-01

    Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007) constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010) results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates results in a standard deviation of trends of only 13°, a statistically significant reduction from the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.

  16. G3(MP2)-CEP theory and applications for compounds containing atoms from representative first, second and third row elements of the periodic table.

    PubMed

    Pereira, Douglas Henrique; Rocha, Carlos Murilo Romero; Morgon, Nelson Henrique; Custodio, Rogério

    2015-08-01

    The compact effective potential (CEP) pseudopotential was adapted to the G3(MP2) theory, herein referred to as G3(MP2)-CEP, and applied to the calculation of enthalpies of formation, ionization energies, atomization energies, and electron and proton affinities for 446 species containing elements of the first, second, and third rows of the periodic table. A total mean absolute deviation of 1.67 kcal mol⁻¹ was achieved with G3(MP2)-CEP, compared with 1.47 kcal mol⁻¹ for G3(MP2). Electron affinities and enthalpies of formation are the properties exhibiting the lowest deviations with respect to the original G3(MP2) theory. The use of pseudopotentials and composite theories in the framework of the G3 theory is feasible and compatible with the all-electron approach. Graphical Abstract: Application of composite methods in high-level ab initio calculations.

  17. Viscosities of aqueous blended amines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, C.H.; Li, M.H.

    1997-07-01

    Solutions of alkanolamines are an industrially important class of compounds used in the natural gas, oil refining, petrochemical, and synthetic ammonia industries for the removal of acidic components such as CO₂ and H₂S from gas streams. The viscosities of aqueous mixtures of diethanolamine (DEA) + N-methyldiethanolamine (MDEA), DEA + 2-amino-2-methyl-1-propanol (AMP), and monoethanolamine (MEA) + 2-piperidineethanol (2-PE) were measured from 30 °C to 80 °C. A Redlich-Kister equation for the viscosity deviation was applied to represent the viscosity. On the basis of the available viscosity data for five ternary systems, MEA + MDEA + H₂O, MEA + AMP + H₂O, DEA + MDEA + H₂O, DEA + AMP + H₂O, and MEA + 2-PE + H₂O, a generalized set of binary parameters was determined. For the viscosity calculation of the systems tested, the overall average absolute percent deviation is about 1.0% for a total of 499 data points.
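    One common form of the Redlich-Kister expansion for a binary viscosity deviation can be sketched as below; the expansion coefficients and viscosities are placeholders, not the fitted parameters of this work.

    ```python
    # Redlich-Kister sketch for one binary pair:
    #   delta_eta(x1) = x1*x2 * sum_k A_k * (x1 - x2)**k
    # with the mixture viscosity written as the mole-fraction-weighted mean of
    # the pure-component viscosities plus the deviation. Values are invented.
    def viscosity_deviation(x1, coeffs):
        x2 = 1.0 - x1
        return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(coeffs))

    def mixture_viscosity(x1, eta1, eta2, coeffs):
        return x1 * eta1 + (1.0 - x1) * eta2 + viscosity_deviation(x1, coeffs)
    ```

    By construction the deviation vanishes at x1 = 0 and x1 = 1, so the pure-component viscosities are recovered exactly at the composition endpoints.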

  18. Gravimetric method for the determination of diclofenac in pharmaceutical preparations.

    PubMed

    Tubino, Matthieu; De Souza, Rafael L

    2005-01-01

    A gravimetric method for the determination of diclofenac in pharmaceutical preparations was developed. Diclofenac is precipitated from aqueous solution with copper(II) acetate at pH 5.3 (acetic acid/acetate buffer). Sample aliquots contained approximately the same quantity of drug as the content of tablets (50 mg) or ampules (75 mg). The observed standard deviation was about +/- 2 mg; therefore, the relative standard deviation (RSD) was approximately 4% for tablet and 3% for ampule preparations. The results were compared with those obtained with the liquid chromatography method recommended in the United States Pharmacopoeia using the statistical Student's t-test. Complete agreement was observed. It is possible to obtain more precise results using larger aliquots, for example 200 mg, in which case the RSD falls to 1%. This gravimetric method, contrary to what is expected for this kind of procedure, is relatively fast and simple to perform. Its main advantage is the absolute character of gravimetric analysis.

  19. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class) for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
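    The four accuracy measures named above can be sketched, for paired map and reference proportions, as follows; this uses equal-weight sample formulas with invented data and ignores the design-based weighting of the two-stage estimators the paper derives.

    ```python
    # MD, MAD, RMSE and CORR for map vs. reference land-cover proportions,
    # equal-weight versions only (the paper's design-based estimators add
    # survey weights for the two-stage cluster sample).
    import math

    def accuracy_measures(map_p, ref_p):
        n = len(map_p)
        d = [m - r for m, r in zip(map_p, ref_p)]
        md = sum(d) / n                                  # mean deviation
        mad = sum(abs(x) for x in d) / n                 # mean absolute deviation
        rmse = math.sqrt(sum(x * x for x in d) / n)      # root mean square error
        mm, mr = sum(map_p) / n, sum(ref_p) / n
        cov = sum((m - mm) * (r - mr) for m, r in zip(map_p, ref_p))
        var_m = sum((m - mm) ** 2 for m in map_p)
        var_r = sum((r - mr) ** 2 for r in ref_p)
        corr = cov / math.sqrt(var_m * var_r)            # correlation
        return md, mad, rmse, corr
    ```

    MD keeps the sign of the errors (systematic over- or under-mapping), while MAD and RMSE summarise their magnitude and CORR their linear agreement.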

  20. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
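    The local model used for beat detection can be sketched as below: an exponentially decaying cosine is evaluated over a short signal window for candidate frequencies in the physiological range, and the best least-squares fit marks a beat. The fixed amplitude, decay and phase here are simplifying assumptions for illustration, not the algorithm's actual parameterisation.

    ```python
    # Fit-by-scan sketch of the local model: score each candidate frequency by
    # the squared error of a decaying-cosine template against the window.
    import numpy as np

    def decaying_cosine(t, amp, decay, freq_hz, phase):
        return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq_hz * t + phase)

    def best_frequency(t, window, freqs_hz):
        """Candidate frequency whose template best fits the window (L2 error)."""
        errors = []
        for f in freqs_hz:
            template = decaying_cosine(t, window.max(), 0.5, f, 0.0)
            errors.append(float(np.mean((window - template) ** 2)))
        return freqs_hz[int(np.argmin(errors))]
    ```

    The reciprocal of the winning frequency gives a local beat-to-beat interval, from which heart rate and its variability follow.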

  1. Development of Colle-Salvetti type electron-nucleus correlation functional for MC-DFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udagawa, Taro; Tsuneda, Takao; Tachikawa, Masanori

    2015-12-31

    A Colle-Salvetti type electron-nucleus correlation functional for multicomponent density-functional theory is proposed. We demonstrate that our correlation functional quantitatively reproduces the quantum nuclear effects of protons; the mean absolute deviation value is 2.8 millihartrees for the optimized structure of hydrogen-containing molecules. We also show other practical calculations with our new electron-deuteron and electron-triton correlation functionals. Since this functional is derived without any unphysical assumption, the strategy taken in this development will be a promising recipe to make new functionals for the potentials of other particles’ interactions.

  2. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    NASA Astrophysics Data System (ADS)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
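    The four distributional quantities feeding the new priority rule can be computed per job from its processing times across machines; how the rule weights and combines them is the paper's contribution and is not reproduced here.

    ```python
    # Average, mean absolute deviation, skewness and kurtosis of a job's
    # processing times (population formulas; the priority rule's weighting
    # of these four terms is not reproduced).
    import statistics

    def processing_time_stats(times):
        n = len(times)
        mean = statistics.fmean(times)
        mad = sum(abs(t - mean) for t in times) / n
        sd = statistics.pstdev(times)
        skew = sum(((t - mean) / sd) ** 3 for t in times) / n
        kurt = sum(((t - mean) / sd) ** 4 for t in times) / n
        return mean, mad, skew, kurt
    ```

    Together these describe not just how long a job's operations are on average but how lopsided and heavy-tailed their distribution is, which is what the rule uses to order jobs for NEH-style insertion.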

  3. Asymptotics of nonparametric L-1 regression models with dependent data

    PubMed Central

    ZHAO, ZHIBIAO; WEI, YING; LIN, DENNIS K.J.

    2013-01-01

    We investigate asymptotic properties of least-absolute-deviation or median quantile estimates of the location and scale functions in nonparametric regression models with dependent data from multiple subjects. Under a general dependence structure that allows for longitudinal data and some spatially correlated data, we establish uniform Bahadur representations for the proposed median quantile estimates. The obtained Bahadur representations provide deep insights into the asymptotic behavior of the estimates. Our main theoretical development is based on studying the modulus of continuity of kernel weighted empirical process through a coupling argument. Progesterone data is used for an illustration. PMID:24955016
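    The L1 idea behind these estimators can be seen in its simplest form: the constant minimising the sum of absolute deviations is the sample median, which the paper's kernel-weighted local fits generalise to location and scale functions.

    ```python
    # The sample median minimises sum(|x_i - c|) over c, which is why
    # least-absolute-deviation location estimates are robust to outliers.
    def lad_location(values):
        s = sorted(values)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

    data = [1.0, 2.0, 2.5, 3.0, 100.0]  # the outlier barely moves the estimate
    estimate = lad_location(data)
    ```

    A least-squares (mean-based) estimate on the same data would be dragged far toward the outlier, illustrating why the L1 criterion suits noisy longitudinal measurements.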

  4. Predicting Functional Capacity From Measures of Muscle Mass in Postmenopausal Women.

    PubMed

    Orsatti, Fábio Lera; Nunes, Paulo Ricardo Prado; Souza, Aletéia de Paula; Martins, Fernanda Maria; de Oliveira, Anselmo Alves; Nomelini, Rosekeila Simões; Michelin, Márcia Antoniazi; Murta, Eddie Fernando Cândido

    2017-06-01

    Menopause increases body fat and decreases muscle mass and strength, which contribute to sarcopenia. The amount of appendicular muscle mass has been frequently used to diagnose sarcopenia, and different measures of appendicular muscle mass have been proposed. However, no studies have compared the most salient measure, appendicular muscle mass corrected by body fat, with physical function in postmenopausal women. To examine the association of 3 different measurements of appendicular muscle mass (absolute, corrected by stature, and corrected by body fat) with physical function in postmenopausal women. Cross-sectional descriptive study. Outpatient geriatric and gynecological clinic. Forty-eight postmenopausal women with a mean age (standard deviation [SD]) of 62.1 ± 8.2 years, a mean (SD) length of menopause of 15.7 ± 9.8 years, and a mean (SD) body fat of 43.6% ± 9.8%. Not applicable. Appendicular muscle mass was measured with dual-energy x-ray absorptiometry. Physical function was measured by a functional capacity questionnaire, a short physical performance battery, and a 6-minute walk test. Muscle quality (the ratio of leg extensor strength to lower-body mineral-free lean mass) and the sum of z scores (sum of the z scores of each physical function test) were computed to provide a global index of physical function. The regression analysis showed that appendicular muscle mass corrected by body fat was the strongest predictor of physical function. Each standard-deviation increase in appendicular muscle mass corrected by body fat was associated with a 59% increase in the mean sum of z scores, whereas each increase in absolute appendicular muscle mass and in appendicular muscle mass corrected by stature was associated with mean sum of z scores decreases of 23% and 36%, respectively. Muscle quality was associated with appendicular muscle mass corrected by body fat. 
These findings indicate that appendicular muscle mass corrected by body fat is a better predictor of physical function than the other measures of appendicular muscle mass in postmenopausal women. Level of Evidence: I. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  5. Bronchoscopic modalities to diagnose sarcoidosis.

    PubMed

    Benzaquen, Sadia; Aragaki-Nakahodo, Alejandro Adolfo

    2017-09-01

    Several studies have investigated different bronchoscopic techniques to obtain tissue diagnosis in patients with suspected sarcoidosis when the diagnosis cannot be based on clinicoradiographic findings alone. In this review, we will describe the most recent and relevant evidence from different bronchoscopic modalities to diagnose sarcoidosis. Despite multiple available bronchoscopic modalities to procure tissue samples to diagnose sarcoidosis, the vast majority of evidence favors endobronchial ultrasound transbronchial needle aspiration to diagnose Scadding stages 1 and 2 sarcoidosis. Transbronchial lung cryobiopsy is a new technique that is mainly used to aid in the diagnosis of undifferentiated interstitial lung disease; however, we will discuss its potential use in sarcoidosis. This review illustrates the limited information about the different bronchoscopic techniques to aid in the diagnosis of pulmonary sarcoidosis. However, it demonstrates that the combination of available bronchoscopic techniques increases the diagnostic yield for suspected sarcoidosis.

  6. Development and evaluation of a prototype tracking system using the treatment couch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Stephanie, E-mail: stephanie.lang@usz.ch; Riesterer, Oliver; Klöck, Stephan

    2014-02-15

    Purpose: Tumor motion increases safety margins around the clinical target volume and leads to an increased dose to the surrounding healthy tissue. The authors have developed and evaluated a one-dimensional treatment couch tracking system to counter-steer respiratory tumor motion. Three different motion detection sensors with different lag times were evaluated. Methods: The couch tracking system consists of a motion detection sensor, which can be the topometrical system Topos (Cyber Technologies, Germany), the respiratory gating system RPM (Varian Medical Systems) or a laser triangulation system (Micro Epsilon), and the Protura treatment couch (Civco Medical Systems). The control of the treatment couch was implemented in the block diagram environment Simulink (MathWorks). To achieve real-time performance, the Simulink models were executed on a real-time engine provided by Real-Time Windows Target (MathWorks). A proportional-integral control system was implemented. The lag time of the couch tracking system using the three different motion detection sensors was measured. The geometrical accuracy of the system was evaluated by measuring the mean absolute deviation from the reference (static position) during motion tracking. This deviation was compared to the mean absolute deviation without tracking, and a reduction factor was defined. A hexapod system was moved according to seven respiration patterns previously acquired with the RPM system, as well as according to a sin⁶ function with two different frequencies (0.33 and 0.17 Hz), and the treatment table compensated the motion. Results: A prototype system for treatment couch tracking of respiratory motion was developed. The laser-based tracking system, with a small lag time of 57 ms, reduced the residual motion by a factor of 11.9 ± 5.5 (mean value ± standard deviation). An increase in delay time from 57 to 130 ms (RPM-based system) resulted in a reduction by a factor of 4.7 ± 2.6. 
The Topos-based tracking system, with the largest lag time of 300 ms, achieved a mean reduction by a factor of 3.4 ± 2.3. The increase in the penumbra of a profile (1 × 1 cm²) for a motion of 6 mm was 1.4 mm. With tracking applied there was no increase in the penumbra. Conclusions: Couch tracking with the Protura treatment couch is achievable. To reliably track all possible respiration patterns without prediction filters, a short lag time below 100 ms is needed. More scientific work is necessary to extend our prototype to tracking of internal motion.
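    The compensation loop is a proportional-integral controller. A minimal discrete-time sketch in Python (the gains, sample period and toy plant are illustrative assumptions, not the clinical system's parameters):

```python
class PIController:
    """Discrete proportional-integral controller: the output
    counter-steers the measured position error (illustrative
    gains, not the Protura system's)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt          # accumulate error
        return self.kp * error + self.ki * self.integral

# Toy simulation: drive the couch position toward a static target.
ctrl = PIController(kp=0.8, ki=2.0, dt=0.01)
couch, target = 0.0, 1.0
for _ in range(2000):                             # 20 s at 100 Hz
    couch += ctrl.update(target - couch) * 0.01
```

    In the real system the target is the continuously measured respiratory position, so the controller's lag time directly limits how much residual motion remains.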

  7. A non-interventional comparative study of the 20:1 combination of cafedrine/theodrenaline versus ephedrine for the treatment of intra-operative arterial hypotension: the 'HYPOTENS' study design and rationale.

    PubMed

    Eberhart, Leopold; Geldner, Götz; Huljic, Susanne; Marggraf, Kerstin; Keller, Thomas; Koch, Tilo; Kranke, Peter

    2018-06-01

    To compare the effectiveness of 20:1 cafedrine/theodrenaline approved for use in Germany to ephedrine in the restoration of arterial blood pressure and on post-operative outcomes in patients with intra-operative arterial hypotension of any origin under standard clinical practice conditions. 'HYPOTENS' is a national, multi-center, prospective, open-label, two-armed, non-interventional study. Effectiveness and post-operative outcome following cafedrine/theodrenaline or ephedrine therapy will be evaluated in two cohorts of hypotensive patients. Cohort A includes patients aged ≥50 years with ASA-classification 2-4 undergoing non-emergency surgical procedures under general anesthesia. Cohort B comprises patients undergoing Cesarean section under spinal anesthesia. Participating surgical departments will be assigned to a treatment arm by routinely used anti-hypotensive agent. To minimize bias, matched department pairs will be compared in a stratified selection process. The composite primary end-point is the lower absolute deviation from individually determined target blood pressure (IDTBP) and the incidence of heart rate ≥100 beats/min in the first 15 min. Secondary end-points include incidence and degree of early post-operative delirium (cohort A), severity of fetal acidosis in the newborn (cohort B), upper absolute deviation from IDTBP, percentage increase in systolic blood pressure, and time to IDTBP. This open-label, non-interventional study design mirrors daily practice in the treatment of patients with intra-operative hypotension and ensures full treatment decision autonomy with respect to each patient's individual condition. Selection of participating sites by a randomization process addresses bias without interfering with the non-interventional nature of the study. First results are expected in 2018. ClinicalTrials.gov identifier: NCT02893241; DRKS identifier: DRKS00010740.

  8. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, 
Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.

  9. Development of a transformation model to derive general population-based utility: Mapping the pruritus-visual analog scale (VAS) to the EQ-5D utility.

    PubMed

    Park, Sun-Young; Park, Eun-Ja; Suh, Hae Sun; Ha, Dongmun; Lee, Eui-Kyung

    2017-08-01

    Although nonpreference-based disease-specific measures are widely used in clinical studies, they cannot generate utilities for economic evaluation. A solution to this problem is to estimate utilities from disease-specific instruments using the mapping function. This study aimed to develop a transformation model for mapping the pruritus-visual analog scale (VAS) to the EuroQol 5-Dimension 3-Level (EQ-5D-3L) utility index in pruritus. A cross-sectional survey was conducted with a sample (n = 268) drawn from the general population of South Korea. Data were randomly divided into 2 groups, one for estimating and the other for validating mapping models. To select the best model, we developed and compared 3 separate models using demographic information and the pruritus-VAS as independent variables. The predictive performance was assessed using the mean absolute deviation and root mean square error in a separate dataset. Among the 3 models, model 2 using age, age squared, sex, and the pruritus-VAS as independent variables had the best performance based on the goodness of fit and model simplicity, with a log likelihood of 187.13. The 3 models had similar precision errors based on mean absolute deviation and root mean square error in the validation dataset. No statistically significant difference was observed between the mean observed and predicted values in all models. In conclusion, model 2 was chosen as the preferred mapping model. Outcomes measured as the pruritus-VAS can be transformed into the EQ-5D-3L utility index using this mapping model, which makes an economic evaluation possible when only pruritus-VAS data are available. © 2017 John Wiley & Sons, Ltd.
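    Predictive performance of a mapping model is assessed with the mean absolute deviation and the root mean square error between observed and predicted utilities. A minimal sketch (function names and data are illustrative, not from the study):

```python
def mean_absolute_deviation(observed, predicted):
    # Average absolute prediction error across the validation set.
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def root_mean_square_error(observed, predicted):
    # Square root of the average squared error; penalizes large
    # deviations more heavily than the mean absolute deviation.
    n = len(observed)
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5

# Hypothetical observed vs. model-predicted EQ-5D utilities.
obs = [0.85, 0.72, 0.91, 0.60]
pred = [0.80, 0.75, 0.88, 0.66]
mad = mean_absolute_deviation(obs, pred)
rmse = root_mean_square_error(obs, pred)
```

    Both metrics are computed on a held-out validation split, as in the study's estimation/validation design.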

  10. G4CEP: A G4 theory modification by including pseudopotential for molecules containing first-, second- and third-row representative elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Cleuton de Souza; Instituto de Ciências Exatas e Tecnologia, Universidade Federal do Amazonas, Campus de Itacoatiara, 69100-021 Itacoatiara, Amazonas; Pereira, Douglas Henrique

    2016-05-28

    The G4CEP composite method was developed from the respective G4 all-electron version by considering the implementation of compact effective pseudopotential (CEP). The G3/05 test set was used as reference to benchmark the adaptation, treating in this work atoms and compounds from the first and second periods of the periodic table, as well as representative elements of the third period, comprising 440 thermochemical data. G4CEP did not reach the same level of accuracy as the G4 all-electron theory: G4CEP presented a mean absolute error around 1.09 kcal mol⁻¹, while the original method presents a deviation corresponding to 0.83 kcal mol⁻¹. The similarity of the optimized molecular geometries between G4 and G4CEP indicates that core-electron effects and basis set adjustments may be pointed out as a significant factor responsible for the large discrepancies between the pseudopotential results and the experimental data, or even that the all-electron calculations are more efficient either in their formulation or in the cancellation of errors. When the G4CEP mean absolute error (1.09 kcal mol⁻¹) is compared to 1.29 kcal mol⁻¹ from G3CEP, it does not seem so efficient. However, while the G3CEP uncertainty is ±4.06 kcal mol⁻¹, the G4CEP deviation is ±2.72 kcal mol⁻¹. Therefore, the G4CEP theory is considerably more reliable than any previous combination of composite theory and pseudopotential, particularly for enthalpies of formation and electron affinities.

  11. Extensive validation of CM SAF surface radiation products over Europe.

    PubMed

    Urraca, Ruben; Gracia-Amillo, Ana M; Koubli, Elena; Huld, Thomas; Trentmann, Jörg; Riihelä, Aku; Lindfors, Anders V; Palmer, Diane; Gottschalg, Ralph; Antonanzas-Torres, Fernando

    2017-09-15

    This work presents a validation of three satellite-based radiation products over an extensive network of 313 pyranometers across Europe, from 2005 to 2015. The products used have been developed by the Satellite Application Facility on Climate Monitoring (CM SAF): one geostationary climate dataset (SARAH-JRC), one polar-orbiting climate dataset (CLARA-A2) and one geostationary operational product. Further, the ERA-Interim reanalysis is also included in the comparison. The main objective is to determine the quality level of the daily means of the CM SAF datasets, identifying their limitations, as well as analyzing the different factors that can interfere with an adequate validation of the products. The quality of the pyranometers was the most critical source of uncertainty identified. In this respect, the use of records from Second Class pyranometers and silicon-based photodiodes increased the absolute error and the bias, as well as the dispersion of both metrics, preventing an adequate validation of the daily means. The best spatial estimates for the three datasets were obtained in Central Europe, with a mean absolute deviation (MAD) within 8-13 W/m², whereas the MAD always increased at high latitudes, snow-covered surfaces, high mountain ranges and coastal areas. Overall, SARAH-JRC's accuracy was demonstrated over a dense network of stations, making it the most consistent dataset for climate monitoring applications. The operational dataset was comparable to SARAH-JRC in Central Europe but lacked the temporal stability of climate datasets, while CLARA-A2 did not achieve the same level of accuracy, although its predictions showed high uniformity with a small negative bias. The ERA-Interim reanalysis shows by far the largest deviations from the surface reference measurements.

  12. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. 
W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164

  13. Use of consensus development to establish national research priorities in critical care

    PubMed Central

    Vella, Keryn; Goldfrad, Caroline; Rowan, Kathy; Bion, Julian; Black, Nick

    2000-01-01

    Objectives To test the feasibility of using a nominal group technique to establish clinical and health services research priorities in critical care and to test the representativeness of the group's views. Design Generation of topics by means of a national survey; a nominal group technique to establish the level of consensus; a survey to test the representativeness of the results. Setting United Kingdom and Republic of Ireland. Subjects Nominal group composed of 10 doctors (8 consultants, 2 trainees) and 2 nurses. Main outcome measure Level of support (median) and level of agreement (mean absolute deviation from the median) derived from a 9 point Likert scale. Results Of the 325 intensive care units approached, 187 (58%) responded, providing about 1000 suggestions for research. Of the 106 most frequently suggested topics considered by the nominal group, 37 attracted strong support, 48 moderate support and 21 weak support. There was more agreement after the group had met—overall mean of the mean absolute deviations from the median fell from 1.41 to 1.26. The group's views represented the views of the wider community of critical care staff (r=0.73, P<0.01). There was no significant difference in the views of staff from teaching or from non-teaching hospitals. Of the 37 topics that attracted the strongest support, 24 were concerned with organisational aspects of critical care and only 13 with technology assessment or clinical research. Conclusions A nominal group technique is feasible and reliable for determining research priorities among clinicians. This approach is more democratic and transparent than the traditional methods used by research funding bodies. The results suggest that clinicians perceive research into the best ways of delivering and organising services as a high priority. PMID:10753149
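    Support and agreement in the nominal group are summarized as the median rating and the mean absolute deviation from the median on the 9-point Likert scale. A minimal sketch (the panel ratings are made up):

```python
import statistics

def support_and_agreement(ratings):
    """Median rating (level of support) and mean absolute deviation
    from the median (level of agreement; lower = more agreement)."""
    med = statistics.median(ratings)
    mad_from_median = sum(abs(r - med) for r in ratings) / len(ratings)
    return med, mad_from_median

# Hypothetical 9-point Likert ratings from a 12-member panel.
ratings = [7, 8, 8, 9, 7, 8, 6, 8, 9, 7, 8, 8]
support, disagreement = support_and_agreement(ratings)
```

    Averaging the per-topic deviation values across all topics gives the overall agreement figure the study tracks before and after the group meeting.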

  14. Low back pain and postural control, effects of task difficulty on centre of pressure and spinal kinematics.

    PubMed

    Schelldorfer, Sarah; Ernst, Markus Josef; Rast, Fabian Marcel; Bauer, Christoph Michael; Meichtry, André; Kool, Jan

    2015-01-01

    Associations between low back pain and standing postural control (PC) deficits are reported inconsistently. Demands on PC adaptation strategies are increased by restraining the input of the visual or somatosensory senses. The objective of the current study is to investigate whether PC adaptations of the spine, hip and centre of pressure (COP) differ between patients reporting non-specific low back pain (NSLBP) and asymptomatic controls. The PC adaptation strategies of the thoracic and lumbar spine, the hip and the COP were measured in fifty-seven NSLBP patients and 22 asymptomatic controls. We tested three "feet together" conditions with increasing demands on PC strategies, using inertial measurement units (IMUs) on the spine and a Wii balance board for COP parameters. The differences between NSLBP patients and controls were most apparent when the participants were blindfolded but remained on a firm surface. While NSLBP patients had larger thoracic and lumbar spine mean absolute deviations of position (MADpos) in the frontal plane, the same parameters decreased in control subjects (relative change (RC): 0.23, 95% confidence interval: 0.03 to 0.45 and 0.03 to 0.48). The mean absolute deviation of velocity (MADvel) of the thoracic spine in the frontal plane showed a similar and significant effect (RC: 0.12, 95% CI: 0.01 to 0.25). Gender, age and pain during the measurements affected some parameters significantly. PC adaptations differ between NSLBP patients and asymptomatic controls. The differences are most apparent for the thoracic and lumbar parameters of MADpos, in the frontal plane, and when visual input was removed. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. G4CEP: A G4 theory modification by including pseudopotential for molecules containing first-, second- and third-row representative elements.

    PubMed

    Silva, Cleuton de Souza; Pereira, Douglas Henrique; Custodio, Rogério

    2016-05-28

    The G4CEP composite method was developed from the respective G4 all-electron version by considering the implementation of compact effective pseudopotential (CEP). The G3/05 test set was used as reference to benchmark the adaptation, treating in this work atoms and compounds from the first and second periods of the periodic table, as well as representative elements of the third period, comprising 440 thermochemical data. G4CEP did not reach the same level of accuracy as the G4 all-electron theory: G4CEP presented a mean absolute error around 1.09 kcal mol(-1), while the original method presents a deviation corresponding to 0.83 kcal mol(-1). The similarity of the optimized molecular geometries between G4 and G4CEP indicates that core-electron effects and basis set adjustments may be pointed out as a significant factor responsible for the large discrepancies between the pseudopotential results and the experimental data, or even that the all-electron calculations are more efficient either in their formulation or in the cancellation of errors. When the G4CEP mean absolute error (1.09 kcal mol(-1)) is compared to 1.29 kcal mol(-1) from G3CEP, it does not seem so efficient. However, while the G3CEP uncertainty is ±4.06 kcal mol(-1), the G4CEP deviation is ±2.72 kcal mol(-1). Therefore, the G4CEP theory is considerably more reliable than any previous combination of composite theory and pseudopotential, particularly for enthalpies of formation and electron affinities.

  16. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford’s Law

    PubMed Central

    López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador

    2018-01-01

    Objective Newcomb-Benford’s Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used on many datasets as a screening tool for identifying data trustworthiness problems. This study compares publicly available waiting list (WL) data from Finland and Spain to test NBL as an instrument for flagging potential manipulation in WLs. Design Analysis of the frequency of first digits in Finnish and Spanish WLs to determine whether their distribution follows the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson’s χ2, mean absolute deviation and Kuiper tests. Setting/participants Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results WL data reported by the Finnish health system fit first-digit NBL according to all statistical tests used (p=0.6519 in the χ2 test). For the Spanish data, this hypothesis was rejected in all tests (p<0.0001 in the χ2 test). Conclusions Testing deviations from the NBL distribution can be a useful tool for identifying problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
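    The first-digit test compares observed leading-digit frequencies with the Benford expectation P(d) = log10(1 + 1/d). A minimal sketch of the Pearson χ2 statistic used for screening (the study also applies mean absolute deviation and Kuiper tests):

```python
import math

def benford_chi_square(values):
    """Pearson chi-square statistic of observed first digits against
    the Newcomb-Benford expectation log10(1 + 1/d).
    Assumes all values are positive and nonzero."""
    counts = [0] * 9
    for v in values:
        d = int(str(abs(v)).lstrip("0.")[0])  # first significant digit
        counts[d - 1] += 1
    n = sum(counts)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts[d - 1] - expected) ** 2 / expected
    return chi2  # compare against a chi-square with 8 degrees of freedom
```

    A large statistic flags data whose leading digits deviate from the NBL pattern and so warrant further scrutiny; for example, powers of 2 track the Benford distribution closely, while a column of identical figures does not.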

  17. A simple method to relate microwave radiances to upper tropospheric humidity

    NASA Astrophysics Data System (ADS)

    Buehler, S. A.; John, V. O.

    2005-01-01

    A brightness temperature (BT) transformation method can be applied to microwave data to retrieve Jacobian weighted upper tropospheric relative humidity (UTH) in a broad layer centered roughly between 6 and 8 km altitude. The UTH bias is below 4% RH, and the relative UTH bias below 20%. The UTH standard deviation is between 2 and 6.5% RH in absolute numbers, or between 10 and 27% in relative numbers. The standard deviation is dominated by the regression noise, resulting from vertical structure not accounted for by the simple transformation relation. The UTH standard deviation due to radiometric noise alone has a relative standard deviation of approximately 7% for a radiometric noise level of 1 K. The retrieval performance was shown to be of almost constant quality for all viewing angles and latitudes, except for problems at high latitudes due to surface effects. A validation of AMSU UTH against radiosonde UTH shows reasonable agreement if known systematic differences between AMSU and radiosonde are taken into account. When the method is applied to supersaturation studies, regression noise and radiometric noise could lead to an apparent supersaturation even if there were no supersaturation. For a radiometer noise level of 1 K the drop-off slope of the apparent supersaturation is 0.17% RH-1, for a noise level of 2 K the slope is 0.12% RH-1. The main conclusion from this study is that the BT transformation method is very well suited for microwave data. Its particular strength is in climatological applications where the simplicity and the a priori independence are key advantages.
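    The transformation is a log-linear regression of UTH on brightness temperature, of the form UTH = exp(a + b·TB). A minimal sketch (the coefficients below are illustrative placeholders, not the paper's fitted channel-specific values):

```python
import math

def uth_from_bt(t_b, a=25.6, b=-0.107):
    """Jacobian-weighted upper tropospheric humidity (as a fraction of
    relative humidity) from a single microwave brightness temperature,
    via the log-linear relation UTH = exp(a + b * t_b).
    The coefficients a, b are illustrative placeholders; in practice
    they come from a regression against radiosonde or model data."""
    return math.exp(a + b * t_b)
```

    Because b is negative, colder brightness temperatures (a more opaque, moister upper troposphere) map to higher UTH.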

  18. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    NASA Astrophysics Data System (ADS)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, absolute thickness measurements in borosilicate glass are presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications not possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
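    The basic phase-to-path-length relation for pulse-echo operation can be sketched as follows; this illustrates the principle only, not the CFPPLL instrument's actual tracking algorithm, and assumes the total accumulated round-trip phase and the sound velocity are known:

```python
import math

def thickness_from_phase(phase_rad: float, velocity: float, freq: float) -> float:
    """Thickness d from the total round-trip phase in pulse-echo operation.
    The echo travels 2d, so phase = 2*pi*f*(2d/v), which inverts to
    d = phase * v / (4*pi*f)."""
    return phase_rad * velocity / (4 * math.pi * freq)
```

    For example, a round-trip phase of 4π at 1 MHz in glass with v = 5640 m/s corresponds to a thickness of one wavelength, 5.64 mm; tracking phase rather than arrival time is what gives the absolute, high-resolution measurement.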

  19. Absolute Quantification of Rifampicin by MALDI Imaging Mass Spectrometry Using Multiple TOF/TOF Events in a Single Laser Shot

    NASA Astrophysics Data System (ADS)

    Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.

    2017-01-01

    Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.

  20. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, You; Yin, Fang-Fang; Ren, Lei, E-mail: lei.ren@duke.edu

    2015-08-15

    Purpose: Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam CT (CBCT) images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Methods: Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the “gold-standard” on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔD_min), maximum dose (ΔD_max), and mean dose (ΔD_mean), and the absolute deviations of prescription dose coverage (ΔV_100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Results: Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. 
For the digital phantom study, the average (± standard deviation) ΔD_min, ΔD_max, ΔD_mean, and ΔV_100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔD_min, ΔD_max, ΔD_mean, and ΔV_100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). Conclusions: MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
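    The ΔD deviation metrics used in this study can be sketched as follows; this is a simplified illustration that assumes the PTV doses are available as flat arrays of voxel values (not the study's actual evaluation code, which works on dose-volume data within the PTV contour):

```python
def _mean(values):
    return sum(values) / len(values)

def ptv_dose_deviations(test_dose, gold_dose, prescription):
    """Absolute deviations of minimum, maximum, and mean PTV dose between a
    test dose distribution and the gold standard, each normalized by the
    prescription dose (in the spirit of the Delta-D metrics)."""
    d_min = abs(min(test_dose) - min(gold_dose)) / prescription
    d_max = abs(max(test_dose) - max(gold_dose)) / prescription
    d_mean = abs(_mean(test_dose) - _mean(gold_dose)) / prescription
    return d_min, d_max, d_mean
```

    Applied once per reconstruction (planned, FDK, MM-FD) against the gold-standard dose, smaller values indicate a more faithful on-board dose estimate.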

  1. Intelligent Ensemble Forecasting System of Stock Market Fluctuations Based on Symetric and Asymetric Wavelet Functions

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim; Boukadoum, Mounir

    2015-08-01

    We present a new ensemble system for stock market returns prediction where continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) for processing CWT-based coefficients, determining the optimal ensemble weights, and providing final forecasts. Particle swarm optimization (PSO) is used for finding optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: the Hang Seng, KOSPI, and Taiwan stock markets. Three statistical metrics were used to evaluate forecasting accuracy: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute deviation (MAD). Experimental results showed that our proposed ensemble system outperformed the individual CWT-ANN models, each with a different wavelet function. In addition, the proposed ensemble system outperformed the conventional autoregressive moving average process. As a result, the proposed ensemble system is suitable to capture symmetry/asymmetry in financial data fluctuations for better prediction accuracy.
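    The three evaluation metrics can be computed as below; note that definitions of MAD vary in the forecasting literature, and this sketch takes it as the mean absolute deviation of the errors about their mean:

```python
import math

def forecast_errors(actual, predicted):
    """MAE, RMSE, and MAD of forecast errors for paired series."""
    errors = [a - p for a, p in zip(actual, predicted)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_err = sum(errors) / n
    mad = sum(abs(e - mean_err) for e in errors) / n  # deviation about the mean
    return mae, rmse, mad
```

    RMSE penalizes large errors more heavily than MAE, while MAD summarizes the spread of the errors; reporting all three guards against a model that looks good on only one criterion.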

  2. Human sensitivity to vertical self-motion.

    PubMed

    Nesti, Alessandro; Barnett-Cowan, Michael; Macneilage, Paul R; Bülthoff, Heinrich H

    2014-01-01

    Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s^2. Overall vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws' exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber's Law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.
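    The power-law description of the thresholds can be made concrete; a minimal sketch (parameter names illustrative), where an exponent of 1 recovers Weber's Law and the reported exponents of 0.60 and 0.42 imply slower-than-Weber growth:

```python
def differential_threshold(intensity: float, k: float, p: float) -> float:
    """Power-law model of the differential threshold: threshold = k * I**p.
    p = 1 recovers Weber's Law (threshold proportional to intensity);
    p < 1 means thresholds grow less than proportionally at high intensity."""
    return k * intensity ** p
```

    For instance, doubling the stimulus intensity raises the upward-motion threshold by only a factor of 2^0.60 ≈ 1.52 rather than the factor of 2 Weber's Law would predict.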

  3. Control of the interaction strength of photonic molecules by nanometer precise 3D fabrication.

    PubMed

    Rawlings, Colin D; Zientek, Michal; Spieser, Martin; Urbonas, Darius; Stöferle, Thilo; Mahrt, Rainer F; Lisunova, Yuliya; Brugger, Juergen; Duerig, Urs; Knoll, Armin W

    2017-11-28

    Applications for high resolution 3D profiles, so-called grayscale lithography, exist in diverse fields such as optics, nanofluidics and tribology. All of them require the fabrication of patterns with reliable absolute patterning depth independent of the substrate location and target materials. Here we present a complete patterning and pattern-transfer solution based on thermal scanning probe lithography (t-SPL) and dry etching. We demonstrate the fabrication of 3D profiles in silicon and silicon oxide with nanometer scale accuracy of absolute depth levels. An accuracy of less than 1 nm standard deviation in t-SPL is achieved by providing an accurate physical model of the writing process to a model-based implementation of a closed-loop lithography process. For transferring the pattern to a target substrate, we optimized the etch process and demonstrate linear amplification of grayscale patterns into silicon and silicon oxide with amplification ratios of ∼6 and ∼1, respectively. The performance of the entire process is demonstrated by manufacturing photonic molecules of desired interaction strength. Excellent agreement of fabricated and simulated structures has been achieved.

  4. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into the local linear regression. Under the assumption that the error process is an auto-regressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and the profile least squares techniques. We further propose the SCAD penalized profile least squares method to determine the order of the auto-regressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independent error structure. We illustrate the proposed methodology by an analysis of a real data set.
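    The SCAD penalty used for order selection here has a standard closed form (Fan and Li, 2001); a minimal sketch with the conventional default a = 3.7 (function name illustrative):

```python
def scad_penalty(theta: float, lam: float, a: float = 3.7) -> float:
    """SCAD penalty of Fan and Li (2001) evaluated at |theta|.
    Linear (like the lasso) near zero, quadratic blending in between,
    and constant beyond a*lam, so large coefficients are not shrunk."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t * t - 2 * a * lam * t + lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2
```

    The flat tail beyond a·λ is what distinguishes SCAD from the lasso: coefficients large enough to be clearly nonzero incur no additional shrinkage, which underlies its oracle property.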

  5. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with folded concave penalty. Despite that their global solutions entail desirable statistical properties, there lack optimization techniques that guarantee global optimality in a general setting. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates us in developing mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as the mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in literature in terms of solution quality. PMID:27141126

  6. Considerations and Changes in the Evaluation, Management, and Outcomes in the Management of Diverticular Disease: The Diagnosis, Pathology, and Treatment of Diverticular Colitis.

    PubMed

    Kucejko, Robert J; Poggio, Juan L

    2018-07-01

    Diverticular colitis, also known as segmental colitis associated with diverticulosis, is a colonic inflammatory disorder on the spectrum of inflammatory bowel disease (IBD). The disease consists of macroscopic and microscopic inflammation affecting inter-diverticular mucosa, sparing peri-diverticular mucosa, with inflammation confined to the descending and sigmoid colon. The disease likely arises from the altered immune response of an individual, genetically susceptible to the IBD spectrum of diseases. Patients with segmental colitis associated with diverticulosis (SCAD) are typically older, and likely represent a subgroup of IBD-susceptible patients who lacked an environmental trigger until that point in their life. Most patients remain in remission with initial treatments of mesalamine or topical steroids, and maintenance mesalamine afterwards. Only the most severe form of the disease necessitates immunomodulatory therapy and the consideration of surgery.

  7. Descriptive and geoenvironmental model for Co-Cu-Au deposits in metasedimentary rocks: Chapter G in Mineral deposit models for resource assessment

    USGS Publications Warehouse

    Slack, John F.; Johnson, Craig A.; Causey, J. Douglas; Lund, Karen; Schulz, Klaus J.; Gray, John E.; Eppinger, Robert G.; Slack, John F.

    2013-01-01

    Additional geologically and compositionally similar deposits are known, but have average Co grades less than 0.1 percent. Most of these deposits contain cobalt-rich pyrite and lack appreciable amounts of distinct Co sulfide and (or) sulfarsenide minerals. Such deposits are not discussed in detail in the following sections, but they may be relevant to the descriptive and genetic models presented below. Examples include the Scadding Au-Co-Cu deposit in Ontario, Canada; the Vähäjoki Co-Cu-Au deposit in Finland; the Tuolugou Co-Au deposit in Qinghai Province, China; the Lala Co-Cu-U-REE deposit in Sichuan Province, China; the Guelb Moghrein Cu-Au-Co deposit in Mauritania; and the Great Australia Co-Cu, Greenmount Cu-Au-Co, and Monakoff Cu-Au-Co-U-Ag deposits in Queensland, Australia. Detailed information on these deposits is presented in appendix 2.

  8. Clinical and neuropathological picture of ethylmalonic aciduria - diagnostic dilemma.

    PubMed

    Jamroz, Ewa; Paprocka, Justyna; Adamek, Dariusz; Pytel, Justyna; Szczechowska, Katarzyna; Grabska, Natalia; Malec, Michalina; Głuszkiewicz, Ewa; Daab, Michał; Wodołażski, Anatolij

    2011-01-01

    Increased ethylmalonic acid (EMA) in urine is a non-specific finding, and is observed in a number of inborn errors of metabolism, as well as in individuals who carry one of two common polymorphisms identified in the SCAD coding region. The authors present an 8-month-old girl with a suspicion of neuroinfection, although the clinical presentation led to diagnosis of ethylmalonic aciduria. From the neuropathological point of view the most remarkable changes were observed in the brain cortex, which was diffusely damaged practically in all regions of the brain. Of note, the most severe destruction was observed in the deepest regions of the sulci. The cortex of the affected regions showed no normal stratification and its structure was almost totally replaced by a form of "granulation tissue" with a markedly increased number of capillaries. To the authors' knowledge this is the first clinical report of ethylmalonic aciduria with brain autopsy findings.

  9. Relative populations of excited levels within the ground configuration of Si-like Cu, Zn, Ge and Se ions

    NASA Technical Reports Server (NTRS)

    Datla, R. U.; Roberts, J. R.; Bhatia, A. K.

    1991-01-01

    Populations of 3p2 1D2, 3P1, 3P2 levels in Si-like Cu, Zn, Ge, and Se ions have been deduced from the measurements of absolute intensities of magnetic dipole transitions within the 3s2 3p2 ground configuration. The measured population ratios are compared with theoretical calculations based on the distorted-wave approximation for the electron collisions and a semiclassical approximation for the proton collisions. The observed deviation from the statistical distribution for the excited-level populations within the ground configuration along the silicon isoelectronic sequence is in agreement with theoretical prediction.

  10. Corresponding states correlation for temperature dependent surface tension of normal saturated liquids

    NASA Astrophysics Data System (ADS)

    Yi, Huili; Tian, Jianxiang

    2014-07-01

    A new simple correlation based on the principle of corresponding states is proposed to estimate the temperature-dependent surface tension of normal saturated liquids. The correlation is linear and holds strongly for 41 normal saturated liquids. The new correlation requires only the triple-point temperature, triple-point surface tension and critical-point temperature as input and is able to represent the experimental surface tension data for these 41 liquids with a mean absolute percent deviation of 1.26% in the temperature regions considered. For most substances, the temperature range extends from the triple-point temperature to beyond the boiling temperature.

  11. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
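    The core reduction can be sketched on scalar pixel values (function names are illustrative): taking logarithms turns the multiplicative model f = u·n into the additive model log f = log u + log n, after which dictionary-based additive denoisers apply, and the result is mapped back by exponentiation:

```python
import math

def to_additive(noisy_pixel: float) -> float:
    """Log transform: f = u * n (multiplicative noise) becomes
    log f = log u + log n (additive noise), the domain where the
    learned-dictionary denoiser operates."""
    return math.log(noisy_pixel)

def from_additive(denoised_log: float) -> float:
    """Invert the transform after denoising in the log domain."""
    return math.exp(denoised_log)
```

    In practice this is applied elementwise to the image before dictionary learning, with the denoised log-image exponentiated at the end; the transform requires strictly positive pixel values, which multiplicative (e.g., speckle) noise models assume.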

  12. Experimental Evidence for Quantum Tunneling Time.

    PubMed

    Camus, Nicolas; Yakaboylu, Enderalp; Fechner, Lutz; Klaiber, Michael; Laux, Martin; Mi, Yonghao; Hatsagortsyan, Karen Z; Pfeifer, Thomas; Keitel, Christoph H; Moshammer, Robert

    2017-07-14

    The first hundred attoseconds of the electron dynamics during strong field tunneling ionization are investigated. We quantify theoretically how the electron's classical trajectories in the continuum emerge from the tunneling process and test the results with those achieved in parallel from attoclock measurements. An especially high sensitivity on the tunneling barrier is accomplished here by comparing the momentum distributions of two atomic species of slightly deviating atomic potentials (argon and krypton) being ionized under absolutely identical conditions with near-infrared laser pulses (1300 nm). The agreement between experiment and theory provides clear evidence for a nonzero tunneling time delay and a nonvanishing longitudinal momentum of the electron at the "tunnel exit."

  13. Experimental Evidence for Quantum Tunneling Time

    NASA Astrophysics Data System (ADS)

    Camus, Nicolas; Yakaboylu, Enderalp; Fechner, Lutz; Klaiber, Michael; Laux, Martin; Mi, Yonghao; Hatsagortsyan, Karen Z.; Pfeifer, Thomas; Keitel, Christoph H.; Moshammer, Robert

    2017-07-01

    The first hundred attoseconds of the electron dynamics during strong field tunneling ionization are investigated. We quantify theoretically how the electron's classical trajectories in the continuum emerge from the tunneling process and test the results with those achieved in parallel from attoclock measurements. An especially high sensitivity on the tunneling barrier is accomplished here by comparing the momentum distributions of two atomic species of slightly deviating atomic potentials (argon and krypton) being ionized under absolutely identical conditions with near-infrared laser pulses (1300 nm). The agreement between experiment and theory provides clear evidence for a nonzero tunneling time delay and a nonvanishing longitudinal momentum of the electron at the "tunnel exit."

  14. Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion

    NASA Astrophysics Data System (ADS)

    Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar

    2014-06-01

    In this manuscript, we put forward a standard based on a fuzzy decision-making criterion to examine current substitution boxes and study their strengths and weaknesses in order to decide their appropriateness in image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean of absolute deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes of these analyses are then examined further, and a fuzzy soft set decision-making criterion is used to decide the suitability of an S-box for image encryption applications.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manaa, M. Riad; Fried, Laurence E.; Kuo, I-Feng W.

    We report gas-phase enthalpies of formation for the set of energetic molecules NTO, DADE, LLM-105, TNT, RDX, TATB, HMX, and PETN using the G2, G3, G4, and ccCA-PS3 quantum composite methods. The calculations for HMX and PETN represent the largest molecules attempted with these methods to date. G3 and G4 calculations are typically close to one another, with a larger difference found between these methods and ccCA-PS3. Furthermore, there is significant uncertainty in the experimental values; the mean absolute deviations between the average experimental values and the calculations are 12, 6, 7, and 3 kcal/mol for G2, G3, G4, and ccCA-PS3, respectively.

  16. Heavy Ozone Enrichments from ATMOS Infrared Solar Spectra

    NASA Technical Reports Server (NTRS)

    Irion, F. W.; Gunson, M. R.; Rinsland, C. P.; Yung, Y. L.; Abrams, M. C.; Chang, A. Y.; Goldman, A.

    1996-01-01

    Vertical enrichment profiles of stratospheric O-16O-16O-18 and O-16O-18O-16 (hereafter referred to as (668)O3 and (686)O3 respectively) have been derived from space-based solar occultation spectra recorded at 0.01 cm(exp-1) resolution by the ATMOS (Atmospheric Trace MOlecule Spectroscopy) Fourier transform infrared (FTIR) spectrometer. The observations, made during the Spacelab 3 and ATLAS-1, -2, and -3 shuttle missions, cover polar, mid-latitude and tropical regions between 26 and 2.6 mb inclusive (approximately 25 to 41 km). Average enrichments, weighted by molecular (48)O3 density, of (15 +/- 6)% were found for (668)O3 and (10 +/- 7)% for (686)O3. Defining the mixing ratio of (50)O3 as the sum of those for (668)O3 and (686)O3, an enrichment of (13 +/- 5)% was found for (50)O3 (1 sigma standard deviation). No latitudinal or vertical gradients were found outside this standard deviation. From a series of ground-based measurements by the ATMOS instrument at Table Mountain, California (34.4 deg N), an average total column (668)O3 enrichment of (17 +/- 4)% (1 sigma standard deviation) was determined, with no significant seasonal variation discernible. Possible biases in the spectral intensities that affect the determination of absolute enrichments are discussed.
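    Isotopologue enrichment is conventionally expressed relative to a standard ratio; a minimal sketch under that convention (not necessarily the exact normalization used in the ATMOS analysis):

```python
def enrichment_percent(ratio_observed: float, ratio_standard: float) -> float:
    """Heavy-isotopologue enrichment in percent, relative to a standard
    isotopologue ratio: delta = (R_obs / R_std - 1) * 100.
    A value of 15 means the heavy species is 15% more abundant than the
    standard isotopic abundances would predict."""
    return (ratio_observed / ratio_standard - 1.0) * 100.0
```

    Here R would be the retrieved mixing ratio of (668)O3 or (686)O3 relative to (48)O3, and the standard ratio follows from the bulk O-18 abundance.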

  17. Dynamic Modeling and Interactive Performance of PARM: A Parallel Upper-Limb Rehabilitation Robot Using Impedance Control for Patients after Stroke.

    PubMed

    Guang, Hui; Ji, Linhong; Shi, Yingying; Misgeld, Berno J E

    2018-01-01

    Robot-assisted therapy has been demonstrated to be effective in improving limb function and even activities of daily living for patients after stroke. This paper presents an interactive upper-limb rehabilitation robot with a parallel mechanism and an isometric screen embedded in the platform to display trajectories. In the dynamic modeling for impedance control, the effects of friction and inertia are reduced by introducing the principle of virtual work and the derivative of the Jacobian matrix. To achieve assist-as-needed impedance control for arbitrary trajectories, a strategy based on orthogonal deviations is proposed. Simulations and experiments were performed to validate the dynamic modeling and impedance control. Besides, to investigate the influence of the impedance in practice, a subject participated in experiments and performed two types of movements with the robot, that is, rectilinear and circular movements, under four conditions, that is, with/without resistance or impedance, respectively. The results showed that the impedance and resistance affected both the mean absolute error and standard deviation of movements and also demonstrated significant differences between movements with/without impedance and resistance (p < 0.001). Furthermore, the error patterns were discussed, which suggested that the impedance environment was capable of alleviating movement deviations by compensating the synergetic inadequacy between the shoulder and elbow joints.

  18. Dynamic Modeling and Interactive Performance of PARM: A Parallel Upper-Limb Rehabilitation Robot Using Impedance Control for Patients after Stroke

    PubMed Central

    Shi, Yingying; Misgeld, Berno J. E.

    2018-01-01

    Robot-assisted therapy has been demonstrated to be effective in improving limb function and even activities of daily living for patients after stroke. This paper presents an interactive upper-limb rehabilitation robot with a parallel mechanism and an isometric screen embedded in the platform to display trajectories. In the dynamic modeling for impedance control, the effects of friction and inertia are reduced by introducing the principle of virtual work and the derivative of the Jacobian matrix. To achieve assist-as-needed impedance control for arbitrary trajectories, a strategy based on orthogonal deviations is proposed. Simulations and experiments were performed to validate the dynamic modeling and impedance control. Besides, to investigate the influence of the impedance in practice, a subject participated in experiments and performed two types of movements with the robot, that is, rectilinear and circular movements, under four conditions, that is, with/without resistance or impedance, respectively. The results showed that the impedance and resistance affected both the mean absolute error and standard deviation of movements and also demonstrated significant differences between movements with/without impedance and resistance (p < 0.001). Furthermore, the error patterns were discussed, which suggested that the impedance environment was capable of alleviating movement deviations by compensating the synergetic inadequacy between the shoulder and elbow joints. PMID:29850004

  19. Are Study and Journal Characteristics Reliable Indicators of "Truth" in Imaging Research?

    PubMed

    Frank, Robert A; McInnes, Matthew D F; Levine, Deborah; Kressel, Herbert Y; Jesurum, Julia S; Petrcich, William; McGrath, Trevor A; Bossuyt, Patrick M

    2018-04-01

    Purpose To evaluate whether journal-level variables (impact factor, cited half-life, and Standards for Reporting of Diagnostic Accuracy Studies [STARD] endorsement) and study-level variables (citation rate, timing of publication, and order of publication) are associated with the distance between primary study results and summary estimates from meta-analyses. Materials and Methods MEDLINE was searched for meta-analyses of imaging diagnostic accuracy studies, published from January 2005 to April 2016. Data on journal-level and primary-study variables were extracted for each meta-analysis. Primary studies were dichotomized by variable as first versus subsequent publication, publication before versus after STARD introduction, STARD endorsement, or by median split. The mean absolute deviation of primary study estimates from the corresponding summary estimates for sensitivity and specificity was compared between groups. Means and confidence intervals were obtained by using bootstrap resampling; P values were calculated by using a t test. Results Ninety-eight meta-analyses summarizing 1458 primary studies met the inclusion criteria. There was substantial variability, but no significant differences, in deviations from the summary estimate between paired groups (P > .0041 in all comparisons). The largest difference found was in mean deviation for sensitivity, which was observed for publication timing, where studies published first on a topic demonstrated a mean deviation that was 2.5 percentage points smaller than subsequently published studies (P = .005). For journal-level factors, the greatest difference found (1.8 percentage points; P = .088) was in mean deviation for sensitivity in journals with impact factors above the median compared with those below the median. 
Conclusion Journal- and study-level variables considered important when evaluating diagnostic accuracy information to guide clinical decisions are not systematically associated with distance from the truth; critical appraisal of individual articles is recommended. © RSNA, 2017 Online supplemental material is available for this article.

  20. Deployment and evaluation of a dual-sensor autofocusing method for on-machine measurement of patterns of small holes on freeform surfaces.

    PubMed

    Chen, Xiaomei; Longstaff, Andrew; Fletcher, Simon; Myers, Alan

    2014-04-01

This paper presents and evaluates an active dual-sensor autofocusing system that combines an optical vision sensor and a tactile probe for autofocusing on arrays of small holes on freeform surfaces. The system has been tested on a two-axis test rig and then integrated onto a three-axis computer numerical control (CNC) milling machine, where the aim is to rapidly and controllably measure the hole position errors while the part is still on the machine. The principle of operation is for the tactile probe to locate the nominal positions of holes, and the optical vision sensor follows to focus and capture the images of the holes. The images are then processed to provide hole position measurement. In this paper, the autofocusing deviations are analyzed. First, the deviations caused by the geometric errors of the axes on which the dual-sensor unit is deployed are estimated to be 11 μm when deployed on a test rig and 7 μm on the CNC machine tool. Subsequently, the autofocusing deviations caused by the interaction of the tactile probe, surface, and small hole are mathematically analyzed and evaluated. The deviations are a result of the tactile probe radius, the curvatures at the positions where small holes are drilled on the freeform surface, and the effect of the position error of the hole on focusing. An example case study is provided for the measurement of a pattern of small holes on an elliptical cylinder on the two machines. The absolute sum of the autofocusing deviations is 118 μm on the test rig and 144 μm on the machine tool. This is much less than the 500 μm depth of field of the optical microscope. Therefore, the method is capable of capturing a group of clear images of the small holes on this workpiece in either deployment.

  1. Central X-ray beam correction of radiographic acetabular cup measurement after THA: an experimental study.

    PubMed

    Schwarz, T; Weber, M; Wörner, M; Renkawitz, T; Grifka, J; Craiovan, B

    2017-05-01

Accurate assessment of cup orientation on postoperative radiographs is essential for evaluating outcome after THA. However, accuracy is impeded by the deviation of the central X-ray beam in relation to the cup and the impossibility of measuring retroversion on standard pelvic radiographs. In an experimental trial, we built an artificial cup holder enabling the setting of different angles of anatomical anteversion and inclination. Twelve different cup orientations were investigated by three examiners. After comparing the two methods for radiographic measurement of the cup position developed by Lewinnek and Widmer, we showed how to differentiate between anteversion and retroversion in each cup position by using a second plane. To show the effect of the central beam offset on the cup, we X-rayed a defined cup position using a multidirectional central beam offset. According to Murray's definition of anteversion and inclination, we created a novel corrective procedure to balance measurement errors caused by deviation of the central beam. Measurement of the 12 different cup positions with Lewinnek's method yielded a mean deviation of [Formula: see text] (95 % CI 1.3-2.3) from the original cup anteversion. The respective deviation with the Widmer/Liaw method was [Formula: see text] (95 % CI 2.4-4.0). In each case, retroversion could be differentiated from anteversion with a second radiograph. Because of the multidirectional central beam offset ([Formula: see text] cm) from the acetabular cup in the cup holder ([Formula: see text] anteversion and [Formula: see text] inclination), the mean absolute difference for anteversion was [Formula: see text] (range [Formula: see text] to [Formula: see text]) and [Formula: see text] (range [Formula: see text] to [Formula: see text]) for inclination. 
The application of our novel mathematical correction of the central beam offset reduced deviation to a mean difference of [Formula: see text] for anteversion and [Formula: see text] for inclination. This novel calculation for central beam offset correction enables highly accurate measurement of the cup position.

  2. Effect of Examiner Experience and Technique on the Alternate Cover Test

    PubMed Central

    Anderson, Heather A.; Manny, Ruth E.; Cotter, Susan A.; Mitchell, G. Lynn; Irani, Jasmine A.

    2013-01-01

    Purpose To compare the repeatability of the alternate cover test between experienced and inexperienced examiners and the effects of dissociation time and examiner bias. Methods Two sites each had an experienced examiner train 10 subjects (inexperienced examiners) to perform short and long dissociation time alternate cover test protocols at near. Each site conducted testing sessions with an examiner triad (experienced examiner and two inexperienced examiners) who were masked to each other’s results. Each triad performed the alternate cover test on 24 patients using both dissociation protocols. In an attempt to introduce bias, each of the paired inexperienced examiners was given a different graph of phoria distribution for the general population. Analysis techniques that adjust for correlations introduced when multiple measurements are obtained on the same patient were used to investigate the effect of examiner and dissociation time on each outcome. Results The range of measured deviations spanned 27.5 prism diopters (Δ) base-in to 17.5Δ base-out. The absolute mean difference between experienced and inexperienced examiners was 2.28 ± 2.4Δ and at least 60% of differences were ≤2Δ. Larger deviations were measured with the long dissociation protocol for both experienced and inexperienced examiners (mean difference range = 1.17 to 2.14Δ, p < 0.0001). The percentage of measured small deviations (2Δ base-out to 2Δ base-in) did not differ between inexperienced examiners biased with the narrow vs. wide theoretical distributions (p = 0.41). The magnitude and direction of the deviation had no effect on the size of the differences obtained with different examiners or dissociation times. Conclusions Although inexperienced examiners differed significantly from experienced examiners, most differences were <2Δ suggesting good reliability of inexperienced examiners’ measurements. 
Examiner bias did not have a substantial effect on inexperienced examiner measurements; however, increased dissociation resulted in larger measured deviations for all examiners. PMID:20125058

  3. Big data driven cycle time parallel prediction for production planning in wafer manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris

    2018-07-01

Cycle time forecasting (CTF) is one of the most crucial issues for production planning to keep high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots with large datasets. First, a density-peak-based radial basis function network (DP-RBFN) is designed to forecast the CT with the diverse and agglomerative CT data. Second, a network learning method based on a clustering technique is proposed to determine the density peaks. Third, a parallel computing approach for network training is proposed in order to speed up the training process with large-scale CT data. Finally, an experiment on an SWFS is presented, which demonstrates that the proposed CTF system can not only speed up the training process of the model but also outperform the radial basis function network, the back-propagation network and multivariate-regression-based CTF methods in terms of the mean absolute deviation and standard deviation.
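The paper's DP-RBFN involves density-peak clustering and parallel training, which are beyond a short sketch. A minimal scalar radial basis function network with fixed Gaussian centers (stand-ins for the density peaks) and a least-squares readout might look like the following; all names and numbers are illustrative assumptions, not the authors' implementation.

```python
import math

def rbf_features(x, centers, width):
    """Gaussian radial-basis activations for a scalar input."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def solve(a, b):
    """Solve a small linear system a @ w = b by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[col][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][j] - f * m[col][j] for j in range(n + 1)]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_rbf(xs, ys, centers, width):
    """Least-squares readout weights via the normal equations: (Phi^T Phi) w = Phi^T y."""
    phi = [rbf_features(x, centers, width) for x in xs]
    k = len(centers)
    ata = [[sum(p[i] * p[j] for p in phi) for j in range(k)] for i in range(k)]
    atb = [sum(p[i] * y for p, y in zip(phi, ys)) for i in range(k)]
    return solve(ata, atb)

def predict(x, w, centers, width):
    return sum(wi * f for wi, f in zip(w, rbf_features(x, centers, width)))

# Recover known weights from noiseless synthetic data (a sanity check, not wafer data).
centers, width = [0.0, 5.0, 10.0], 3.0
xs = [float(x) for x in range(11)]
w_true = [1.0, 2.0, 3.0]
ys = [predict(x, w_true, centers, width) for x in xs]
w_fit = fit_rbf(xs, ys, centers, width)
```

The paper's parallel training step would distribute the accumulation of the normal-equation sums across workers; the readout solve itself is cheap.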

  4. Size- and shape-dependent surface thermodynamic properties of nanocrystals

    NASA Astrophysics Data System (ADS)

    Fu, Qingshan; Xue, Yongqiang; Cui, Zixiang

    2018-05-01

Surface thermodynamic properties are fundamental properties of nanocrystals that play a key role in their physical and chemical transformations. However, the quantitative influence of size and shape on the surface thermodynamic properties of nanocrystals remains ambiguous. Thus, by introducing interface variables into the Gibbs energy and combining the Young-Laplace equation, relations between the surface thermodynamic properties (surface Gibbs energy, surface enthalpy, surface entropy, surface energy and surface heat capacity) and the size of nanocrystals with different shapes were derived. Theoretical estimations of the orders of the surface thermodynamic properties of nanocrystals agree with available experimental values. Calculated results of the surface thermodynamic properties of Au, Bi and Al nanocrystals suggest that when r > 10 nm, the surface thermodynamic properties linearly vary with the reciprocal of particle size, and when r < 10 nm, the effect of particle size on the surface thermodynamic properties becomes greater and deviates from linear variation. For nanocrystals with identical equivalent diameter, the more the shape deviates from sphere, the larger the surface thermodynamic properties (in absolute value) are.

  5. Detection of Epileptic Seizure Event and Onset Using EEG

    PubMed Central

    Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul

    2014-01-01

    This study proposes a method of automatic detection of epileptic seizure event and onset using wavelet based features and certain statistical features without wavelet decomposition. Normal and epileptic EEG signals were classified using linear classifier. For seizure event detection, Bonn University EEG database has been used. Three types of EEG signals (EEG signal recorded from healthy volunteer with eye open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed and classification was done using linear classifier. The performance of classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. In the case of seizure onset detection, the database used is CHB-MIT scalp EEG database. Along with wavelet based features, interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. Classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
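The two non-wavelet features named in the abstract, interquartile range (IQR) and mean absolute deviation (MAD), are straightforward to compute per EEG window. A minimal sketch with a made-up sample window (the study's signals come from the Bonn and CHB-MIT databases):

```python
import statistics

def iqr(signal):
    """Interquartile range: Q3 - Q1, using inclusive quartiles."""
    q = statistics.quantiles(signal, n=4, method="inclusive")
    return q[2] - q[0]

def mean_abs_dev(signal):
    """Mean absolute deviation about the mean."""
    mu = statistics.fmean(signal)
    return statistics.fmean(abs(v - mu) for v in signal)

# Hypothetical EEG amplitude window (arbitrary units).
eeg_window = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0, 2.0, -6.0]
features = (iqr(eeg_window), mean_abs_dev(eeg_window))
```

In the paper these scalars join the wavelet-subband features as inputs to a linear classifier.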

  6. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    PubMed Central

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to that of the navigation camera, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  7. Unveiling the Dependence of Glass Transitions on Mixing Thermodynamics in Miscible Systems

    NASA Astrophysics Data System (ADS)

    Tu, Wenkang; Wang, Yunxi; Li, Xin; Zhang, Peng; Tian, Yongjun; Jin, Shaohua; Wang, Li-Min

    2015-02-01

The dependence of the glass transition in mixtures on mixing thermodynamics is examined by focusing on the enthalpy of mixing, ΔHmix, with changes in sign (positive vs. negative) and magnitude (small vs. large). The effects of positive and negative ΔHmix are demonstrated based on two isomeric systems of o- vs. m-methoxymethylbenzene (MMB) and o- vs. m-dibromobenzene (DBB) with comparably small absolute ΔHmix. Two opposite composition dependences of the glass transition temperature, Tg, are observed, with the MMB mixtures showing a distinct negative deviation from the ideal mixing rule and the DBB mixtures having a marginally positive deviation. The system of 1,2-propanediamine (12PDA) vs. propylene glycol (PG) with large and negative ΔHmix is compared with the systems of small ΔHmix, and a considerably positive Tg shift is seen. Models involving the properties of pure components such as Tg, the glass transition heat capacity increment, ΔCp, and density, ρ, do not account for the observed Tg shifts in these systems. In contrast, a linear correlation is revealed between ΔHmix and the maximum Tg shifts.

  8. Point-based and model-based geolocation analysis of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet

    2017-01-01

Airborne laser scanning (ALS) is one of the most effective remote sensing technologies providing precise three-dimensional (3-D) dense point clouds. A large-size ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by point-based and model-based comprehensive statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and normalized median absolute deviation were used as the accuracy indicators combined with the dependency of terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts, determined and partially corrected by merging the overlapping strips and by comparing the ALS with the TLS data, were found not to be negligible. The analysis of ALS DSM in relation to TLS DSM allowed us to determine the characteristics of the DSM in detail.
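The normalized median absolute deviation used here as an accuracy indicator is a standard robust spread estimate for DSM height errors, NMAD = 1.4826 · median(|Δh_i − median(Δh)|); the constant scales it to the standard deviation of a normal distribution. A minimal sketch with illustrative height errors (metres), including one outlier to show the robustness:

```python
import statistics

def nmad(height_errors):
    """Normalized median absolute deviation: 1.4826 * median(|dh - median(dh)|)."""
    med = statistics.median(height_errors)
    return 1.4826 * statistics.median(abs(e - med) for e in height_errors)

# Hypothetical DSM height errors (m); 5.0 is a gross outlier.
dh = [0.1, -0.2, 0.3, 0.0, 5.0]
robust = nmad(dh)
classical = statistics.stdev(dh)
```

Unlike the classical standard deviation, the NMAD is barely affected by the single outlier, which is why it is preferred for laser-scanning validation over rough terrain.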

  9. Host model uncertainties in aerosol radiative forcing estimates: results from the AeroCom Prescribed intercomparison study

    NASA Astrophysics Data System (ADS)

    Stier, P.; Schutgens, N. A. J.; Bellouin, N.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Ma, X.; Myhre, G.; Penner, J. E.; Randles, C. A.; Samset, B.; Schulz, M.; Takemura, T.; Yu, F.; Yu, H.; Zhou, C.

    2013-03-01

Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.47 Wm-2 and the inter-model standard deviation is 0.55 Wm-2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm-2, and the standard deviation increases to 1.01 Wm-2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm-2 (8%) clear-sky and 0.62 Wm-2 (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 Wm-2 in the AeroCom Direct Radiative Effect experiment. 
Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention.
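The diversity metric quoted throughout the abstract (inter-model standard deviation as a percentage of the mean forcing) is simple to reproduce. The per-model forcings below are made up for illustration, not AeroCom results.

```python
import statistics

def relative_std(forcings):
    """Inter-model standard deviation as a percentage of the absolute mean forcing."""
    mean = statistics.fmean(forcings)
    return 100.0 * statistics.stdev(forcings) / abs(mean)

# Hypothetical global-mean TOA forcings (W m^-2) from twelve models.
forcings = [-4.0, -4.2, -4.5, -4.8, -5.0, -3.9, -4.6, -4.3, -4.7, -4.4, -4.1, -4.9]
rsd = relative_std(forcings)
```

A small mean forcing with the same absolute spread inflates this percentage, which is why the partially absorbing case (mean near 1 W m^-2) shows a 97% relative standard deviation.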

  10. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. 
The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.

  11. Development of artificial neural network models based on experimental data of response surface methodology to establish the nutritional requirements of digestible lysine, methionine, and threonine in broiler chicks.

    PubMed

    Mehri, M

    2012-12-01

    An artificial neural network (ANN) approach was used to develop feed-forward multilayer perceptron models to estimate the nutritional requirements of digestible lysine (dLys), methionine (dMet), and threonine (dThr) in broiler chicks. Sixty data lines representing response of the broiler chicks during 3 to 16 d of age to dietary levels of dLys (0.88-1.32%), dMet (0.42-0.58%), and dThr (0.53-0.87%) were obtained from literature and used to train the networks. The prediction values of ANN were compared with those of response surface methodology to evaluate the fitness of these 2 methods. The models were tested using R(2), mean absolute deviation, mean absolute percentage error, and absolute average deviation. The random search algorithm was used to optimize the developed ANN models to estimate the optimal values of dietary dLys, dMet, and dThr. The ANN models were used to assess the relative importance of each dietary input on the bird performance using sensitivity analysis. The statistical evaluations revealed the higher accuracy of ANN to predict the bird performance compared with response surface methodology models. The optimization results showed that the maximum BW gain may be obtained with dietary levels of 1.11, 0.51, and 0.78% of dLys, dMet, and dThr, respectively. Minimum feed conversion ratio may be achieved with dietary levels of 1.13, 0.54, 0.78% of dLys, dMet, and dThr, respectively. The sensitivity analysis on the models indicated that dietary Lys is the most important variable in the growth performance of the broiler chicks, followed by dietary Thr and Met. The results of this research revealed that the experimental data of a response-surface-methodology design could be successfully used to develop the well-designed ANN for pattern recognition of bird growth and optimization of nutritional requirements. 
The comparison between the 2 methods also showed that the statistical methods may have little effect on the ideal ratios of dMet and dThr to dLys in broiler chicks using multivariate optimization.
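The four evaluation statistics named in the abstract (R², mean absolute deviation, mean absolute percentage error, absolute average deviation) can be sketched as follows. Definitions of AAD vary between papers; the one below (total absolute error relative to the total observed response) is one common convention, not necessarily the authors'. The observed/predicted values are hypothetical.

```python
import statistics

def fit_metrics(y_true, y_pred):
    """Common goodness-of-fit metrics for comparing model predictions to observations."""
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    mean_t = statistics.fmean(y_true)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MAD": statistics.fmean(errors),  # mean absolute deviation
        "MAPE": 100.0 * statistics.fmean(e / abs(t) for e, t in zip(errors, y_true)),
        "AAD": 100.0 * sum(errors) / sum(y_true),  # one common AAD convention
    }

# e.g. observed vs. ANN-predicted BW gain (hypothetical values, g/bird).
m = fit_metrics([410.0, 452.0, 438.0, 465.0], [405.0, 460.0, 441.0, 458.0])
```

Lower MAD/MAPE/AAD and higher R² would favor the ANN over the response-surface model, as reported.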

  12. Use of nonlinear models for describing scrotal circumference growth in Guzerat bulls raised under grazing conditions.

    PubMed

    Loaiza-Echeverri, A M; Bergmann, J A G; Toral, F L B; Osorio, J P; Carmo, A S; Mendonça, L F; Moustacas, V S; Henry, M

    2013-03-15

    The objective was to use various nonlinear models to describe scrotal circumference (SC) growth in Guzerat bulls on three farms in the state of Minas Gerais, Brazil. The nonlinear models were: Brody, Logistic, Gompertz, Richards, Von Bertalanffy, and Tanaka, where parameter A is the estimated testis size at maturity, B is the integration constant, k is a maturating index and, for the Richards and Tanaka models, m determines the inflection point. In Tanaka, A is an indefinite size of the testis, and B and k adjust the shape and inclination of the curve. A total of 7410 SC records were obtained every 3 months from 1034 bulls with ages varying between 2 and 69 months (<240 days of age = 159; 241-365 days = 451; 366-550 days = 1443; 551-730 days = 1705; and >731 days = 3652 SC measurements). Goodness of fit was evaluated by coefficients of determination (R(2)), error sum of squares, average prediction error (APE), and mean absolute deviation. The Richards model did not reach the convergence criterion. The R(2) were similar for all models (0.68-0.69). The error sum of squares was lowest for the Tanaka model. All models fit the SC data poorly in the early and late periods. Logistic was the model which best estimated SC in the early phase (based on APE and mean absolute deviation). The Tanaka and Logistic models had the lowest APE between 300 and 1600 days of age. The Logistic model was chosen for analysis of the environmental influence on parameters A and k. Based on absolute growth rate, SC increased from 0.019 cm/d, peaking at 0.025 cm/d between 318 and 435 days of age. Farm, year, and season of birth significantly affected size of adult SC and SC growth rate. An increase in SC adult size (parameter A) was accompanied by decreased SC growth rate (parameter k). In conclusion, SC growth in Guzerat bulls was characterized by an accelerated growth phase, followed by decreased growth; this was best represented by the Logistic model. 
The inflection point occurred at approximately 376 days of age (mean SC of 17.9 cm). We inferred that early selection of testicular size might result in smaller testes at maturity. Copyright © 2013 Elsevier Inc. All rights reserved.
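In the common three-parameter form of the Logistic model used for growth curves, SC(t) = A / (1 + B·e^(−kt)), the inflection point falls at t* = ln(B)/k, where SC = A/2. A minimal sketch; the parameter values are illustrative, not the fitted estimates from this study.

```python
import math

def logistic_sc(t, a, b, k):
    """Logistic growth curve: SC(t) = A / (1 + B * exp(-k * t))."""
    return a / (1.0 + b * math.exp(-k * t))

def inflection(a, b, k):
    """Inflection point of the logistic: t* = ln(B)/k, where SC = A/2."""
    return math.log(b) / k, a / 2.0

# Illustrative parameters: A in cm (adult scrotal circumference), t in days.
a, b, k = 35.8, 20.0, 0.008
t_star, sc_star = inflection(a, b, k)  # t_star ~ 374.5 days for these values
```

Absolute growth rate peaks at the inflection, matching the abstract's description of an accelerated phase followed by decreasing growth.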

  13. Laboratory validation of an in-home method for assessing circadian phase using dim light melatonin onset (DLMO).

    PubMed

    Pullman, Rebecca E; Roepke, Stephanie E; Duffy, Jeanne F

    2012-06-01

To determine whether an accurate circadian phase assessment could be obtained from saliva samples collected by patients in their home. Twenty-four individuals with a complaint of sleep initiation or sleep maintenance difficulty were studied for two evenings. Each participant received instructions for collecting eight hourly saliva samples in dim light at home. On the following evening they spent 9 h in a laboratory room with controlled dim (<20 lux) light where hourly saliva samples were collected. Circadian phase of dim light melatonin onset (DLMO) was determined using both an absolute threshold (3 pg ml(-1)) and a relative threshold (two standard deviations above the mean of three baseline values). Neither threshold method worked well for one participant who was a "low-secretor". In four cases the participants' in-lab melatonin levels rose much earlier or were much higher than their at-home levels, and one participant appeared to take the at-home samples out of order. Overall, the at-home and in-lab DLMO values were significantly correlated using both methods, and differed on average by 37 (±19) min using the absolute threshold and by 54 (±36) min using the relative threshold. The at-home assessment procedure was able to determine an accurate DLMO using an absolute threshold in 62.5% of the participants. Thus, an at-home procedure for assessing circadian phase could be practical for evaluating patients for circadian rhythm sleep disorders. Copyright © 2012 Elsevier B.V. All rights reserved.
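Both threshold rules reduce to finding where the hourly melatonin profile first crosses a level, typically by linear interpolation between samples. A minimal sketch with made-up hourly melatonin values (the exact interpolation scheme used by the authors is an assumption here):

```python
import statistics

def dlmo_time(times, melatonin, threshold):
    """First time the melatonin profile crosses the threshold, by linear interpolation.
    Returns None if the threshold is never reached."""
    for (t0, m0), (t1, m1) in zip(zip(times, melatonin), zip(times[1:], melatonin[1:])):
        if m0 < threshold <= m1:
            return t0 + (t1 - t0) * (threshold - m0) / (m1 - m0)
    return None

# Hypothetical hourly samples: hours after start of collection, melatonin in pg/ml.
times = [0, 1, 2, 3, 4, 5, 6, 7]
mel = [0.5, 0.8, 1.0, 1.2, 2.0, 6.0, 11.0, 15.0]

absolute = dlmo_time(times, mel, 3.0)  # fixed 3 pg/ml threshold
baseline = mel[:3]                     # first three (low) samples
rel_thr = statistics.fmean(baseline) + 2 * statistics.stdev(baseline)
relative = dlmo_time(times, mel, rel_thr)
```

The relative threshold sits lower for this profile and therefore yields an earlier DLMO, illustrating why the two methods can disagree by tens of minutes.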

  14. Using operations research to plan improvement of the transport of critically ill patients.

    PubMed

    Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter

    2013-01-01

    Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
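The Monte Carlo step described above (simulating total transport time as a sum of component intervals, then testing how constraining one component shifts the 75th percentile and standard deviation) can be sketched as follows. The interval distributions, parameters, and the 40-minute cap are all hypothetical, chosen only to mimic right-skewed durations.

```python
import random
import statistics

def simulate_total_times(interval_samplers, n=5000, seed=1):
    """Monte Carlo simulation: total time = sum of sampled component intervals."""
    rng = random.Random(seed)
    return [sum(sampler(rng) for sampler in interval_samplers) for _ in range(n)]

def p75(values):
    """75th percentile, the quality-improvement target used in the study."""
    return statistics.quantiles(values, n=4, method="inclusive")[2]

# Hypothetical component intervals (minutes); lognormal to reflect right skew.
baseline = [
    lambda r: r.lognormvariate(3.0, 0.5),  # dispatch and mobilization
    lambda r: r.lognormvariate(3.4, 0.6),  # time at sending hospital
    lambda r: r.lognormvariate(3.2, 0.4),  # travel
]
# Policy scenario: cap time at the sending hospital at 40 minutes.
capped = baseline[:1] + [lambda r: min(r.lognormvariate(3.4, 0.6), 40.0)] + baseline[2:]

t_base = simulate_total_times(baseline)
t_capped = simulate_total_times(capped)
```

Comparing `p75` and the standard deviation between the two scenarios mirrors the study's projected reductions from constraining time at hospitals.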

  15. Upper mantle anisotropy from long-period P polarization

    NASA Astrophysics Data System (ADS)

    Schulte-Pelkum, Vera; Masters, Guy; Shearer, Peter M.

    2001-10-01

    We introduce a method to infer upper mantle azimuthal anisotropy from the polarization, i.e., the direction of particle motion, of teleseismic long-period P onsets. The horizontal polarization of the initial P particle motion can deviate by >10° from the great circle azimuth from station to source despite a high degree of linearity of motion. Recent global isotropic three-dimensional mantle models predict effects that are an order of magnitude smaller than our observations. Stations within regional distances of each other show consistent azimuthal deviation patterns, while the deviations seem to be independent of source depth and near-source structure. We demonstrate that despite this receiver-side spatial coherence, our polarization data cannot be fit by a large-scale joint inversion for whole mantle structure. However, they can be reproduced by azimuthal anisotropy in the upper mantle and crust. Modeling with an anisotropic reflectivity code provides bounds on the magnitude and depth range of the anisotropy manifested in our data. Our method senses anisotropy within one wavelength (250 km) under the receiver. We compare our inferred fast directions of anisotropy to those obtained from Pn travel times and SKS splitting. The results of the comparison are consistent with azimuthal anisotropy situated in the uppermost mantle, with SKS results deviating from Pn and Ppol in some regions with probable additional deeper anisotropy. Generally, our fast directions are consistent with anisotropic alignment due to lithospheric deformation in tectonically active regions and to absolute plate motion in shield areas. Our data provide valuable additional constraints in regions where discrepancies between results from different methods exist since the effect we observe is local rather than cumulative as in the case of travel time anisotropy and shear wave splitting. Additionally, our measurements allow us to identify stations with incorrectly oriented horizontal components.

  16. Design, development and clinical validation of computer-aided surgical simulation system for streamlined orthognathic surgical planning.

    PubMed

    Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M; Teichgraeber, John F; Gateno, Jaime; Xia, James J

    2017-12-01

There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. When comparing the results of segmentation, virtual osteotomy and transformation achieved with AnatomicAligner to the ones achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformations were smaller than 0.01 mm with a SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as the ones generated by Mimics. We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be available freely to the broader clinical and research communities.

  17. Preconditioning of Interplanetary Space Due to Transient CME Disturbances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temmer, M.; Reiss, M. A.; Hofmeister, S. J.

    Interplanetary space is structured mainly by high-speed solar wind streams emanating from coronal holes and by transient disturbances such as coronal mass ejections (CMEs). While high-speed solar wind streams form a quasi-continuous outflow, CMEs abruptly disrupt this rather steady structure, causing large deviations from quiet solar wind conditions. For the first time, we quantify the duration of disturbed conditions (preconditioning) of interplanetary space caused by CMEs. To this aim, we investigate the plasma speed component of the solar wind and the impact of in situ detected interplanetary CMEs (ICMEs), compared against different background solar wind models (ESWF, WSA, persistence model) for the time range 2011–2015. We quantify, in terms of standard error measures, the deviations between modeled background solar wind speed and observed solar wind speed. Using the mean absolute error, we obtain an average deviation for quiet solar activity within a range of 75.1–83.1 km s⁻¹. Compared with this baseline level, periods within the ICME interval showed an increase of 18%–32% above the expected background, and the period of two days after the ICME displayed an increase of 9%–24%. We obtain a total duration of enhanced deviations of about three and up to six days after the ICME start, much longer than the average duration of an ICME disturbance itself (∼1.3 days), and conclude that interplanetary space needs ∼2–5 days to recover from the impact of ICMEs. These results have strong implications for studying CME propagation behavior and for space weather forecasting.
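    The mean absolute error comparison described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the hourly speed values are invented placeholders rather than ESWF/WSA model output or in situ data.

```python
# Sketch: mean absolute error (MAE) between a modeled background solar wind
# speed and observed speed, the error measure used to quantify preconditioning.
import numpy as np

def mae(model, obs):
    """Mean absolute error, in the same units as the inputs (km/s here)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.mean(np.abs(model - obs)))

# Hypothetical hourly speeds (km/s) for a quiet interval and an ICME interval.
quiet_model = [420, 430, 440, 450]
quiet_obs   = [400, 445, 430, 470]
icme_model  = [420, 430, 440, 450]
icme_obs    = [520, 560, 510, 480]

baseline = mae(quiet_model, quiet_obs)    # quiet-time deviation level
disturbed = mae(icme_model, icme_obs)     # deviation during the ICME interval
increase_pct = 100.0 * (disturbed - baseline) / baseline
print(baseline, disturbed, round(increase_pct, 1))
```

    Comparing the disturbed-interval MAE against the quiet baseline, as in the last line, is the same relative-increase measure reported above (18%–32% within the ICME interval).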

  18. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford's Law.

    PubMed

    Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador

    2018-05-09

    Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many naturally occurring sources of data. Testing deviations from NBL is used on many datasets as a screening tool for identifying data trustworthiness problems. This study compares publicly available waiting list (WL) data from Finland and Spain to test NBL as an instrument for flagging potential manipulation in WLs. Analysis of the frequency of first digits in Finnish and Spanish WLs to determine whether their distribution follows the pattern documented by NBL. Deviations from the expected first-digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fit first-digit NBL according to all statistical tests used (p=0.6519, χ² test). For the Spanish data, this hypothesis was rejected in all tests (p<0.0001, χ² test). Testing deviations from the NBL distribution can be a useful tool for identifying problems with WL data trustworthiness and signalling the need for further testing.
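    The first-digit test described above can be sketched with a Pearson χ² statistic against the Benford expectation P(d) = log₁₀(1 + 1/d). This is a minimal illustration on made-up numbers, not the Finnish or Spanish waiting-list data.

```python
# Sketch of a first-digit Newcomb-Benford test with Pearson's chi-squared
# statistic; compare the result against a chi-squared distribution with
# 8 degrees of freedom to get a p-value.
import math
from collections import Counter

def first_digit_counts(values):
    """Count of leading digits 1..9 across all nonzero values."""
    counts = Counter(int(str(abs(v)).lstrip("0.")[0]) for v in values if v)
    return [counts.get(d, 0) for d in range(1, 10)]

def benford_chi2(values):
    """Chi-squared statistic of observed first digits vs. Benford's law."""
    observed = first_digit_counts(values)
    n = sum(observed)
    expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical sample whose leading digits skew toward 1 and 2, as NBL predicts.
sample = [1, 12, 13, 19, 104, 2, 28, 31, 4, 57, 6, 180, 11, 23, 15, 7, 1, 2]
stat = benford_chi2(sample)
print(round(stat, 2))
```

    A small χ² (relative to the 8-degree-of-freedom critical value) is consistent with Benford-distributed data; a large value, as reported for the Spanish series, flags deviation.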

  19. Design, development and clinical validation of computer-aided surgical simulation system for streamlined orthognathic surgical planning

    PubMed Central

    Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M.; Teichgraeber, John F.; Gateno, Jaime

    2017-01-01

    Purpose There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. Methods The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. Result When comparing the results of segmentation, virtual osteotomy and transformation achieved with AnatomicAligner to those achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All average surface deviations between the two models after virtual osteotomy and transformation were smaller than 0.01 mm with an SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as that of splints generated by Mimics. Conclusion We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be freely available to the broader clinical and research communities. PMID:28432489

  20. Uncertainty Analysis of Downscaled CMIP5 Precipitation Data for Louisiana, USA

    NASA Astrophysics Data System (ADS)

    Sumi, S. J.; Tamanna, M.; Chivoiu, B.; Habib, E. H.

    2014-12-01

    The downscaled CMIP3 and CMIP5 Climate and Hydrology Projections dataset contains fine-spatial-resolution translations of climate projections over the contiguous United States developed using two downscaling techniques (monthly Bias Correction Spatial Disaggregation (BCSD) and daily Bias Correction Constructed Analogs (BCCA)). The objective of this study is to assess the uncertainty of the CMIP5 downscaled general circulation models (GCMs). We performed an analysis of the daily, monthly, seasonal and annual variability of precipitation downloaded from the Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections website for the state of Louisiana, USA at 0.125° × 0.125° resolution. A dataset of daily gridded precipitation observations over a rectangular boundary covering Louisiana is used to assess the validity of 21 downscaled GCMs for the 1950-1999 period. The following statistics are computed for each of the 21 models with respect to the observed dataset: the correlation coefficient, the bias, the normalized bias, the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). A measure of the variability simulated by each model is computed as the ratio of its standard deviation, in both space and time, to the corresponding standard deviation of the observations. The correlation and MAPE statistics are also computed for each of the nine climate divisions of Louisiana. Some of the patterns we observed are: 1) Average annual precipitation rate shows a similar spatial distribution for all models, within a range of 3.27 to 4.75 mm/day from northwest to southeast. 2) The standard deviation of summer (JJA) precipitation (mm/day) for the models stays lower than that of the observations, whereas the models show similar spatial patterns and ranges of values in winter (NDJ). 3) Correlation coefficients of annual precipitation of the models against observations range from -0.48 to 0.36, with spatial distribution varying by model. 4) Most models show negative correlation coefficients in summer and positive ones in winter. 5) MAE shows a similar spatial distribution for all models, within a range of 5.20 to 7.43 mm/day from northwest to southeast of Louisiana. 6) The highest correlation coefficients are found at the seasonal scale, within a range of 0.36 to 0.46.
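    The evaluation statistics named in this abstract (bias, MAE, MAPE, RMSE, correlation, and the standard-deviation ratio) can be computed in a few lines. The arrays below are illustrative daily precipitation rates in mm/day, not the CMIP5 or gridded observation data.

```python
# Hedged sketch of a model-vs-observation skill summary for precipitation.
import numpy as np

def skill_stats(model, obs):
    """Common verification statistics of a model series against observations."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    err = m - o
    return {
        "bias": float(err.mean()),
        "mae": float(np.abs(err).mean()),
        "mape": float(np.mean(np.abs(err / o)) * 100.0),  # obs must be nonzero
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "corr": float(np.corrcoef(m, o)[0, 1]),
        "std_ratio": float(m.std() / o.std()),  # simulated/observed variability
    }

model = [3.2, 4.1, 2.8, 5.0, 3.9]   # hypothetical downscaled GCM, mm/day
obs   = [3.0, 4.5, 3.1, 4.2, 4.0]   # hypothetical gridded observations
stats = skill_stats(model, obs)
print({k: round(v, 3) for k, v in stats.items()})
```

    A `std_ratio` near 1 indicates the model reproduces the observed variability, which is the ratio measure described in the abstract.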

  1. An absolute cavity pyrgeometer to measure the absolute outdoor longwave irradiance with traceability to international system of units, SI

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Zeng, Jinan; Scheuch, Jonathan; Hanssen, Leonard; Wilthan, Boris; Myers, Daryl; Stoffel, Tom

    2012-03-01

    This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of a domeless thermopile pyrgeometer, a gold-plated concentrator, a temperature controller, and a data acquisition system. The dome was removed from the pyrgeometer to eliminate errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with a 180° view angle to measure the incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated onto the 11 mm diameter blackened thermopile of the pyrgeometer. The CPC's interior surface design and the resulting cavitation yield a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoors on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U95) of ±3.96 W m⁻² with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights. The calculated irradiance measured by the ACP was 1.5 W m⁻² lower than that measured by the two WISG-traceable pyrgeometers, with a standard deviation of ±0.7 W m⁻². These results suggest that the ACP design might be used to address the need for an improved international reference for broadband outdoor longwave irradiance measurements.
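    The responsivity step described above amounts to a linear fit of thermopile output voltage against the changing net irradiance. The sketch below is not NREL's code, and all voltage/irradiance numbers are invented for illustration; real ACP processing also handles offsets and case/dome temperature terms omitted here.

```python
# Sketch: estimate detector responsivity as the slope of thermopile voltage
# versus net irradiance recorded while the case temperature is lowered.
import numpy as np

net_irradiance = np.array([-60.0, -45.0, -30.0, -15.0])  # W m^-2 (hypothetical)
voltage_uv = np.array([-480.0, -362.0, -238.0, -121.0])  # thermopile output, uV

# Least-squares line: slope is the responsivity in uV per (W m^-2).
slope, intercept = np.polyfit(net_irradiance, voltage_uv, 1)
responsivity = slope

# With the responsivity known, an unknown net irradiance follows from a
# measured thermopile voltage (offset handling simplified).
measured_voltage = -300.0
net = (measured_voltage - intercept) / responsivity
print(round(responsivity, 2), round(net, 1))
```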

  2. Geometric Evaluation of the Effect of Prosthetic Rehabilitation on the Facial Appearance of Mandibulectomy Patients: A Preliminary Study.

    PubMed

    Aswehlee, Amel M; Elbashti, Mahmoud E; Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi

    The purpose of this study was to geometrically evaluate the effect of prosthetic rehabilitation on the facial appearance of mandibulectomy patients. Facial scans (with and without prostheses) were performed for 16 mandibulectomy patients using a noncontact three-dimensional (3D) digitizer, and 3D images were reconstructed with the corresponding software. The 3D datasets were geometrically evaluated and compared using 3D evaluation software. The mean difference in absolute 3D deviations for full face scans was 382.2 μm. This method may be useful in evaluating the effect of conventional prostheses on the facial appearance of individuals with mandibulectomy defects.

  3. [Gas chromatography in quantitative analysis of hydrocyanic acid and its salts in cadaveric blood].

    PubMed

    Iablochkin, V D

    2003-01-01

    A direct gas chromatography method was designed for the quantitative determination of cyanides (prussic acid) in cadaveric blood. Its sensitivity is 0.05 mg/ml. Routine volatile products, including substances that arise from putrefaction of organic matter, do not affect the accuracy and reproducibility of the method; the exception is n-propanol, which was used as the internal standard. The method has been used in forensic chemical casework involving acute cyanide poisoning (suicide) as well as poisoning by combustion products of nonmetals (foam rubber). The absolute error does not exceed 10%, with a mean quadratic deviation of 0.0029-0.0033 mg.
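    Internal-standard quantitation of the kind used here (the abstract names the internal standard, not the calculation) is commonly done from peak-area ratios. The areas, concentrations, and response factor below are hypothetical, for illustration only.

```python
# Sketch of single-point internal-standard quantitation:
#   c_analyte = (A_analyte / A_IS) * c_IS / RF,
# where RF is the response factor from a calibration standard.
def analyte_conc(area_analyte, area_is, conc_is, response_factor):
    """Analyte concentration from peak-area ratio and response factor."""
    return (area_analyte / area_is) * conc_is / response_factor

# Hypothetical calibration: a 0.10 mg/ml cyanide standard with 0.10 mg/ml
# internal standard gave an area ratio of 0.85, so RF = 0.85.
rf = (8500 / 10000) * (0.10 / 0.10)

# Unknown sample spiked with the same internal-standard concentration:
c = analyte_conc(area_analyte=5100, area_is=10000, conc_is=0.10,
                 response_factor=rf)
print(round(c, 3))  # mg/ml
```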

  4. Predicting RNA Duplex Dimerization Free-Energy Changes upon Mutations Using Molecular Dynamics Simulations.

    PubMed

    Sakuraba, Shun; Asai, Kiyoshi; Kameda, Tomoshi

    2015-11-05

    The dimerization free energies of RNA-RNA duplexes are fundamental values that represent the structural stability of RNA complexes. We report a comparative analysis of RNA-RNA duplex dimerization free-energy changes upon mutations, estimated from a molecular dynamics simulation and experiments. A linear regression for nine pairs of double-stranded RNA sequences, six base pairs each, yielded a mean absolute deviation of 0.55 kcal/mol and an R² value of 0.97, indicating quantitative agreement between simulations and experimental data. The observed accuracy indicates that the molecular dynamics simulation with the current molecular force field is capable of estimating the thermodynamic properties of RNA molecules.
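    The comparison metrics reported here (mean absolute deviation and R² of a linear regression of simulated against experimental free-energy changes) can be sketched as below. The nine value pairs are invented placeholders, not the published data.

```python
# Sketch: MAD and R^2 between simulated and experimental ddG values (kcal/mol).
import numpy as np

sim = np.array([-1.2, 0.4, 2.1, -0.6, 1.5, -2.3, 0.9, 3.0, -1.8])  # hypothetical
exp = np.array([-1.0, 0.6, 1.9, -0.4, 1.7, -2.0, 1.1, 2.7, -1.5])  # hypothetical

# Mean absolute deviation between the two sets.
mad = float(np.mean(np.abs(sim - exp)))

# R^2 of the least-squares line exp ~ sim.
slope, intercept = np.polyfit(sim, exp, 1)
pred = slope * sim + intercept
ss_res = float(np.sum((exp - pred) ** 2))
ss_tot = float(np.sum((exp - exp.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot

print(round(mad, 3), round(r2, 3))
```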

  5. The Frame of Fixed Stars in Relational Mechanics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael

    2017-01-01

    Relational mechanics is a gauge theory of classical mechanics whose laws do not govern the motion of individual particles but the evolution of the distances between particles. Its formulation gives a satisfactory answer to Leibniz's and Mach's criticisms of Newton's mechanics: relational mechanics does not rely on the idea of an absolute space. When describing the behavior of small subsystems with respect to the so-called "fixed stars", relational mechanics basically agrees with Newtonian mechanics. However, subsystems having huge angular momentum will deviate from the Newtonian behavior if they are described in the frame of fixed stars. Such subsystems naturally belong to the field of astronomy; they can be used to test the relational theory.

  6. Was the Big Bang hot?

    NASA Technical Reports Server (NTRS)

    Wright, E. L.

    1983-01-01

    Techniques for verifying the spectrum defined by Woody and Richards (WR, 1981), which serves as a base for dust-distorted models of the 3 K background, are discussed. WR detected a sharp deviation from the Planck curve in the 3 K background. The absolute intensity of the background may be determined by the frequency dependence of the dipole anisotropy of the background or the frequency dependence effect in galactic clusters. Both methods involve the Doppler shift; analytical formulae are defined for characterization of the dipole anisotropy. The measurement of the 30-300 GHz spectra of cold galactic dust may reveal the presence of significant amounts of needle-shaped grains, which would in turn support a theory of a cold Big Bang.

  7. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    PubMed

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund

    2015-09-01

    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute deviation scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training were assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a master's or doctoral degree.

  8. Comparison of design and torque measurements of various manual wrenches.

    PubMed

    Neugebauer, Jörg; Petermöller, Simone; Scheer, Martin; Happe, Arndt; Faber, Franz-Josef; Zoeller, Joachim E

    2015-01-01

    Accurate torque application and determination of the applied torque during surgical and prosthetic treatment is important to reduce complications. A study was performed to determine and compare the accuracy of manual wrenches, which are available in different designs with a large range of preset torques. Thirteen different wrench systems with a variety of preset torques ranging from 10 to 75 Ncm were evaluated. Three different designs were available, with a spring-in-coil or toggle design as an active mechanism or a beam as a passive mechanism, to select the preset torque. To provide a clinically relevant analysis, a total of 1,170 torque measurements in the range of 10 to 45 Ncm were made in vitro using an electronic torque measurement device. The absolute deviations in Ncm and percent deviations across all wrenches were small, with a mean of -0.24 ± 2.15 Ncm and -0.84% ± 11.72% as a shortfall relative to the preset value. The greatest overage was 8.2 Ncm (82.5%), and the greatest shortfall was 8.47 Ncm (46%). However, extreme values were rare, with 95th-percentile values of -1.5% (lower value) and -0.16% (upper value). A comparison with respect to wrench design revealed significantly higher deviations for coil and toggle-style wrenches than for beam wrenches. Beam wrenches were associated with a lower risk of rare extreme values thanks to their passive mechanism of achieving the selected preset torque, which minimizes the risk of harming screw connections.
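    The deviation statistics reported above (signed deviation in Ncm and percent, with mean ± SD and percentile bounds) can be sketched as follows. The preset and measured torques below are fabricated examples, not the study's 1,170 measurements.

```python
# Sketch: signed deviation of measured torque from the preset value,
# summarized as mean +/- SD and percentile bounds.
import numpy as np

preset = np.array([10, 10, 20, 20, 30, 30, 35, 45, 45, 45], float)   # Ncm
measured = np.array([9.6, 10.3, 19.1, 20.4, 29.5, 31.0,
                     34.2, 44.0, 45.8, 44.5], float)                  # Ncm

dev_ncm = measured - preset               # shortfall (<0) or overage (>0)
dev_pct = 100.0 * dev_ncm / preset

print("mean dev: %.2f +/- %.2f Ncm" % (dev_ncm.mean(), dev_ncm.std(ddof=1)))
print("mean dev: %.2f +/- %.2f %%" % (dev_pct.mean(), dev_pct.std(ddof=1)))
print("5th/95th percentile: %.2f / %.2f %%"
      % (np.percentile(dev_pct, 5), np.percentile(dev_pct, 95)))
```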

  9. A three-dimensional evaluation of a laser scanner and a touch-probe scanner.

    PubMed

    Persson, Anna; Andersson, Matts; Oden, Agneta; Sandborgh-Englund, Gunilla

    2006-03-01

    The fit of a dental restoration depends on quality throughout the entire manufacturing process. There is difficulty in assessing the surface topography of an object with a complex form, such as teeth, since there is no exact reference form. The purpose of this study was to determine the repeatability and relative accuracy of 2 dental surface digitization devices. A computer-aided design (CAD) technique was used for evaluation to calculate and present the deviations 3-dimensionally. Ten dies of teeth prepared for complete crowns were fabricated in presintered yttria-stabilized tetragonal zirconia (Y-TZP). The surfaces were digitized 3 times each with an optical or mechanical digitizer. The number of points in the point clouds from each reading were calculated and used as the CAD reference model (CRM). Alignments were performed by registration software that works by minimizing a distance criterion. In color-difference maps, the distribution of the discrepancies between the surfaces in the CRM and the 3-dimensional surface models was identified and located. The repeatability of both scanners was within 10 μm, based on SD and absolute mean values. The qualitative evaluation resulted in an even distribution of the deviations in the optical digitizer, whereas the dominating part of the surfaces in the mechanical digitizer showed no deviations. The relative accuracy of the 2 surface digitization devices was within ±6 μm, based on median values. The repeatability of the optical digitizer was comparable with the mechanical digitization device, and the relative accuracy was similar.

  10. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysian mortality data measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. In the case of out-of-sample forecast accuracy, however, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately, the original LC method performed better for the male population and the LM method for the female population.
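    The original Lee-Carter model, log m(x,t) = a(x) + b(x)·k(t), is classically estimated with a singular value decomposition of the age-centered log-mortality matrix. The sketch below uses a small synthetic mortality surface, not Malaysian ASDR data, and reports MAD as one of the fit criteria named above.

```python
# Compact Lee-Carter fit via SVD on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
ages, years = 5, 12
a_true = np.linspace(-6.0, -2.0, ages)        # age pattern of log mortality
b_true = np.linspace(0.1, 0.3, ages)          # age sensitivity to the index
k_true = np.linspace(1.5, -1.5, years)        # declining mortality index
log_m = a_true[:, None] + b_true[:, None] * k_true[None, :]
log_m += rng.normal(0.0, 0.01, size=(ages, years))   # observation noise

# Estimation: a_x is the age-wise mean; the leading SVD component of the
# centered matrix gives b_x and k_t, normalised so that sum(b_x) = 1.
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()
k_hat = s[0] * Vt[0] * U[:, 0].sum()

fit = a_hat[:, None] + b_hat[:, None] * k_hat[None, :]
mad = float(np.mean(np.abs(fit - log_m)))
print(round(mad, 4))
```

    A small MAD here simply confirms the one-factor model recovers the synthetic surface; on real data the same criterion ranks the LC, LM and BMS variants.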

  11. Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Mohamed Ismael, Hawa; Vandyck, George Kobina

    The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model and the Linear Regression Model. Using these three models and their combination, forecasts of container throughput through the Doraleh port were produced. The forecasting results of the three models and the combination forecast were then compared on the commonly used evaluation criteria of Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best method for forecasting container throughput, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast of container throughput at DCT was made.
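    Of the three models compared above, the Grey Model GM(1,1) is the least standard; a minimal sketch of its fit-and-forecast procedure follows, evaluated with the same MAD/MAPE criteria. The throughput series is invented, not DCT data.

```python
# Sketch: Grey GM(1,1) fit and forecast for a short positive series.
import numpy as np

def gm11_forecast(x, steps):
    """Fit GM(1,1) to a positive series x and forecast `steps` ahead."""
    x = np.asarray(x, float)
    x1 = np.cumsum(x)                               # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # developing/grey coeffs
    n = len(x)
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x_hat = np.r_[x1_hat[0], np.diff(x1_hat)]         # de-accumulate
    return x_hat[:n], x_hat[n:]

history = [120, 132, 146, 160, 178, 195]   # hypothetical annual throughput
fitted, forecast = gm11_forecast(history, steps=2)

mad = float(np.mean(np.abs(fitted - np.asarray(history))))
mape = float(np.mean(np.abs((fitted - np.asarray(history))
                            / np.asarray(history))) * 100)
print(round(mad, 2), round(mape, 2), np.round(forecast, 1))
```

    GM(1,1) assumes near-exponential growth, which is why a steadily growing throughput series suits it; MAD and MAPE computed this way are what allow the head-to-head comparison with the smoothing and regression models.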

  12. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, the multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. To show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean squared error. It is therefore advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  13. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, showing the output formats and typical plots comparing computer results to each set of input data.
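    The with/without cross-product comparison can be sketched with ordinary least squares on two design matrices. This is an illustration of the technique, not the original programs; the synthetic surface, noise level, and error summaries (residual standard deviation, maximum and mean absolute percent error) are stand-ins.

```python
# Sketch: 2nd-degree polynomial regression in two variables, with and
# without the linear cross-product term x1*x2.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(0, 2, 40)
x2 = rng.uniform(0, 2, 40)
# Synthetic surface that genuinely contains a cross-product term.
y = 3.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + 0.5 * x2 ** 2
y += rng.normal(0, 0.02, 40)

def fit(design, y):
    """Least-squares fit plus the error summaries named in the abstract."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    pct = 100.0 * np.abs(resid / y)
    return (coef, float(resid.std(ddof=design.shape[1])),
            float(pct.max()), float(pct.mean()))

ones = np.ones_like(x1)
without_cross = np.column_stack([ones, x1, x2, x1 ** 2, x2 ** 2])
with_cross = np.column_stack([ones, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

for name, D in [("no cross", without_cross), ("cross", with_cross)]:
    _, sd, max_pct, mean_pct = fit(D, y)
    print(f"{name}: sd={sd:.4f} max%err={max_pct:.2f} mean%err={mean_pct:.2f}")
```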

  14. SU-F-T-492: The Impact of Water Temperature On Absolute Dose Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Islam, N; Podgorsak, M; Roswell Park Cancer Institute, Buffalo, NY

    Purpose: The Task Group 51 (TG 51) protocol prescribes that dose calibration of photon beams be done by irradiating an ionization chamber in a water tank at pre-defined depths. Methodologies are provided to account for variations in measurement conditions by applying correction factors. However, the protocol does not completely account for the impact of water temperature. It is well established that water temperature will influence the density of air in the ion chamber collecting volume. Water temperature, however, will also influence the size of the collecting volume via thermal expansion of the cavity wall, as well as the density of the water in the tank. In this work the overall effect of water temperature on absolute dosimetry has been investigated. Methods: Dose measurements were made using a Farmer-type ion chamber for 6 and 23 MV photon beams with water temperatures ranging from 10 to 40°C. A reference ion chamber was used to account for fluctuations in beam output between successive measurements. Results: For the same beam output, the dose determined using TG 51 was dependent on the temperature of the water in the tank. A linear regression of the data suggests that the dependence is statistically significant, with p-values of the slope equal to 0.003 and 0.01 for the 6 and 23 MV beams, respectively. For a 10-degree increase in water phantom temperature, the absolute dose determined with TG 51 increased by 0.27% and 0.31% for the 6 and 23 MV beams, respectively. Conclusion: There is a measurable effect of water temperature on absolute dose calibration. To account for this effect, a reference temperature can be defined and a correction factor applied to account for deviations from this reference temperature during beam calibration. Such a factor is expected to be of similar magnitude to most of the existing TG 51 correction factors.
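    The air-density part of this effect is already handled by the standard TG-51 temperature-pressure correction; the residual water-temperature dependence reported above would require an additional factor. The sketch below shows the standard P_TP factor and, as a purely hypothetical illustration, a residual correction built from the ~0.3%-per-10°C slope found in this study (the function `water_temp_factor` and its form are assumptions, not part of TG-51).

```python
# TG-51 air-density correction plus a hypothetical residual water-temperature
# factor based on the ~0.3% / 10 degC slope reported above.
def p_tp(temp_c, pressure_kpa):
    """Standard TG-51 temperature-pressure correction (ref 22 degC, 101.33 kPa)."""
    return ((273.2 + temp_c) / 295.2) * (101.33 / pressure_kpa)

def water_temp_factor(temp_c, ref_c=22.0, pct_per_10c=0.3):
    """Hypothetical residual correction: ~0.3% apparent dose rise per +10 degC."""
    return 1.0 / (1.0 + (pct_per_10c / 100.0) * (temp_c - ref_c) / 10.0)

print(round(p_tp(22.0, 101.33), 4))      # 1.0 at reference conditions
print(round(p_tp(30.0, 101.33), 4))      # warmer air in the cavity: less dense
print(round(water_temp_factor(32.0), 4))  # correct a reading in a 32 degC tank
```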

  15. SU-F-T-330: Characterization of the Clinically Released ScandiDos Discover Diode Array for In-Vivo Dose Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Gutierrez, A

    Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array's ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams with the Discover, respectively. Attenuation depended slightly on the field size but changed by only 0.1% between 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked changes in absolute dose as well as introduced leaf position deviations.

  16. Observational characteristics of the tropopause inversion layer derived from CHAMP/GRACE radio occultations and MOZAIC aircraft data

    NASA Astrophysics Data System (ADS)

    Schmidt, Torsten; Cammas, Jean-Pierre; Heise, Stefan; Wickert, Jens; Haser, Antonia

    2010-05-01

    In this study we discuss characteristics of the tropopause inversion layer (TIL) based on two datasets. Temperature measurements from GPS radio occultation (RO) data (CHAMP and GRACE) for the time interval 2001-2009 are used to exhibit seasonal properties of the TIL on a global scale. In agreement with previous studies, the vertical structure of the TIL is investigated using the square of the buoyancy frequency, N². For the extratropics of both hemispheres N² has a universal distribution independent of season: a local minimum about 2 km below the lapse rate tropopause height (LRTH), an absolute maximum about 1 km above the LRTH, and a local minimum about 4 km above the LRTH. In the tropics (15°N-15°S) the N² maximum above the tropopause is 200-300 m higher than in the extratropics, and the local minimum of N² below the tropopause appears about 4 km below the LRTH. Trace gas measurements onboard commercial aircraft from 2001-2007 are used as a complementary dataset (MOZAIC program). We demonstrate that the mixing ratio gradients of ozone, carbon monoxide and water vapor are suitable parameters for characterizing the TIL, reproducing most of the vertical structure of N². We also show that the LRTH is strongly correlated with the absolute maxima of the ozone and carbon monoxide mixing ratio gradients. Mean deviations of the heights of the absolute maxima of the mixing ratio gradients of O3 and CO from the LRTH are (-0.02±1.51) km and (-0.35±1.28) km, respectively.
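    The TIL diagnostic used above, N² = (g/θ)·dθ/dz, can be sketched from a temperature profile. The profile below is an idealized two-layer atmosphere with a sharp inversion above a 12 km tropopause, not RO data; the scale-height pressure and layer parameters are assumptions for illustration.

```python
# Sketch: squared buoyancy frequency N^2 from an idealized temperature profile,
# locating its maximum just above the tropopause (the TIL signature).
import numpy as np

g, p0, kappa = 9.81, 1000.0, 0.286      # m/s^2, hPa, R/cp for dry air

z = np.arange(8.0, 20.1, 0.2)           # altitude, km
# Troposphere: 6.5 K/km lapse up to 12 km; then a 2 K/km inversion over
# 1.5 km (the TIL) and a near-isothermal stratosphere above.
T = np.where(z <= 12.0, 288.0 - 6.5 * z,
             288.0 - 6.5 * 12.0 + 2.0 * np.clip(z - 12.0, 0, 1.5))
p = p0 * np.exp(-z / 7.0)               # scale-height pressure, hPa

theta = T * (p0 / p) ** kappa           # potential temperature, K
# Finite-difference N^2 on layer interfaces (dz converted km -> m).
n2 = (g / theta[:-1]) * np.diff(theta) / (np.diff(z) * 1000.0)

i_max = int(np.argmax(n2))
print(f"max N^2 = {n2[i_max]:.2e} s^-2 near z = {z[i_max]:.1f} km")
```

    In this idealized profile the N² maximum falls inside the inversion layer just above the tropopause, mirroring the "absolute maximum about 1 km above the LRTH" described in the abstract.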

  17. Cluster-continuum quasichemical theory calculation of the lithium ion solvation in water, acetonitrile and dimethyl sulfoxide: an absolute single-ion solvation free energy scale.

    PubMed

    Carvalho, Nathalia F; Pliego, Josefredo R

    2015-10-28

    Absolute single-ion solvation free energy is a very useful property for understanding solution phase chemistry. The real solvation free energy of an ion depends on its interaction with the solvent molecules and on the net potential inside the solute cavity. The tetraphenylarsonium-tetraphenylborate (TATB) assumption, as well as the cluster-continuum quasichemical theory (CC-QCT) approach for Li(+) solvation, allows access to a solvation scale excluding the net potential. We have determined this free energy scale by investigating the solvation of the lithium ion in water (H2O), acetonitrile (CH3CN) and dimethyl sulfoxide (DMSO) via the CC-QCT approach. Our calculations at the MP2 and MP4 levels with basis sets up to QZVPP+diff quality, including solvation of the clusters and solvent molecules by the dielectric continuum SMD method, predict the solvation free energy of Li(+) as -116.1, -120.6 and -123.6 kcal mol(-1) in H2O, CH3CN and DMSO, respectively (1 mol L(-1) standard state). These values are compatible with proton solvation free energies of -253.4, -253.2 and -261.1 kcal mol(-1) in H2O, CH3CN and DMSO, respectively. Deviations from the experimental TATB scale are only 1.3 kcal mol(-1) in H2O and 1.8 kcal mol(-1) in DMSO. However, in the case of CH3CN, the deviation reaches 9.2 kcal mol(-1). The present study suggests that the experimental TATB scale is inconsistent for CH3CN. A total of 125 values of the solvation free energy of ions in these three solvents were obtained. These new data should be useful for the development of theoretical solvation models.

  18. Assessment of ambient background concentrations of elements in soil using combined survey and open-source data.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2017-02-15

    Understanding ambient background concentrations in soil, at a local scale, is an essential part of environmental risk assessment. Where high resolution geochemical soil surveys have not been undertaken, soil data from alternative sources, such as environmental site assessment reports, can be used to support an understanding of ambient background conditions. Concentrations of metals/metalloids (As, Mn, Ni, Pb and Zn) were extracted from open-source environmental site assessment reports, for soils derived from the Newer Volcanics basalt, of Melbourne, Victoria, Australia. A manual screening method was applied to remove samples that were indicated to be contaminated by point sources and hence not representative of ambient background conditions. The manual screening approach was validated by comparison to data from a targeted background soil survey. Statistical methods for exclusion of contaminated samples from background soil datasets were compared to the manual screening method. The statistical methods tested included the Median plus Two Median Absolute Deviations, the upper whisker of a normal and log transformed Tukey boxplot, the point of inflection on a cumulative frequency plot and the 95th percentile. We have demonstrated that where anomalous sample results cannot be screened using site information, the Median plus Two Median Absolute Deviations is a conservative method for derivation of ambient background upper concentration limits (i.e. expected maximums). The upper whisker of a boxplot and the point of inflection on a cumulative frequency plot, were also considered adequate methods for deriving ambient background upper concentration limits, where the percentage of contaminated samples is <25%. 
Median ambient background concentrations of metals/metalloids in the Newer Volcanic soils of Melbourne were comparable to ambient background concentrations in Europe and the United States, except for Ni, which was naturally enriched in the basalt-derived soils of Melbourne. Copyright © 2016 Elsevier B.V. All rights reserved.
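    The Median plus Two Median Absolute Deviations screening rule discussed above is simple to state in code. A minimal sketch; the function name and sample concentrations are illustrative, not from the survey dataset:

```python
import statistics

def background_upper_limit(concentrations):
    """Upper concentration limit via the Median plus Two Median
    Absolute Deviations (MAD) rule: median + 2 * MAD. Samples above
    this limit are treated as likely point-source contamination."""
    med = statistics.median(concentrations)
    mad = statistics.median([abs(c - med) for c in concentrations])
    return med + 2 * mad

# e.g. one anomalously high sample barely shifts the robust limit
limit = background_upper_limit([1, 2, 3, 4, 100])  # median 3, MAD 1 -> limit 5
```

Because both the median and the MAD are robust statistics, a modest fraction of contaminated samples does not inflate the derived background limit, which is why the method is conservative.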

  19. Climate change indices for Greenland applied directly for other arctic regions - Enhanced and utilized climate information from one high resolution RCM downscaling for Greenland evaluated through pattern scaling and CMIP5

    NASA Astrophysics Data System (ADS)

    Olesen, M.; Christensen, J. H.; Boberg, F.

    2016-12-01

    Climate change affects Greenlandic society in both advantageous and disadvantageous ways. Changes in temperature and precipitation patterns may result in changes in a number of derived society-related climate indices, such as the length of the growing season or the number of annual dry days, or a combination of the two - indices of substantial importance to society in a climate adaptation context. Detailed climate indices require high-resolution downscaling. We have carried out a very high resolution (5 km) simulation with the regional climate model HIRHAM5, forced by the global model EC-Earth. Evaluation of RCM output is usually done with an ensemble of output downscaled with multiple RCMs and GCMs. Here we introduce and test a new technique: a translation of the robustness of an ensemble of GCM models from CMIP5 into the specific index from the HIRHAM5 downscaling, through a correlation between absolute temperatures and the corresponding index values from the HIRHAM5 output. The procedure is conducted in two steps. First, the correlation between temperature and a given index in the HIRHAM5 simulation is identified by a best fit to a second-order polynomial. Second, the standard deviation from the CMIP5 simulations is introduced to obtain the corresponding standard deviation of the index from the HIRHAM5 run. The change of specific climate indices due to global warming can then be evaluated elsewhere from the change in absolute temperature. Results based on selected indices, with focus on the future climate in Greenland calculated for the RCP4.5 and RCP8.5 scenarios, will be presented.
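    The two-step procedure can be sketched as follows; all numbers below are synthetic placeholders, not HIRHAM5 or CMIP5 output:

```python
import numpy as np

# Step 1: fit a second-order polynomial relating mean temperature to a
# climate index (e.g. growing-season length) in the high-resolution run.
temp = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])         # synthetic temperatures (degC)
index = np.array([80.0, 90.0, 104.0, 122.0, 144.0])  # synthetic index values

coeffs = np.polyfit(temp, index, 2)   # best-fit second-order polynomial
poly = np.poly1d(coeffs)

# Step 2: translate a CMIP5 temperature spread into an index spread
# by evaluating the fitted polynomial at mean +/- one standard deviation.
t_mean, t_sd = 1.0, 0.5               # illustrative CMIP5 ensemble statistics
index_spread = poly(t_mean + t_sd) - poly(t_mean - t_sd)
```

The same fitted curve then lets a temperature change projected elsewhere be mapped onto the corresponding index change, which is the portability the abstract describes.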

  20. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    Among the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.
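    The outlier-screening step described above (local-median high-pass plus a median-absolute-deviation threshold) can be sketched as follows. The window length and threshold factor are illustrative assumptions, not the values used in the actual processing:

```python
import numpy as np

def screen_outliers(grad, window=101, k=5.0):
    """Flag outliers in a gravity gradient time series: subtract a local
    running median (a robust high-pass that compensates the 1/f error),
    then flag residuals beyond k times the median absolute deviation."""
    n = len(grad)
    half = window // 2
    local_med = np.array([np.median(grad[max(0, i - half):i + half + 1])
                          for i in range(n)])
    resid = grad - local_med                       # high-passed series
    mad = np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > k * mad                 # boolean outlier flags

# Synthetic demo: low-level noise with one injected spike
rng = np.random.default_rng(0)
series = rng.normal(0.0, 1e-3, 500)
series[250] += 0.05                                # artificial outlier
flags = screen_outliers(series)
```

Both the local median and the MAD are insensitive to the spike itself, so the threshold is not inflated by the very outliers it is meant to catch.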

  1. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    Among the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.

  2. Redundancy in Glucose Sensing

    PubMed Central

    Sharifi, Amin; Varsavsky, Andrea; Ulloa, Johanna; Horsburgh, Jodie C.; McAuley, Sybil A.; Krishnamurthy, Balasubramanian; Jenkins, Alicia J.; Colman, Peter G.; Ward, Glenn M.; MacIsaac, Richard J.; Shah, Rajiv; O’Neal, David N.

    2015-01-01

    Background: Current electrochemical glucose sensors use a single electrode. Multiple electrodes (redundancy) may enhance sensor performance. We evaluated an electrochemical redundant sensor (ERS) incorporating two working electrodes (WE1 and WE2) onto a single subcutaneous insertion platform with a processing algorithm providing a single real-time continuous glucose measure. Methods: Twenty-three adults with type 1 diabetes each wore two ERSs concurrently for 168 hours. Post-insertion a frequent sampling test (FST) was performed with ERS benchmarked against a glucose meter (Bayer Contour Link). Day 4 and 7 FSTs were performed with a standard meal and venous blood collected for reference glucose measurements (YSI and meter). Between visits, ERS was worn with capillary blood glucose testing ≥8 times/day. Sensor glucose data were processed prospectively. Results: Mean absolute relative deviation (MARD) for ERS day 1-7 (3,297 paired points with glucose meter) was (mean [SD]) 10.1 [11.5]% versus 11.4 [11.9]% for WE1 and 12.0 [11.9]% for WE2; P < .0001. ERS Clarke A and A+B were 90.2% and 99.8%, respectively. ERS day 4 plus day 7 MARD (1,237 pairs with YSI) was 9.4 [9.5]% versus 9.6 [9.7]% for WE1 and 9.9 [9.7]% for WE2; P = ns. ERS day 1-7 precision absolute relative deviation (PARD) was 9.9 [3.6]% versus 11.5 [6.2]% for WE1 and 10.1 [4.4]% for WE2; P = ns. ERS sensor display time was 97.8 [6.0]% versus 91.0 [22.3]% for WE1 and 94.1 [14.3]% for WE2; P < .05. Conclusions: Electrochemical redundancy enhances glucose sensor accuracy and display time compared with each individual sensing element alone. ERS performance compares favorably with ‘best-in-class’ of non-redundant sensors. PMID:26499476
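    The accuracy metric reported above, mean absolute relative deviation (MARD), is straightforward to compute. A minimal sketch reporting the mean [SD] form used in the abstract; the function name is illustrative:

```python
import numpy as np

def mard(sensor, reference):
    """Mean absolute relative deviation (MARD) in percent: the mean of
    |sensor - reference| / reference over all paired points, with the
    standard deviation reported alongside as mean [SD]."""
    sensor = np.asarray(sensor, dtype=float)
    reference = np.asarray(reference, dtype=float)
    ard = 100.0 * np.abs(sensor - reference) / reference
    return ard.mean(), ard.std()

# e.g. sensor reads 90 and 110 mg/dL against a 100 mg/dL reference
mean_ard, sd_ard = mard([90, 110], [100, 100])
```

Lower MARD means better agreement with the reference method; here the combined-electrode ERS lowered MARD relative to either working electrode alone.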

  3. Predicting Accommodative Response Using Paraxial Schematic Eye Models

    PubMed Central

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2016-01-01

    Purpose Prior ultrasound biomicroscopy (UBM) studies showed that accommodative optical response (AOR) can be predicted from accommodative biometric changes in a young and a pre-presbyopic population from linear relationships between accommodative optical and biometric changes, with a standard deviation of less than 0.55D. Here, paraxial schematic eyes (SE) were constructed from measured accommodative ocular biometry parameters to see if predictions are improved. Methods Measured ocular biometry (OCT, A-scan and UBM) parameters from 24 young and 24 pre-presbyopic subjects were used to construct paraxial SEs for each individual subject (individual SEs) for three different lens equivalent refractive index methods. Refraction and AOR calculated from the individual SEs were compared with Grand Seiko (GS) autorefractor measured refraction and AOR. Refraction and AOR were also calculated from individual SEs constructed using the average population accommodative change in UBM measured parameters (average SEs). Results Schematic eye calculated and GS measured AOR were linearly related (young subjects: slope = 0.77; r2 = 0.86; pre-presbyopic subjects: slope = 0.64; r2 = 0.55). The mean difference in AOR (GS - individual SEs) for the young subjects was −0.27D and for the pre-presbyopic subjects was 0.33D. For individual SEs, the mean ± SD of the absolute differences in AOR between the GS and SEs was 0.50 ± 0.39D for the young subjects and 0.50 ± 0.37D for the pre-presbyopic subjects. For average SEs, the mean ± SD of the absolute differences in AOR between the GS and the SEs was 0.77 ± 0.88D for the young subjects and 0.51 ± 0.49D for the pre-presbyopic subjects. Conclusions Individual paraxial SEs predict AOR, on average, with a standard deviation of 0.50D in young and pre-presbyopic subject populations. Although this prediction is only marginally better than from individual linear regressions, it does consider all the ocular biometric parameters. PMID:27092928

  4. Health status convergence at the local level: empirical evidence from Austria

    PubMed Central

    2011-01-01

    Introduction Health is an important dimension of welfare comparisons across individuals, regions and states. Particularly from a long-term perspective, within-country convergence of health status has rarely been investigated using methods well established in other scientific fields. In this paper we study the relation between initial levels of health status and its improvement at the local community level in Austria over the period 1969-2004. Methods We use age-standardized mortality rates from 2381 Austrian communities as an indicator of health status and analyze the convergence/divergence of overall mortality for (i) the whole population, (ii) females, (iii) males and (iv) the gender mortality gap. Convergence/divergence is studied by applying different concepts of cross-regional inequality (weighted standard deviation, coefficient of variation, Theil coefficient of inequality). Various econometric techniques (weighted OLS, quantile regression, Kendall's rank concordance) are used to test for absolute and conditional beta-convergence in mortality. Results Regarding sigma-convergence, we find rather mixed results. While the weighted standard deviation indicates an increase in equality for all four variables, the picture appears less clear when correcting for the decreasing mean of the distribution. However, we find highly significant coefficients for absolute and conditional beta-convergence between the periods. While these results are confirmed by several robustness tests, we also find evidence for the existence of convergence clubs. Conclusions The highly significant beta-convergence across communities might be caused by (i) efforts to harmonize and centralize health policy at the federal level in Austria since the 1970s, (ii) diminishing returns of the input factors in the health production function, which might lead to convergence as the general conditions (e.g. income, education) improve over time, and (iii) the mobility of people across regions, as people tend to move to regions/communities with more favorable living conditions. JEL classification: I10, I12, I18 PMID:21864364
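    The absolute beta-convergence test used above regresses the improvement in mortality on its initial level; a significantly negative slope indicates that communities starting worse off improved faster. A minimal OLS sketch on synthetic data (not the Austrian community records):

```python
import numpy as np

# Synthetic communities: initial log mortality and its change over the period,
# generated with a built-in convergence slope of -0.4 plus noise.
rng = np.random.default_rng(1)
log_m0 = rng.normal(7.0, 0.3, 200)                       # initial log mortality
growth = 2.0 - 0.4 * log_m0 + rng.normal(0.0, 0.05, 200)  # change in log mortality

# Absolute beta-convergence: OLS of growth on initial level (plus intercept)
X = np.column_stack([np.ones_like(log_m0), log_m0])
intercept, slope = np.linalg.lstsq(X, growth, rcond=None)[0]
```

Conditional beta-convergence extends the same regression with controls (income, education, and so on) added as further columns of X.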

  5. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    PubMed

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average for all experiments.

  6. Deciding Optimal Noise Monitoring Sites with Matrix Gray Absolute Relation Degree Theory

    NASA Astrophysics Data System (ADS)

    Gao, Zhihua; Li, Yadan; Zhao, Limin; Wang, Shuangwei

    2015-08-01

    Noise maps are used to assess noise levels in cities around the world. There are two main ways of producing them: one is theoretical simulation from the surrounding conditions, such as traffic flow and building distribution; the other is calculating noise levels from actual measurement data collected by noise monitors. Current literature mainly focuses on including more factors that affect sound propagation in theoretical simulations, and on interpolation methods for producing noise maps from measurements. Although many factors are considered during simulation, noise maps still have to be calibrated against actual noise measurements. The way noise data are obtained is therefore significant to both producing and calibrating a noise map. However, little literature addresses how to choose monitoring sites when only a specified number of noise sensors can be placed, or what deviation to expect in a noise map produced from their data. In this work, using matrix Gray Absolute Relation Degree Theory, we calculated the relation degrees between the most precise noise surface and surfaces interpolated from different combinations of a specified number of noise measurements. We found that surfaces plotted from different combinations of noise data had different relation degrees with the most precise one. We then identified the least significant measurement and calculated the corresponding deviation when it was excluded from the noise surface. Processing the remaining noise data in the same way, we identified the least significant datum one by one. With this method, we optimized the noise sensor distribution over an area of about 2 km². We also calculated the bias of surfaces with the least significant data removed. 
    Our approach offers a practical solution for governments with limited financial budgets for noise monitoring, especially in undeveloped regions.
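    The one-by-one elimination of the least significant measurement is a greedy backward elimination. The sketch below is a 1-D stand-in with illustrative data, using leave-one-out interpolation error in place of the paper's grey absolute relation degree:

```python
import numpy as np

def rank_sensors(x, noise):
    """Greedy backward elimination: repeatedly drop the interior
    measurement whose removal changes the interpolated noise curve
    least, returning indices in order of removal (least significant
    first). Leave-one-out interpolation error stands in for the grey
    absolute relation degree used in the paper."""
    idx = list(range(len(x)))
    order = []                                  # removal order
    while len(idx) > 2:                         # endpoints are always kept
        errs = []
        for j in range(1, len(idx) - 1):
            keep = idx[:j] + idx[j + 1:]
            pred = np.interp(x[idx[j]], [x[k] for k in keep],
                             [noise[k] for k in keep])
            errs.append((abs(pred - noise[idx[j]]), j))
        _, j = min(errs)                        # smallest change = least significant
        order.append(idx.pop(j))
    return order

# The point at x=1 lies exactly on the line between its neighbours,
# so removing it changes the interpolated curve not at all.
order = rank_sensors([0, 1, 2, 3, 4], [0.0, 1.0, 2.0, 10.0, 4.0])
```

The same loop structure applies in 2-D with a surface interpolator and the relation-degree criterion substituted for the interpolation error.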

  7. Modeling the gas-phase thermochemistry of organosulfur compounds.

    PubMed

    Vandeputte, Aäron G; Sabbe, Maarten K; Reyniers, Marie-Françoise; Marin, Guy B

    2011-06-27

    Key to understanding the involvement of organosulfur compounds in a variety of radical chemistries, such as atmospheric chemistry, polymerization, pyrolysis, and so forth, is knowledge of their thermochemical properties. For organosulfur compounds and radicals, thermochemical data are, however, much less well documented than for hydrocarbons. The traditional recourse to the Benson group additivity method offers no solace since only a very limited number of group additivity values (GAVs) is available. In this work, CBS-QB3 calculations augmented with 1D hindered rotor corrections for 122 organosulfur compounds and 45 organosulfur radicals were used to derive 93 Benson group additivity values, 18 ring-strain corrections, 2 non-nearest-neighbor interactions, and 3 resonance corrections for standard enthalpies of formation, standard molar entropies, and heat capacities for organosulfur compounds and organosulfur radicals. The reported GAVs are consistent with previously reported GAVs for hydrocarbons and hydrocarbon radicals and include 77 contributions, among which 26 radical contributions, which, to the best of our knowledge, have not been reported before. The GAVs allow one to estimate the standard enthalpies of formation at 298 K, the standard entropies at 298 K, and standard heat capacities in the temperature range 300-1500 K for a large set of organosulfur compounds, that is, thiols, thioketones, polysulfides, alkyl sulfides, thials, dithioates, and cyclic sulfur compounds. For a validation set of 26 organosulfur compounds, the mean absolute deviation between experimental and group additively modeled enthalpies of formation amounts to 1.9 kJ mol(-1). For an additional set of 14 organosulfur compounds, it was shown that the mean absolute deviations between calculated and group additively modeled standard entropies and heat capacities are restricted to 4 and 2 J mol(-1) K(-1), respectively. 
As an alternative to Benson GAVs, 26 new hydrogen-bond increments are reported, which can also be useful for the prediction of radical thermochemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
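    Group additivity estimates a molecular property as the sum of contributions from its constituent groups. A sketch with hypothetical placeholder GAVs (NOT the values derived in the paper); the group labels follow Benson notation, and the ethanethiol decomposition is illustrative:

```python
# Hypothetical group additivity values for standard enthalpy of
# formation at 298 K (kJ/mol). Placeholder numbers for illustration,
# not the GAVs reported in the paper.
GAV_HF298 = {
    "C-(C)(H)3": -42.2,     # methyl group bonded to one carbon
    "C-(C)(S)(H)2": -10.1,  # CH2 group bonded to carbon and sulfur
    "S-(C)(H)": 19.2,       # thiol SH group bonded to carbon
}

def estimate_hf298(groups):
    """Estimate Hf(298 K) as the sum of group contributions,
    weighted by how often each group occurs in the molecule."""
    return sum(GAV_HF298[g] * n for g, n in groups.items())

# Ethanethiol CH3-CH2-SH contains one of each group above
hf_ethanethiol = estimate_hf298({"C-(C)(H)3": 1, "C-(C)(S)(H)2": 1, "S-(C)(H)": 1})
```

Ring-strain, non-nearest-neighbor, and resonance corrections enter the same sum as additional additive terms for the molecules that need them.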

  8. Extensive TD-DFT Benchmark: Singlet-Excited States of Organic Molecules.

    PubMed

    Jacquemin, Denis; Wathelet, Valérie; Perpète, Eric A; Adamo, Carlo

    2009-09-08

    Extensive Time-Dependent Density Functional Theory (TD-DFT) calculations have been carried out in order to obtain a statistically meaningful analysis of the merits of a large number of functionals. To reach this goal, a very extended set of molecules (∼500 compounds, >700 excited states) covering a broad range of (bio)organic molecules and dyes have been investigated. Likewise, 29 functionals including LDA, GGA, meta-GGA, global hybrids, and long-range-corrected hybrids have been considered. Comparisons with both theoretical references and experimental measurements have been carried out. On average, the functionals providing the best match with reference data are, on the one hand, global hybrids containing between 22% and 25% of exact exchange (X3LYP, B98, PBE0, and mPW1PW91) and, on the other hand, a long-range-corrected hybrid with a less-rapidly increasing HF ratio, namely LC-ωPBE(20). Pure functionals tend to be less consistent, whereas functionals incorporating a larger fraction of exact exchange tend to underestimate significantly the transition energies. For most treated cases, the M05 and CAM-B3LYP schemes deliver fairly small deviations but do not outperform standard hybrids such as X3LYP or PBE0, at least within the vertical approximation. With the optimal functionals, one obtains mean absolute deviations smaller than 0.25 eV, though the errors significantly depend on the subset of molecules or states considered. As an illustration, PBE0 and LC-ωPBE(20) provide a mean absolute error of only 0.14 eV for the 228 states related to neutral organic dyes but are completely off target for cyanine-like derivatives. On the basis of comparisons with theoretical estimates, it also turned out that CC2 and TD-DFT errors are of the same order of magnitude, once the above-mentioned hybrids are selected.

  9. SU-F-J-166: Volumetric Spatial Distortions Comparison for 1.5 Tesla Versus 3 Tesla MRI for Gamma Knife Radiosurgery Scans Using Frame Marker Fusion and Co-Registration Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neyman, G

    Purpose: To compare typical volumetric spatial distortions for 1.5 Tesla versus 3 Tesla MRI Gamma Knife radiosurgery scans in the frame marker fusion and co-registration frame-less modes. Methods: A Quasar phantom by Modus Medical Devices Inc. with GRID image distortion software was used for measurements of volumetric distortions. 3D volumetric T1-weighted scans of the phantom were produced on 1.5 T Avanto and 3 T Skyra MRI Siemens scanners. The analysis was done two ways: for scans with localizer markers from the Leksell frame, and relative to the phantom only (simulated co-registration technique). The phantom grid contained a total of 2002 vertices or control points that were used in the assessment of volumetric geometric distortion for all scans. Results: Volumetric mean absolute spatial deviations relative to the frame localizer markers for the 1.5 and 3 Tesla machines were 1.39 ± 0.15 and 1.63 ± 0.28 mm, with max errors of 1.86 and 2.65 mm, respectively. Mean 2D errors from the Gamma Plan were 0.3 and 1.0 mm. For the simulated co-registration technique, the volumetric mean absolute spatial deviations relative to the phantom for the 1.5 and 3 Tesla machines were 0.36 ± 0.08 and 0.62 ± 0.13 mm, with max errors of 0.57 and 1.22 mm, respectively. Conclusion: Volumetric spatial distortions are lower for 1.5 Tesla than for 3 Tesla MRI machines localized with markers on frames, and significantly lower for co-registration techniques with no frame localization. The results show the advantage of using the co-registration technique for minimizing MRI volumetric spatial distortions, which can be especially important for the steep dose gradient fields typically used in Gamma Knife radiosurgery. Consultant for Elekta AB.

  10. Redundancy in Glucose Sensing: Enhanced Accuracy and Reliability of an Electrochemical Redundant Sensor for Continuous Glucose Monitoring.

    PubMed

    Sharifi, Amin; Varsavsky, Andrea; Ulloa, Johanna; Horsburgh, Jodie C; McAuley, Sybil A; Krishnamurthy, Balasubramanian; Jenkins, Alicia J; Colman, Peter G; Ward, Glenn M; MacIsaac, Richard J; Shah, Rajiv; O'Neal, David N

    2016-05-01

    Current electrochemical glucose sensors use a single electrode. Multiple electrodes (redundancy) may enhance sensor performance. We evaluated an electrochemical redundant sensor (ERS) incorporating two working electrodes (WE1 and WE2) onto a single subcutaneous insertion platform with a processing algorithm providing a single real-time continuous glucose measure. Twenty-three adults with type 1 diabetes each wore two ERSs concurrently for 168 hours. Post-insertion a frequent sampling test (FST) was performed with ERS benchmarked against a glucose meter (Bayer Contour Link). Day 4 and 7 FSTs were performed with a standard meal and venous blood collected for reference glucose measurements (YSI and meter). Between visits, ERS was worn with capillary blood glucose testing ≥8 times/day. Sensor glucose data were processed prospectively. Mean absolute relative deviation (MARD) for ERS day 1-7 (3,297 paired points with glucose meter) was (mean [SD]) 10.1 [11.5]% versus 11.4 [11.9]% for WE1 and 12.0 [11.9]% for WE2; P < .0001. ERS Clarke A and A+B were 90.2% and 99.8%, respectively. ERS day 4 plus day 7 MARD (1,237 pairs with YSI) was 9.4 [9.5]% versus 9.6 [9.7]% for WE1 and 9.9 [9.7]% for WE2; P = ns. ERS day 1-7 precision absolute relative deviation (PARD) was 9.9 [3.6]% versus 11.5 [6.2]% for WE1 and 10.1 [4.4]% for WE2; P = ns. ERS sensor display time was 97.8 [6.0]% versus 91.0 [22.3]% for WE1 and 94.1 [14.3]% for WE2; P < .05. Electrochemical redundancy enhances glucose sensor accuracy and display time compared with each individual sensing element alone. ERS performance compares favorably with 'best-in-class' of non-redundant sensors. © 2015 Diabetes Technology Society.

  11. Can we go From Tomographically Determined Seismic Velocities to Composition? Amplitude Resolution Issues in Local Earthquake Tomography

    NASA Astrophysics Data System (ADS)

    Wagner, L.

    2007-12-01

    There have been a number of recent papers (e.g. Lee (2003), James et al. (2004), Hacker and Abers (2004), Schutt and Lesher (2006)) which calculate predicted velocities for xenolith compositions at mantle pressures and temperatures. It is tempting, therefore, to attempt to go the other way: to use tomographically determined absolute velocities to constrain mantle composition. However, if absolute velocities are to be analyzed so closely, it is vital to accurately constrain not only the polarity of the determined velocity deviations (i.e. fast vs slow) but also their amplitude: how much faster or slower than the starting model. While much attention has been given to issues of spatial resolution in seismic tomography (i.e. what areas are fast, what areas are slow), little attention has been directed at the issue of amplitude resolution (how fast, how slow). Velocity deviation amplitudes in seismic tomography are heavily influenced by the amount of regularization used and the number of iterations performed. Determining these two parameters is a difficult and little-discussed problem. I explore the effect of these two parameters on the amplitudes obtained from the tomographic inversion of the Chile Argentina Geophysical Experiment (CHARGE) dataset, and attempt to determine a reasonable solution space for the low Vp, high Vs, low Vp/Vs anomaly found above the flat slab in central Chile. I then compare this solution space to the range of experimentally determined velocities for peridotite end-members to evaluate our ability to constrain composition using tomographically determined seismic velocities. 
I find that in general, it will be difficult to constrain the compositions of normal mantle peridotites using tomographically determined velocities, but that in the unusual case of the anomaly above the flat slab, the observed velocity structure still has an anomalously high S wave velocity and low Vp/Vs ratio that is most consistent with enstatite, but inconsistent with the predicted velocities of known mantle xenoliths.

  12. A Preliminary Analysis on Empirical Attenuation of Absolute Velocity Response Spectra (1 to 10s) in Japan

    NASA Astrophysics Data System (ADS)

    Dhakal, Y. P.; Kunugi, T.; Suzuki, W.; Aoi, S.

    2013-12-01

    The Mw 9.1 Tohoku-oki earthquake caused strong shaking of super-high-rise and high-rise buildings constructed on deep sedimentary basins in Japan. Many people felt difficulty in moving inside high-rise buildings even on the Osaka basin, located as far as 800 km from the epicentral area. Several empirical equations have been proposed to estimate peak ground motions and absolute acceleration response spectra, applicable mainly within 300 to 500 km of the source area. On the other hand, the Japan Meteorological Agency has recently proposed four classes of absolute velocity response spectra as suitable indices to qualitatively describe the intensity of long-period ground motions, based on observed earthquake records, human experiences, and actual damage that occurred in high-rise and super-high-rise buildings. Such empirical prediction equations have been used in disaster mitigation planning as well as earthquake early warning. In this study, we discuss the results of our preliminary analysis of the attenuation relation of absolute velocity response spectra calculated from observed strong-motion records, including those from the Mw 9.1 Tohoku-oki earthquake, using simple regression models with various model parameters. We used earthquakes of Mw 6.5 or greater with focal depths shallower than 50 km, which occurred in and around the Japanese archipelago. We selected those earthquakes for which good quality records are available at more than 50 observation sites combined from K-NET and KiK-net. After a visual inspection of approximately 21,000 three-component records from 36 earthquakes, we used about 15,000 good quality records in the period range of 1 to 10 s within a hypocentral distance (R) of 800 km. We performed regression analyses assuming the following five regression models.
(1) log10 Y(T) = c + a Mw - log10 R - b R
(2) log10 Y(T) = c + a Mw - log10 R - b R + g S
(3) log10 Y(T) = c + a Mw - log10 R - b R + h D
(4) log10 Y(T) = c + a Mw - log10 R - b R + g S + h D
(5) log10 Y(T) = c + a Mw - log10 R - b R + Σ gi Si + h D
where Y(T) is the 5% damped peak vector response in cm/s derived from the two horizontal component records for a natural period T in seconds. In (2), S is a dummy variable which is one if a site is located inside a sedimentary basin and zero otherwise. In (3), D is the depth to the top of the layer having a particular S-wave velocity; we used the deep underground S-wave velocity model available from the Japan Seismic Hazard Information Station (J-SHIS). In (5), sites are classified into various sedimentary basins, each with its own coefficient gi. Analyses show that the standard deviations decrease in the order of the models listed and that all coefficients are significant. Interestingly, the coefficients g are found to differ from basin to basin at most periods, and the depth to the top of the layer having an S-wave velocity of 1.7 km/s gives the smallest standard deviation, 0.31, at T = 4.4 s in model (5). This study shows the possibility of describing the observed peak absolute velocity response values using simple model parameters such as site location and sedimentary depth soon after the location and magnitude of an earthquake are known.
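Model (1) is linear in its coefficients once the fixed geometric-spreading term log10 R is moved to the left-hand side, so it can be fitted by ordinary least squares. A synthetic sketch (the coefficient values, sample size, and noise level below are made up for illustration, not taken from the study):

```python
import numpy as np

# Model (1): log10 Y(T) = c + a*Mw - log10(R) - b*R.  Moving log10(R) to the
# left-hand side leaves a problem that is linear in (c, a, b).
rng = np.random.default_rng(0)
n = 200
Mw = rng.uniform(6.5, 9.0, n)                 # moment magnitudes (synthetic)
R = rng.uniform(50.0, 800.0, n)               # hypocentral distances, km
c_true, a_true, b_true = -2.0, 0.8, 0.002     # hypothetical coefficients
logY = c_true + a_true * Mw - np.log10(R) - b_true * R + rng.normal(0, 0.05, n)

lhs = logY + np.log10(R)                      # = c + a*Mw - b*R + noise
A = np.column_stack([np.ones(n), Mw, -R])     # design matrix for (c, a, b)
c_hat, a_hat, b_hat = np.linalg.lstsq(A, lhs, rcond=None)[0]
```

Models (2)-(5) just append further columns (the basin dummy S, the depth D) to the design matrix, which is why their standard deviations can only decrease.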

  13. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing which has not been considered properly by data centre developer companies; large data centres in particular struggle with power costs and greenhouse gas production. Hence, employing power-efficient mechanisms is necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking into account the maximum absolute deviation during VM placement, both the power consumption and the service level agreement (SLA) violations in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation to reduce power consumption by about 5% compared to the modified best-fit decreasing algorithm while, at the same time, improving the SLA violation rate by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
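The best-fit decreasing heuristic referenced above treats VM placement as bin packing: sort VMs by decreasing demand, and place each on the feasible host with the least remaining capacity. A minimal single-resource sketch (CPU demand only; the paper's algorithm additionally models power and SLA terms, and the numbers below are hypothetical):

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (demand units) on identical hosts using best-fit decreasing.

    A new host is opened only when no existing host can fit the VM; among
    feasible hosts, pick the one with the least remaining capacity.
    """
    hosts = []          # remaining capacity of each open host
    placement = {}      # vm index -> host index
    for i, d in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        best = None
        for h, free in enumerate(hosts):
            if free >= d and (best is None or free < hosts[best]):
                best = h
        if best is None:                    # no host fits: open a new one
            hosts.append(host_capacity - d)
            best = len(hosts) - 1
        else:
            hosts[best] -= d
        placement[i] = best
    return placement, hosts

placement, hosts = best_fit_decreasing([5, 7, 3, 2, 6], 10)
```

Packing tightly in this way keeps fewer hosts powered on, which is the mechanism behind the power savings the paper reports.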

  14. Analysis of iodinated haloacetic acids in drinking water by reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry with large volume direct aqueous injection.

    PubMed

    Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L

    2012-07-06

    A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Vertex Movement for Mission Status Graphics: A Polar-Star Display

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna

    2002-01-01

    Humans are traditionally bad monitors, especially over long periods of time on reliable systems, and they are being called upon to do this more and more as systems become further automated. Because of this, there is a need to find a way to display the monitoring information to the human operator in such a way that he can notice pertinent deviations in a timely manner. One possible solution is to use polar-star displays that will show deviations from normal in a more salient manner. A polar-star display uses a polygon's vertices to report values. An important question arises, though, of how the vertices should move. This experiment investigated two particular issues of how the vertices should move: (1) whether the movement of the vertices should be continuous or discrete and (2) whether the parameters that made up each vertex should always move in one direction regardless of parameter sign or move in both directions indicating parameter sign. The results indicate that relative movement direction is best. Subjects performed better with this movement type and they subjectively preferred it to the absolute movement direction. As for movement type, no strong preferences were shown.

  16. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon

    2017-05-01

    A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three-sigma rule (3σ), interquartile range (IQR), and median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed, considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to the other two methods. However, the IQR method shows a limitation in identifying excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
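The three detectors compared above can be sketched as follows (a minimal sketch; the cut-off constants k are conventional defaults, not necessarily those used in the study). The toy data illustrate why robust rules matter for non-Gaussian data: a single extreme value inflates the standard deviation enough to escape the 3σ rule, while the IQR and MAD rules still flag it.

```python
import statistics

def outliers_3sigma(x):
    """Values farther than 3 standard deviations from the mean."""
    mu, sd = statistics.mean(x), statistics.pstdev(x)
    return [v for v in x if abs(v - mu) > 3 * sd]

def outliers_iqr(x, k=1.5):
    """Values beyond k interquartile ranges outside [Q1, Q3]."""
    q1, _, q3 = statistics.quantiles(x, n=4)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [v for v in x if v < lo or v > hi]

def outliers_mad(x, k=3.5):
    """Values whose scaled deviation from the median exceeds k.

    The factor 0.6745 makes MAD consistent with the standard deviation
    for Gaussian data."""
    med = statistics.median(x)
    mad = statistics.median([abs(v - med) for v in x])
    return [v for v in x if mad and 0.6745 * abs(v - med) / mad > k]

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 25.0]
```

In the study's framework these rules are applied to residuals from the ensemble regression rather than to raw levels, but the decision logic is the same.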

  17. Multicentre dose audit for clinical trials of radiation therapy in Asia

    PubMed Central

    Fukuda, Shigekazu; Fukumura, Akifumi; Nakamura, Yuzuru-Kutsutani; Jianping, Cao; Cho, Chul-Koo; Supriana, Nana; Dung, To Anh; Calaguas, Miriam Joy; Devi, C.R. Beena; Chansilpa, Yaowalak; Banu, Parvin Akhter; Riaz, Masooma; Esentayeva, Surya; Kato, Shingo; Karasawa, Kumiko; Tsujii, Hirohiko

    2017-01-01

    Abstract A dose audit of 16 facilities in 11 countries has been performed within the framework of the Forum for Nuclear Cooperation in Asia (FNCA) quality assurance program. The quality of radiation dosimetry varies because of the large variation in radiation therapy among the participating countries. One of the most important aspects of international multicentre clinical trials is uniformity of absolute dose between centres. The National Institute of Radiological Sciences (NIRS) in Japan has conducted a dose audit of participating countries since 2006 by using radiophotoluminescent glass dosimeters (RGDs). RGDs have been successfully applied to a domestic postal dose audit in Japan. The authors used the same audit system to perform a dose audit of the FNCA countries. The average and standard deviation of the relative deviation between the measured and intended dose among 46 beams was 0.4% and 1.5% (k = 1), respectively. This is an excellent level of uniformity for the multicountry data. However, of the 46 beams measured, a single beam exceeded the permitted tolerance level of ±5%. We investigated the cause for this and solved the problem. This event highlights the importance of external audits in radiation therapy. PMID:27864507

  18. Accuracy of intraoral data acquisition in comparison to the conventional impression.

    PubMed

    Luthardt, R G; Loos, R; Quaas, S

    2005-10-01

    The achievable accuracy is a decisive parameter for the comparison of direct intraoral digitization with the conventional impression. The objective of the study was therefore to compare the accuracy of the reproduction of a model situation by intraoral digitization vs. the conventional procedure consisting of impression taking, model production, and extraoral digitization. Proceeding from a die model with a prepared tooth 16, the reference data set of the teeth 15, 16 and 17 was produced with an established procedure by means of extraoral digitization. For the simulated intraoral data acquisition of the master model (Cerec 3D camera, Sirona, Bensheim), the camera was fastened on a stand for the measurement and the teeth digitized seven times each in defined views (occlusal, and in each case inclined by 20 degrees, from the mesio-proximal, disto-proximal, vestibular and oral aspect). Matching was automated (comparative data sets B1-B5). A clinically perfect one-step putty-and-wash impression was taken from the starting model. The model produced under defined conditions was digitized extraorally five times (digi-SCAN, comparative data sets C1-C5). The data sets B1-B5 and C1-C5 were assigned to the reference data set by means of best-fit matching and the root mean square (RMS) deviation calculated. The deviations were visualized, and mean positive, negative and absolute deviations calculated. The mean RMS was 27.9 µm (B1-B5) and 18.8 µm (C1-C5). The mean deviations for the prepared tooth were +18 µm/−17 µm (B1-B5) and +9 µm/−9 µm (C1-C5). For tooth 15, the mean deviations were +22 µm/−19 µm (B1-B5) and +15 µm/−16 µm (C1-C5). The intraoral method showed good results with deviations from the CAD starting model of approx. 17 µm, related to the prepared tooth 16.
On the whole, in this in-vitro study, extraoral digitization with impression taking and model production showed higher accuracy than intraoral digitization. Since the inaccuracies in the conventional impression under real clinical conditions may be higher than the values determined above, a comparison under clinical conditions should be performed subsequently.
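The RMS figure used above is the root of the mean squared point-wise deviation between a best-fit-aligned test data set and the reference. A minimal sketch (illustrative values; real surface comparisons run over many thousands of matched 3D points):

```python
import math

def rms_deviation(test_pts, ref_pts):
    """Root mean square of point-wise deviations between a matched test
    data set and the reference (units follow the input, e.g. µm)."""
    sq = [(t - r) ** 2 for t, r in zip(test_pts, ref_pts)]
    return math.sqrt(sum(sq) / len(sq))

rms = rms_deviation([11.0, 19.5, 30.2], [10.0, 20.0, 30.0])
```

Because errors enter squared, RMS weights a few large deviations more heavily than the mean absolute deviation does, which is why the study reports both.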

  19. Concentrations of biogenic amines in fish, squid and octopus and their changes during storage.

    PubMed

    Hu, Yue; Huang, Zhiyong; Li, Jian; Yang, Hong

    2012-12-15

    The concentrations of seven biogenic amines (BA) were simultaneously determined in 74 samples of fish, squid and octopus by HPLC coupled with pre-column derivatisation. The relationship between the formation of BA in aquatic products and the growth of microbial flora during storage was also investigated. Results showed that putrescine, cadaverine, histamine and tyramine were the dominant BA in the studied samples, but the concentrations of histamine and tyramine were mostly less than 50 and 100 mg kg⁻¹, respectively. Freezing can effectively prevent the formation of BA, but the levels of putrescine, cadaverine, histamine and tyramine significantly increased (p<0.05) during storage at 4 and 25°C. The growth of mesophilic or psychrophilic bacteria in blue scad and octopus strongly and positively correlated with the formation of amines (such as putrescine, cadaverine, histamine and tyramine) during storage, except for histamine in octopus. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Application of L1/2 regularization logistic method in heart disease diagnosis.

    PubMed

    Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying

    2014-01-01

    Heart disease has become the number one killer threatening human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are so many risk factors, doctors usually diagnose the disease depending on their intuition and experience, and correct determination requires a lot of knowledge and experience. Finding the hidden medical information in existing clinical data is a notable and powerful approach in the study of heart disease diagnosis. In this paper, a sparse logistic regression method with L(1/2) regularization is introduced to detect the key risk factors on real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method achieves fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. At the same time, the proposed method cuts down on computational complexity, saves the cost and time of undergoing medical tests and checkups, and reduces the number of attributes that need to be taken from patients.
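For context, the SCAD penalty against which the L(1/2) method is compared is the piecewise function of Fan & Li (2001); it matches the L1 (Lasso) penalty near zero but levels off for large coefficients, reducing the bias Lasso induces on them. A sketch (a = 3.7 is the commonly used default, not a value from this paper):

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD penalty for a coefficient t with tuning parameters lam > 0, a > 2."""
    at = abs(t)
    if at <= lam:
        return lam * at                                                # L1 region
    if at <= a * lam:
        return (2 * a * lam * at - at ** 2 - lam ** 2) / (2 * (a - 1))  # quadratic blend
    return (a + 1) * lam ** 2 / 2                                      # constant: no extra shrinkage
```

The three pieces join continuously at |t| = lam and |t| = a*lam, which is what "smoothly clipped" refers to.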

  1. Phenotypes of organ involvement in sarcoidosis.

    PubMed

    Schupp, Jonas Christian; Freitag-Wolf, Sandra; Bargagli, Elena; Mihailović-Vučinić, Violeta; Rottoli, Paola; Grubanovic, Aleksandar; Müller, Annegret; Jochens, Arne; Tittmann, Lukas; Schnerch, Jasmin; Olivieri, Carmela; Fischer, Annegret; Jovanovic, Dragana; Filipovic, Snežana; Videnovic-Ivanovic, Jelica; Bresser, Paul; Jonkers, René; O'Reilly, Kate; Ho, Ling-Pei; Gaede, Karoline I; Zabel, Peter; Dubaniewicz, Anna; Marshall, Ben; Kieszko, Robert; Milanowski, Janusz; Günther, Andreas; Weihrich, Anette; Petrek, Martin; Kolek, Vitezslav; Keane, Michael P; O'Beirne, Sarah; Donnelly, Seamas; Haraldsdottir, Sigridur Olina; Jorundsdottir, Kristin B; Costabel, Ulrich; Bonella, Francesco; Wallaert, Benoît; Grah, Christian; Peroš-Golubičić, Tatjana; Luisetti, Mauritio; Kadija, Zamir; Pabst, Stefan; Grohé, Christian; Strausz, János; Vašáková, Martina; Sterclova, Martina; Millar, Ann; Homolka, Jiří; Slováková, Alena; Kendrick, Yvonne; Crawshaw, Anjali; Wuyts, Wim; Spencer, Lisa; Pfeifer, Michael; Valeyre, Dominique; Poletti, Venerino; Wirtz, Hubertus; Prasse, Antje; Schreiber, Stefan; Krawczak, Michael; Müller-Quernheim, Joachim

    2018-01-01

    Sarcoidosis is a highly variable, systemic granulomatous disease of hitherto unknown aetiology. The GenPhenReSa (Genotype-Phenotype Relationship in Sarcoidosis) project represents a European multicentre study to investigate the influence of genotype on disease phenotypes in sarcoidosis. The baseline phenotype module of GenPhenReSa comprised 2163 Caucasian patients with sarcoidosis who were phenotyped at 31 study centres according to a standardised protocol. From this module, we found that patients with acute onset were mainly female, young and of Scadding type I or II. Female patients showed a significantly higher frequency of eye and skin involvement, and complained more of fatigue. Based on multidimensional correspondence analysis and subsequent cluster analysis, patients could be clearly stratified into five distinct, as yet undescribed, subgroups according to predominant organ involvement: 1) abdominal organ involvement, 2) ocular-cardiac-cutaneous-central nervous system disease involvement, 3) musculoskeletal-cutaneous involvement, 4) pulmonary and intrathoracic lymph node involvement, and 5) extrapulmonary involvement. These five new clinical phenotypes will be useful for recruiting homogeneous cohorts in future biomedical studies. Copyright ©ERS 2018.

  2. Threshold and variability properties of matrix frequency-doubling technology and standard automated perimetry in glaucoma.

    PubMed

    Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-07-01

    To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitudes of visual field defects apparent in total and pattern deviation probability maps were compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001).
Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.

  3. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    Faults are inevitable in any complex systems engineering. The electric power system is an essentially nonlinear system, and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units and under the influence of white Gaussian noise (standard deviation 0.01, mean error 0), we mainly used nonlinear principal component analysis (NLPCA) to resolve the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.

  4. Design of static synchronous series compensator based damping controller employing invasive weed optimization algorithm.

    PubMed

    Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul

    2014-01-01

    This paper proposes designing of Static Synchronous Series Compensator (SSSC) based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of Invasive Weed Optimization (IWO) technique. Conventional PI controller is used as the SSSC damping controller which takes rotor speed deviation as the input. The damping controller parameters are tuned based on time integral of absolute error based cost function using IWO. Performance of IWO based controller is compared to that of Particle Swarm Optimization (PSO) based controller. Time domain based simulation results are presented and performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO based design approach.
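The "time integral of absolute error" cost used to tune the controller is commonly computed as ITAE = ∫ t·|e(t)| dt over the simulation horizon, penalizing errors more the longer they persist. A trapezoidal-rule sketch on sampled data (illustrative; the paper's exact cost function and sampling are not specified here):

```python
def itae(times, errors):
    """Approximate ITAE = integral of t * |e(t)| dt via the trapezoidal rule.

    times: monotonically increasing sample instants (s)
    errors: rotor speed deviation (or other error signal) at those instants
    """
    total = 0.0
    for (t0, e0), (t1, e1) in zip(zip(times, errors), zip(times[1:], errors[1:])):
        f0, f1 = t0 * abs(e0), t1 * abs(e1)   # integrand at segment ends
        total += 0.5 * (f0 + f1) * (t1 - t0)
    return total

value = itae([0.0, 1.0, 2.0], [1.0, -1.0, 0.0])
```

An optimizer such as IWO or PSO then searches the PI gains that minimize this scalar over repeated time-domain simulations.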

  5. Stock markets and criticality in the current economic crisis

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Zembrzuski, Marcelo; Correa, Fabio C.; Lamb, Luis C.

    2010-12-01

    We show that the current economic crisis has led the market to exhibit non-critical behavior. We do so by analyzing the quantitative parameters of time series from the main assets of the Brazilian Stock Market BOVESPA. By monitoring global persistence, we show a deviation from power law behavior during the crisis, in strong analogy with spin systems (from which this concept was originally conceived). Such behavior is corroborated by an emergent heavy tail of the absolute return distribution and also by the magnitude autocorrelation exponent. Comparisons with universal exponents obtained in international stock markets are also performed. This suggests that a thorough analysis of suitable exponents may offer a way of forecasting market crises characterized by non-criticality.

  6. Design for the optical retardation in broadband zero-order half-wave plates.

    PubMed

    Liu, Jin; Cai, Yi; Chen, Hongyi; Zeng, Xuanke; Zou, Da; Xu, Shixiang

    2011-04-25

    This paper presents a novel design for broadband zero-order half-wave plates that eliminates the first-order, or up to second-order, wavelength-dependent birefringent phase retardation (BPR) with 2 or 3 different birefringent materials. The residual BPRs of the plates increase monotonically with the wavelength deviation from a selected wavelength, so the plates are applicable to broadband light pulses that gather most of their energy around their central wavelengths. The model chooses the materials by the birefringent dispersion coefficient and evaluates the performance of the plates by the weighted average of the absolute value of the residual BPR, in order to emphasize the contributions of the incident spectral components that possess higher energies.

  7. Evaluating least absolute deviation regression as an inverse model in groundwater flow calibration

    NASA Astrophysics Data System (ADS)

    Huddleston, John Matthew

    Though information regarding children's mental health is increasing, and we know that approximately 20% of children meet criteria for a mental disorder, little is known about the characteristics of the child client population at community mental health clinics. This study is an exploratory analysis of the demographic and treatment characteristics of the child client population at a psychology training clinic/community mental health center. Demographic and treatment information is presented and compared across various service categories as well as diagnostic categories. Comparisons between those served during the first six years and those served during the second six years of the study period are also made. Results are discussed in terms of generalizability of results as well as available information from the literature.

  8. Determination of enthalpies of formation of energetic molecules with composite quantum chemical methods

    DOE PAGES

    Manaa, M. Riad; Fried, Laurence E.; Kuo, I-Feng W.

    2016-02-01

    We report gas-phase enthalpies of formation for the set of energetic molecules NTO, DADE, LLM-105, TNT, RDX, TATB, HMX, and PETN using the G2, G3, G4, and ccCA-PS3 quantum composite methods. HMX and PETN represent the largest molecules attempted to date with these methods. G3 and G4 calculations are typically close to one another, with a larger difference found between these methods and ccCA-PS3. Although there is significant uncertainty in the experimental values, the mean absolute deviations between the average experimental value and the calculations are 12, 6, 7, and 3 kcal/mol for G2, G3, G4, and ccCA-PS3, respectively.

  9. Disease quantification on PET/CT images without object delineation

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Fitzpatrick, Danielle; Winchell, Nicole; Schuster, Stephen J.; Torigian, Drew A.

    2017-03-01

    The derivation of quantitative information from images to make quantitative radiology (QR) clinically practical continues to face a major image analysis hurdle because of image segmentation challenges. This paper presents a novel approach to disease quantification (DQ) via positron emission tomography/computed tomography (PET/CT) images that explores how to decouple DQ methods from explicit dependence on object segmentation through the use of only object recognition results to quantify disease burden. The concept of an object-dependent disease map is introduced to express disease severity without performing explicit delineation and partial volume correction of either objects or lesions. The parameters of the disease map are estimated from a set of training image data sets. The idea is illustrated on 20 lung lesions and 20 liver lesions derived from 18F-2-fluoro-2-deoxy-D-glucose (FDG)-PET/CT scans of patients with various types of cancers and also on 20 NEMA PET/CT phantom data sets. Our preliminary results show that, on phantom data sets, "disease burden" can be estimated to within 2% of known absolute true activity. Notwithstanding the difficulty in establishing true quantification on patient PET images, our results achieve 8% deviation from "true" estimates, with slightly larger deviations for small and diffuse lesions where establishing ground truth becomes really questionable, and smaller deviations for larger lesions where ground truth set up becomes more reliable. We are currently exploring extensions of the approach to include fully automated body-wide DQ, extensions to just CT or magnetic resonance imaging (MRI) alone, to PET/CT performed with radiotracers other than FDG, and other functional forms of disease maps.

  10. The warm and cold neutral phase in the local interstellar medium at |b| ≥ 10°

    NASA Astrophysics Data System (ADS)

    Poppel, W. G. L.; Marronetti, P.; Benaglia, P.

    1994-07-01

    We made a systematic separation of both neutral phases using the atlases of 21-cm profiles of Heiles & Habing (1974) and Colomb et al. (1980), complemented with other data. First, we fitted the emission of the warm neutral medium (WNM) by means of a broad Gaussian curve (velocity dispersion sigma approximately 10-14 km/s). We derived maps of the column densities NWH and the radial velocities VW of the WNM. Its overall distribution appears to be very inhomogeneous, with a large hole in the range b ≥ +50°. However, if the hole is excluded, the mean latitude profiles admit a rough cosec|b| fit common to both hemispheres. A kinematical analysis of VW for the range 10° ≤ |b| ≤ 40° indicates a mean differential rotation with a small nodal deviation. At |b| > 50°, VW is negative, with larger values and discontinuities in the north. On the mean, sigma increases as |b| decreases, as is expected from differential rotation. From a statistical study of the peaks of the residual profiles we derived some characteristics of the cold neutral medium (CNM). The latter is generally characterized by a single component of sigma approximately 2-6 km/s. Additionally, we derived the sky distribution of the column densities NCH and the radial velocities VC of the CNM within bins of 1.2° sec b x 1° in l, b. Furthermore, we focused on the characteristics of Lindblad's feature A of cool gas by considering the narrow ridge of local H I, which appears in the b-V contour maps at fixed l (e.g. Schoeber 1976). The ridge appears to be the main component of the CNM. We suggest a scenario for the formation and evolution of the Gould belt system of stars and gas on the basis of an explosive event within a shingle of cold dense gas tilted to the galactic plane.
The scenario appears to be consistent with the results found for both the neutral phases, as well as with Danly's (1989) optical and UV observations of interstellar cool gas in the lower halo.

  11. Filling the voids in the SRTM elevation model — A TIN-based delta surface approach

    NASA Astrophysics Data System (ADS)

    Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas

    The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a method to fill these voids using a fill surface derived from Russian military maps. For this we developed a new method, which is based on Triangular Irregular Networks (TINs). For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then seamlessly be merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas, the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and little accuracy was lost by merging it with the SRTM surface (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
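The delta-surface steps described above (base surfaces interpolated from void-edge points, subtraction of the fill base, and re-addition of the SRTM base) can be sketched with SciPy's TIN-style linear interpolation. All coordinates and elevations below are hypothetical illustration data, not the study's:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical points around the edge of a void: (x, y) positions with
# elevations sampled from the SRTM DEM and from the offset fill surface.
edge_xy = np.array([[0, 0], [10, 0], [10, 10], [0, 10],
                    [5, -2], [12, 5], [5, 12], [-2, 5]], dtype=float)
srtm_z = np.array([100, 110, 120, 105, 102, 118, 115, 101], dtype=float)
fill_z = np.array([140, 152, 165, 147, 143, 162, 158, 142], dtype=float)

# TIN-based base surfaces spanned across the void from the edge points.
srtm_base = LinearNDInterpolator(edge_xy, srtm_z)
fill_base = LinearNDInterpolator(edge_xy, fill_z)

def filled_elevation(x, y, fill_surface_value):
    """Delta-surface fill: strip the fill dataset's own trend (its base
    surface) and impose the SRTM trend carried in from the void edge."""
    return fill_surface_value - fill_base(x, y) + srtm_base(x, y)
```

At the void edge the filled elevation reduces to the SRTM value, which is what makes the merge seamless.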

  12. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

    Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = ∑_{i=1}^p f(λ_i) with p

  13. Ambulatory blood pressure monitoring-derived short-term blood pressure variability in primary hyperparathyroidism.

    PubMed

    Concistrè, A; Grillo, A; La Torre, G; Carretta, R; Fabris, B; Petramala, L; Marinelli, C; Rebellato, A; Fallo, F; Letizia, C

    2018-04-01

    Primary hyperparathyroidism is associated with a cluster of cardiovascular manifestations, including hypertension, leading to increased cardiovascular risk. The aim of our study was to investigate the ambulatory blood pressure monitoring-derived short-term blood pressure variability in patients with primary hyperparathyroidism, in comparison with patients with essential hypertension and normotensive controls. Twenty-five patients with primary hyperparathyroidism (7 normotensive, 18 hypertensive) underwent ambulatory blood pressure monitoring at diagnosis, and fifteen of them were re-evaluated after parathyroidectomy. Short-term blood pressure variability was derived from ambulatory blood pressure monitoring and calculated as the following: 1) Standard Deviation of 24-h, day-time and night-time BP; 2) the average of day-time and night-time Standard Deviation, weighted for the duration of the day and night periods (24-h "weighted" Standard Deviation of BP); 3) average real variability, i.e., the average of the absolute differences between all consecutive BP measurements. Baseline data of normotensive and essential hypertension patients were matched for age, sex, BMI and 24-h ambulatory blood pressure monitoring values with normotensive and hypertensive primary hyperparathyroidism patients, respectively. Normotensive primary hyperparathyroidism patients showed a 24-h weighted Standard Deviation (P < 0.01) and average real variability (P < 0.05) of systolic blood pressure higher than those of 12 normotensive controls. 24-h average real variability of systolic BP, as well as serum calcium and parathyroid hormone levels, were reduced in operated patients (P < 0.001). A positive correlation of serum calcium and parathyroid hormone with 24-h average real variability of systolic BP was observed in the entire primary hyperparathyroidism patient group (P = 0.04, P = 0.02, respectively).
Systolic blood pressure variability is increased in normotensive patients with primary hyperparathyroidism and is reduced by parathyroidectomy; it may represent an additional cardiovascular risk factor in this disease.
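The three variability indices defined above (period SD, duration-weighted 24-h SD, and average real variability) are straightforward to compute. A minimal sketch with hypothetical systolic readings:

```python
import numpy as np

def average_real_variability(bp):
    """ARV: mean of the absolute differences between consecutive readings."""
    bp = np.asarray(bp, dtype=float)
    return float(np.mean(np.abs(np.diff(bp))))

def weighted_sd(day_bp, night_bp, day_hours, night_hours):
    """24-h 'weighted' SD: day and night SDs averaged with weights equal to
    the duration of each period."""
    sd_day = np.std(day_bp, ddof=1)
    sd_night = np.std(night_bp, ddof=1)
    return float((sd_day * day_hours + sd_night * night_hours)
                 / (day_hours + night_hours))

readings = [120, 124, 118, 130, 126]  # hypothetical systolic BP, mmHg
# consecutive absolute differences: 4, 6, 12, 4 -> ARV = 6.5 mmHg
arv = average_real_variability(readings)
```

ARV is less sensitive than the plain 24-h SD to slow circadian drift, which is why it is reported alongside the weighted SD.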

  14. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.

    PubMed

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-09-01

    Quadrant impressions are commonly used as an alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface, whereas high-frame-rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of the upper and lower jaw. Digital quadrant impression methods achieve a level of precision comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time-efficient capturing of quadrant impressions is possible.
The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications. Yet the precision differs significantly between the digital impression systems.
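The Perc90_10 precision measure can be sketched under the assumption (based on its name; the abstract does not define it) that it is the span between the 90th and 10th percentiles of the surface deviations, i.e. the width of the band containing the central 80 % of point-wise differences:

```python
import numpy as np

def perc90_10(deviations):
    """Assumed definition of the Perc90_10 precision value: the spread
    between the 90th and 10th percentile of surface deviations, so that
    80 % of all measured point differences fall inside this band."""
    p90, p10 = np.percentile(np.asarray(deviations, dtype=float), [90, 10])
    return float(p90 - p10)

# Hypothetical signed deviations (micrometres) between two superimposed models
devs = np.arange(11)  # 0, 1, ..., 10
span = perc90_10(devs)  # 90th pct = 9, 10th pct = 1 -> span = 8
```

Unlike a standard deviation, this percentile span is insensitive to a few extreme outliers at mesh edges.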

  15. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated.
The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data-distributions themselves.
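Two of the robust measures mentioned above, the median absolute deviation from the median and Geary's ratio (mean deviation over standard deviation, which is about 0.7979 for a normal distribution), are easy to reproduce. A sketch using only the Python standard library:

```python
import statistics

def median_abs_deviation(xs):
    """Median of absolute deviations from the median (the classic MAD)."""
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)

def gearys_ratio(xs):
    """Geary's normality statistic: mean absolute deviation from the mean
    divided by the (population) standard deviation."""
    mean = statistics.fmean(xs)
    mean_dev = sum(abs(x - mean) for x in xs) / len(xs)
    return mean_dev / statistics.pstdev(xs)

data = [1, 2, 3, 4, 5]
mad = median_abs_deviation(data)   # deviations 2,1,0,1,2 -> MAD = 1
ratio = gearys_ratio(data)         # 1.2 / sqrt(2)
```

Values of Geary's ratio far from ~0.7979 flag heavy or light tails, which is how ROBUST uses it as a normality test.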

  16. The Primordial Inflation Explorer (PIXIE): A Nulling Polarimeter for Cosmic Microwave Background Observations

    NASA Technical Reports Server (NTRS)

    Kogut, Alan J.; Fixsen, D. J.; Chuss, D. T.; Dotson, J.; Dwek, E.; Halpern, M.; Hinshaw, G. F.; Meyer, S. M.; Moseley, S. H.; Seiffert, M. D.; hide

    2011-01-01

    The Primordial Inflation Explorer (PIXIE) is a concept for an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. The instrument consists of a polarizing Michelson interferometer configured as a nulling polarimeter to measure the difference spectrum between orthogonal linear polarizations from two co-aligned beams. Either input can view the sky or a temperature-controlled absolute reference blackbody calibrator. The proposed instrument can map the absolute intensity and linear polarization (Stokes I, Q, and U parameters) over the full sky in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded optics provide background-limited sensitivity using only 4 detectors, while the highly symmetric design and multiple signal modulations provide robust rejection of potential systematic errors. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. The rich PIXIE data set can also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy.

  17. Determination of Tangeretin in Rat Plasma: Assessment of Its Clearance and Absolute Oral Bioavailability.

    PubMed

    Elhennawy, Mai Gamal; Lin, Hai-Shu

    2017-12-29

    Tangeretin (TAN) is a dietary polymethoxylated flavone that possesses a broad scope of pharmacological activities. A simple high-performance liquid chromatography (HPLC) method was developed and validated in this study to quantify TAN in plasma of Sprague-Dawley rats. The lower limit of quantification (LLOQ) was 15 ng/mL; the intra- and inter-day assay variations expressed in the form of relative standard deviation (RSD) were all less than 10%; and the assay accuracy was within 100 ± 15%. Subsequently, pharmacokinetic profiles of TAN were explored and established. Upon single intravenous administration (10 mg/kg), TAN had rapid clearance (CL = 94.1 ± 20.2 mL/min/kg) and a moderate terminal elimination half-life (t1/2,λz = 166 ± 42 min). When TAN was given as a suspension (50 mg/kg), poor and erratic absolute oral bioavailability (mean value < 3.05%) was observed; however, when TAN was given in a solution prepared with randomly methylated β-cyclodextrin (50 mg/kg), its plasma exposure was at least doubled (mean bioavailability: 6.02%). It was obvious that poor aqueous solubility hindered the oral absorption of TAN and acted as a barrier to its oral bioavailability. This study will facilitate further investigations on the medicinal potentials of TAN.
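Absolute oral bioavailability is conventionally computed as the dose-normalized oral AUC divided by the dose-normalized intravenous AUC. A sketch with hypothetical AUC values (not the study's data), chosen so the result matches the ~6 % figure quoted above:

```python
def absolute_oral_bioavailability(auc_po, dose_po, auc_iv, dose_iv):
    """F (%) = 100 * (AUC_oral / Dose_oral) / (AUC_iv / Dose_iv)."""
    return 100.0 * (auc_po / dose_po) / (auc_iv / dose_iv)

# Hypothetical exposures: oral 50 mg/kg dose vs intravenous 10 mg/kg dose
f_percent = absolute_oral_bioavailability(
    auc_po=3.0, dose_po=50.0,   # assumed oral AUC (arbitrary units)
    auc_iv=10.0, dose_iv=10.0,  # assumed IV AUC (same units)
)
```

Because both AUCs are dose-normalized, the ratio is dimensionless and the units of AUC cancel, as long as they are the same for both routes.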

  18. Time Series Forecasting of Daily Reference Evapotranspiration by Neural Network Ensemble Learning for Irrigation System

    NASA Astrophysics Data System (ADS)

    Manikumari, N.; Murugappan, A.; Vinodhini, G.

    2017-07-01

    Time series forecasting has gained remarkable interest among researchers in the last few decades, and neural-network-based time series forecasting has been employed in various application areas. Reference Evapotranspiration (ETO) is one of the most important components of the hydrologic cycle, and its precise assessment is vital in water balance and crop yield estimation and in water resources system design and management. This work aimed at achieving accurate time series forecasts of ETO using a combination of neural network approaches. It was carried out using data collected in the command area of VEERANAM Tank in India during the period 2004-2014. The Neural Network (NN) models were combined by ensemble learning in order to improve the accuracy of forecasting daily ETO (for the year 2015). Bagged Neural Network (Bagged-NN) and Boosted Neural Network (Boosted-NN) ensemble learning were employed. Bagged-NN and Boosted-NN ensemble models proved better than individual NN models in terms of accuracy, and among the ensemble models, Boosted-NN reduced the forecasting errors compared to Bagged-NN and individual NNs. The regression coefficient, Mean Absolute Deviation, Mean Absolute Percentage Error, and Root Mean Square Error also confirm that Boosted-NN leads to improved ETO forecasting performance.
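The error measures used above to compare the ensembles can be stated compactly. A minimal sketch of MAD, MAPE, and RMSE for a pair of actual and forecast series:

```python
import math

def forecast_errors(actual, forecast):
    """Return (MAD, MAPE %, RMSE) for paired actual/forecast values."""
    errs = [a - f for a, f in zip(actual, forecast)]
    n = len(errs)
    mad = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    return mad, mape, rmse

# toy example: actual [2, 4] vs forecast [1, 5] -> MAD 1.0, MAPE 37.5 %, RMSE 1.0
mad, mape, rmse = forecast_errors([2.0, 4.0], [1.0, 5.0])
```

MAD and RMSE share the units of the series (e.g. mm/day for ETO), while MAPE is scale-free, which is why papers typically report all three.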

  19. Non-contact weight measurement of flat-faced pharmaceutical tablets using terahertz transmission pulse delay measurements.

    PubMed

    Bawuah, Prince; Silfsten, Pertti; Ervasti, Tuomas; Ketolainen, Jarkko; Zeitler, J Axel; Peiponen, Kai-Erik

    2014-12-10

    By measuring the time delay of a terahertz pulse traversing a tablet, and hence its effective refractive index, it is possible to non-invasively and non-destructively detect the weight of tablets made of microcrystalline cellulose (MCC). Two sets of MCC tablets were used in the study: Set A (training set) consisted of 13 tablets with nominally constant height but varying porosities, whereas Set B (test set) comprised 21 tablets with nominally constant porosity but different heights. A linear correlation between the estimated absolute weight based on the terahertz measurement and the measured weight of both sets of MCC tablets was found. In addition, it was possible to estimate the height of the tablets by utilizing the estimated absolute weight and calculating the relative change of height of each tablet with respect to an ideal tablet. A good agreement between the experimental and the calculated results was found, highlighting the potential of this technique for in-line sensing of the weight, porosity and the relative change in height of the tablets compared to a reference/ideal tablet. In this context, we propose a quantitative quality control method to assess the deviations in porosity of tablets immediately after compaction. Copyright © 2014 Elsevier B.V. All rights reserved.
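The underlying relation is that a pulse crossing a tablet of thickness H arrives later than one crossing the same path in air by Δt = (n_eff − 1)·H/c, so the effective refractive index follows directly from the measured delay. A sketch with hypothetical tablet parameters (the paper's actual calibration from n_eff to weight is not reproduced here):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def effective_refractive_index(delay_s, height_m):
    """n_eff from the extra transit time of a THz pulse through a tablet
    of thickness H, relative to the same geometric path in air:
    delta_t = (n_eff - 1) * H / c  ->  n_eff = 1 + c * delta_t / H."""
    return 1.0 + C * delay_s / height_m

# hypothetical 3 mm tablet delaying the pulse by 10 ps
n_eff = effective_refractive_index(delay_s=10e-12, height_m=3e-3)
```

Since n_eff of a porous compact decreases with porosity, the delay encodes the amount of material in the beam path, which is what links it to tablet weight.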

  20. Thermal Infrared Spectrometer for Earth Science Remote Sensing Applications—Instrument Modifications and Measurement Procedures

    PubMed Central

    Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek

    2011-01-01

    In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards. PMID:22346683

  1. Compensating for magnetic field inhomogeneity in multigradient-echo-based MR thermometry.

    PubMed

    Simonis, Frank F J; Petersen, Esben T; Bartels, Lambertus W; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2015-03-01

    MR thermometry (MRT) is a noninvasive method for measuring temperature that can potentially be used for radio frequency (RF) safety monitoring. This application requires measuring absolute temperature. In this study, a multigradient-echo (mGE) MRT sequence was used for that purpose. A drawback of this sequence, however, is that its accuracy is affected by background gradients. In this article, we present a method to minimize this effect and to improve absolute temperature measurements using MRI. By determining background gradients using a B0 map or by combining data acquired with two opposing readout directions, the error can be removed in a homogeneous phantom, thus improving temperature maps. All scans were performed on a 3T system using ethylene glycol-filled phantoms. Background gradients were varied, and one phantom was uniformly heated to validate both compensation approaches. Independent temperature recordings were made with optical probes. Errors correlated closely to the background gradients in all experiments. Temperature distributions showed a much smaller standard deviation when the corrections were applied (0.21°C vs. 0.45°C) and correlated well with thermo-optical probes. The corrections offer the possibility to measure RF heating in phantoms more precisely. This allows mGE MRT to become a valuable tool in RF safety assessment. © 2014 Wiley Periodicals, Inc.

  2. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used in the comparison process: the Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasting for Exchange Rates, which has a narrow range from one point to another in its time series, while it cannot produce a better prediction for a longer forecasting period.
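Simple (single) exponential smoothing, the basic member of the exponential smoothing family compared here, is a one-line recursion: each smoothed value blends the newest observation with the previous smoothed state, and the final state serves as the one-step-ahead forecast. A minimal sketch:

```python
def exponential_smoothing(series, alpha):
    """Single exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    The last smoothed value is the one-step-ahead forecast."""
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must lie in (0, 1]")
    s = series[0]  # initialize the state with the first observation
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

# toy series: [10, 20, 30] with alpha = 0.5 -> states 10, 15, 22.5
forecast = exponential_smoothing([10.0, 20.0, 30.0], alpha=0.5)
```

A small alpha smooths heavily (good for noisy, narrow-range series such as the exchange rates above); alpha near 1 tracks the latest observation almost exactly.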

  3. Thermal infrared spectrometer for Earth science remote sensing applications-instrument modifications and measurement procedures.

    PubMed

    Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek

    2011-01-01

    In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards.

  4. Absolute, pressure-dependent validation of a calibration-free, airborne laser hygrometer transfer standard (SEALDH-II) from 5 to 1200 ppmv using a metrological humidity generator

    NASA Astrophysics Data System (ADS)

    Buchholz, Bernhard; Ebert, Volker

    2018-01-01

    Highly accurate water vapor measurements are indispensable for understanding a variety of scientific questions as well as industrial processes. While in metrology water vapor concentrations can be defined, generated, and measured with relative uncertainties in the single-percent range, field-deployable airborne instruments deviate by up to 10-20 % even under quasi-static laboratory conditions. The novel SEALDH-II hygrometer, a calibration-free, tuneable diode laser spectrometer, bridges this gap by implementing a new holistic concept to achieve higher accuracy levels in the field. We present in this paper the absolute validation of SEALDH-II at a traceable humidity generator during 23 days of permanent operation at 15 different H2O mole fraction levels between 5 and 1200 ppmv. At each mole fraction level, we studied the pressure dependence at six different gas pressures between 65 and 950 hPa. Further, we describe the setup for this metrological validation, the challenges to overcome when assessing water vapor measurements at a high accuracy level, and the comparison results. With this validation, SEALDH-II is the first airborne, metrologically validated humidity transfer standard, linking several scientific airborne and laboratory measurement campaigns to the international metrological water vapor scale.

  5. The reaction H + C4H2 - Absolute rate constant measurement and implication for atmospheric modeling of Titan

    NASA Technical Reports Server (NTRS)

    Nava, D. F.; Mitchell, M. B.; Stief, L. J.

    1986-01-01

    The absolute rate constant for the reaction H + C4H2 has been measured over the temperature (T) interval 210-423 K, using the technique of flash photolysis-resonance fluorescence. At each of the five temperatures employed, the results were independent of variations in C4H2 concentration, total pressure of Ar or N2, and flash intensity (i.e., the initial H concentration). The rate constant, k, was found to be equal to 1.39 x 10 to the -10th exp (-1184/T) cu cm/s, with an error of one standard deviation. The Arrhenius parameters at the high pressure limit determined here for the H + C4H2 reaction are consistent with those for the corresponding reactions of H with C2H2 and C3H4. Implications of the kinetic carbon chemistry results, particularly those at low temperature, are considered for models of the atmospheric carbon chemistry of Titan. The rate of this reaction, relative to that of the analogous, but slower, reaction of H + C2H2, appears to make H + C4H2 a very feasible reaction pathway for effective conversion of H atoms to molecular hydrogen in the stratosphere of Titan.
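The reported Arrhenius fit can be evaluated directly; the prefactor and activation temperature below are taken verbatim from the abstract, and the sample temperatures span the measured 210-423 K interval:

```python
import math

def k_H_C4H2(temperature_k):
    """Arrhenius fit reported for H + C4H2:
    k = 1.39e-10 * exp(-1184 / T)  [cm^3 molecule^-1 s^-1]."""
    return 1.39e-10 * math.exp(-1184.0 / temperature_k)

# Rate constants at the ends of the measured range and near room temperature
k_cold = k_H_C4H2(210.0)   # Titan-relevant low temperature
k_room = k_H_C4H2(298.0)   # ~2.6e-12 cm^3/s
k_hot = k_H_C4H2(423.0)
```

The strong falloff toward 210 K is exactly why the low-temperature behavior matters for models of Titan's stratosphere, where the reaction still outpaces the analogous H + C2H2 channel.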

  6. What weather variables are important in predicting heat-related mortality? A new application of statistical learning methods

    PubMed Central

    Zhang, Kai; Li, Yun; Schwartz, Joel D.; O'Neill, Marie S.

    2014-01-01

    Hot weather increases risk of mortality. Previous studies used different sets of weather variables to characterize heat stress, resulting in variation in heat-mortality associations depending on the metric used. We employed a statistical learning method – random forests – to examine which of various weather variables had the greatest impact on heat-related mortality. We compiled a summertime daily weather and mortality counts dataset from four U.S. cities (Chicago, IL; Detroit, MI; Philadelphia, PA; and Phoenix, AZ) from 1998 to 2006. A variety of weather variables were ranked by their ability to predict deviation from typical daily all-cause and cause-specific death counts. Ranks of weather variables varied with city and health outcome. Apparent temperature appeared to be the most important predictor of all-cause heat-related mortality. Absolute humidity was, on average, most frequently selected as one of the top variables for all-cause mortality and seven cause-specific mortality categories. Our analysis affirms that apparent temperature is a reasonable variable for activating heat alerts and warnings, which are commonly based on predictions of total mortality in the next few days. Additionally, absolute humidity should be included in future heat-health studies. Finally, random forests can be used to guide the choice of weather variables in heat epidemiology studies. PMID:24834832
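Variable ranking with a random forest can be sketched with scikit-learn's impurity-based feature importances. The data below are synthetic, constructed so that apparent temperature dominates the outcome; this only illustrates the ranking mechanism, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical daily predictors: apparent temperature, absolute humidity, wind
app_temp = rng.normal(30.0, 5.0, n)
abs_hum = rng.normal(15.0, 3.0, n)
wind = rng.normal(3.0, 1.0, n)
# Synthetic deviation from typical death counts, driven mostly by app_temp
deaths = 2.0 * app_temp + 0.5 * abs_hum + rng.normal(0.0, 1.0, n)

X = np.column_stack([app_temp, abs_hum, wind])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, deaths)

# Rank predictors by impurity-based importance, highest first
names = ["app_temp", "abs_hum", "wind"]
ranking = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
```

In the study this ranking was repeated per city and per mortality outcome, which is why the top variable could differ across settings.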

  7. Simultaneous quantification of acetanilide herbicides and their oxanilic and sulfonic acid metabolites in natural waters.

    PubMed

    Heberle, S A; Aga, D S; Hany, R; Müller, S R

    2000-02-15

    This paper describes a procedure for simultaneous enrichment, separation, and quantification of acetanilide herbicides and their major ionic oxanilic acid (OXA) and ethanesulfonic acid (ESA) metabolites in groundwater and surface water using Carbopack B as a solid-phase extraction (SPE) material. The analytes adsorbed on Carbopack B were eluted selectively from the solid phase in three fractions containing the parent compounds (PCs), their OXA metabolites, and their ESA metabolites, respectively. The complete separation of the three compound classes allowed the analysis of the neutral PCs (acetochlor, alachlor, and metolachlor) and their methylated OXA metabolites by gas chromatography/mass spectrometry. The ESA compounds were analyzed by high-performance liquid chromatography with UV detection. The use of Carbopack B resulted in good recoveries of the polar metabolites even from large sample volumes (1 L). Absolute recoveries from spiked surface and groundwater samples ranged between 76 and 100% for the PCs, between 41 and 91% for the OXAs, and between 47 and 96% for the ESAs. The maximum standard deviation of the absolute recoveries was 12%. The method detection limits are between 1 and 8 ng/L for the PCs, between 1 and 7 ng/L for the OXAs, and between 10 and 90 ng/L for the ESAs.

  8. Inter-comparison of precipitable water among reanalyses and its effect on downscaling in the tropics

    NASA Astrophysics Data System (ADS)

    Takahashi, H. G.; Fujita, M.; Hara, M.

    2012-12-01

    This paper compared precipitable water (PW) among four major reanalyses. In addition, we investigated the effect of the boundary conditions on downscaling in the tropics, using a regional climate model. The spatial pattern of PW in the reanalyses agreed closely with observations. However, the absolute amounts of PW in some reanalyses were very small compared to observations. The discrepancies in the 12-year mean PW in July over the Southeast Asian monsoon region exceeded the inter-annual standard deviation of PW. There was also a discrepancy in tropical PW throughout the year, an indication that the problem is not regional but global. Downscaling experiments forced by the four different reanalyses were conducted. Differences in the atmospheric circulation, including monsoon westerlies and various disturbances, were very small among the reanalyses. However, simulated precipitation was only 60 % of observed precipitation, although the dry bias in the boundary conditions was only 6 %. This result indicates that dry bias has large effects on precipitation in downscaling over the tropics. It suggests that a regional climate downscaled from ensemble-mean boundary conditions is quite different from the ensemble mean of the regional climates downscaled separately from the boundary conditions of each ensemble member. Downscaled models can provide realistic simulations of regional tropical climates only if the boundary conditions include realistic absolute amounts of PW; the use of such boundary conditions in downscaling in the tropics is imperative at the present time. This work was partly supported by the Global Environment Research Fund (RFa-1101) of the Ministry of the Environment, Japan.

  9. SU-E-QI-21: Iodinated Contrast Agent Time Course In Human Brain Metastasis: A Study For Stereotactic Synchrotron Radiotherapy Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obeid, L; Esteve, F; Adam, J

    2014-06-15

    Purpose: Synchrotron stereotactic radiotherapy (SSRT) is an innovative treatment combining the selective accumulation of heavy elements in tumors with stereotactic irradiations using monochromatic medium-energy x-rays from a synchrotron source. Phase I/II clinical trials on brain metastasis are underway using venous infusion of iodinated contrast agents. The radiation dose enhancement depends on the amount of iodine in the tumor and its time course. In the present study, the reproducibility of iodine concentrations between the CT planning scan day (Day 0) and the treatment day (Day 10) was assessed in order to predict dose errors. Methods: For each of days 0 and 10, three patients received a biphasic intravenous injection of iodinated contrast agent (40 ml, 4 ml/s, followed by 160 ml, 0.5 ml/s) in order to ensure stable intra-tumoral amounts of iodine during the treatment. Two volumetric CT scans (before and after iodine injection) and a multi-slice dynamic CT of the brain were performed using conventional radiotherapy CT (Day 0) or quantitative synchrotron radiation CT (Day 10). A 3D rigid registration was processed between images. The absolute and relative differences of absolute iodine concentrations and their corresponding dose errors were evaluated in the GTV and PTV used for treatment planning. Results: The differences in iodine concentrations remained within the standard deviation limits. The 3D absolute differences followed a normal distribution centered at zero mg/ml with a variance (∼1 mg/ml) which is related to the image noise. Conclusion: The results suggest that dose errors depend only on the image noise. This study shows that stable amounts of iodine are achievable in brain metastasis for SSRT treatment over a 10-day interval.

  10. Transit dosimetry in IMRT with an a-Si EPID in direct detection configuration

    NASA Astrophysics Data System (ADS)

    Sabet, Mahsheed; Rowshanfarzad, Pejman; Vial, Philip; Menk, Frederick W.; Greer, Peter B.

    2012-08-01

    In this study an amorphous silicon electronic portal imaging device (a-Si EPID) converted to direct detection configuration was investigated as a transit dosimeter for intensity modulated radiation therapy (IMRT). After calibration to dose and correction for a background offset signal, the EPID-measured absolute IMRT transit doses for 29 fields were compared to a MatriXX two-dimensional array of ionization chambers (as reference) using Gamma evaluation (3%, 3 mm). The MatriXX was first evaluated as reference for transit dosimetry. The accuracy of EPID measurements was also investigated by comparison of point dose measurements by an ionization chamber on the central axis with slab and anthropomorphic phantoms in a range of simple to complex fields. The uncertainty in ionization chamber measurements in IMRT fields was also investigated by its displacement from the central axis and comparison with the central axis measurements. Comparison of the absolute doses measured by the EPID and MatriXX with slab phantoms in IMRT fields showed that on average 96.4% and 97.5% of points had a Gamma index < 1 in head and neck and prostate fields, respectively. For absolute dose comparisons with anthropomorphic phantoms, the values changed to an average of 93.6%, 93.7% and 94.4% of points with Gamma index < 1 in head and neck, brain and prostate fields, respectively. Point doses measured by the EPID and ionization chamber were within 3% difference for all conditions. The deviations introduced in the response of the ionization chamber in IMRT fields were <1%. The direct EPID performance for transit dosimetry showed that it has the potential to perform accurate, efficient and comprehensive in vivo dosimetry for IMRT.
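
The Gamma evaluation (3%, 3 mm) used in this record combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D sketch in Python follows; it is illustrative only (clinical gamma tools operate on interpolated 2D/3D dose grids, and the function name is hypothetical):

```python
def gamma_pass_rate(ref_doses, eval_doses, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points with gamma index <= 1, for two dose
    profiles sampled on the same 1D grid (global dose normalization)."""
    d_max = max(ref_doses)  # global normalization dose
    passed = 0
    for i, d_ref in enumerate(ref_doses):
        gamma_sq_min = float("inf")
        for j, d_eval in enumerate(eval_doses):
            dd = (d_eval - d_ref) / (dose_tol * d_max)   # dose-difference term
            dr = (j - i) * spacing_mm / dist_tol_mm      # distance-to-agreement term
            gamma_sq_min = min(gamma_sq_min, dd * dd + dr * dr)
        if gamma_sq_min <= 1.0:
            passed += 1
    return passed / len(ref_doses)

# identical profiles agree at every point
print(gamma_pass_rate([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], spacing_mm=1.0))  # 1.0
```

A point passes if any nearby evaluated point is close enough in both dose and position, which is why gamma is more forgiving than a pure dose-difference map in steep gradients.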

  11. Improved venous suppression on renal MR angiography with recessed elliptical centric ordering of K-space.

    PubMed

    Ho, Bernard; Chao, Minh; Zhang, Hong Lei; Watts, Richard; Prince, Martin R

    2003-01-01

    To evaluate recessed elliptical centric ordering of k-space in renal magnetic resonance (MR) angiography. All imaging was performed on the same 1.5 T MR imaging system (GE Signa CVi) using the body coil for signal transmission and a phased array coil for reception. Gd, 30 ml, was injected manually at 2 ml/sec timed with automatic triggering (SmartPrep). In thirty patients using standard elliptical centric ordering, the scanner paused 8 seconds between detection of the leading edge of the Gd bolus and initiation of scanning beginning with the center of k-space. For the recessed-elliptical centric ordering in 20 consecutive patients, this delay was reduced to 4 seconds but the absolute center of k-space was recessed in by 4 seconds such that in all patients the absolute center of k-space was acquired 8 seconds after detecting the leading edge of the bolus. On the arterial phase images signal-to-noise ratio (SNR) was measured in the aorta, each renal artery and vein and contrast-to-noise ratio (CNR) was measured relative to subcutaneous fat. The standard deviation of signal outside the patient was considered to be "noise" for calculation of SNR and CNR. Incidence of ringing artifact in the aorta and renal veins was noted. Aorta SNR and CNR were significantly higher with the recessed technique (p = 0.02) and the ratio of renal artery signal to renal vein signal was higher with the recessed technique, 4 ± 2, compared to standard elliptical centric, 3 ± 2 (p = 0.03). Ringing artifact was also reduced with the recessed technique in both the aorta and renal veins. Gadolinium-enhanced renal MR angiography is improved by recessing the absolute center of k-space.
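
The SNR/CNR convention in this record, with "noise" taken as the standard deviation of signal outside the patient, reduces to a few lines. The sketch below is a hypothetical helper, not the authors' code:

```python
import statistics

def snr_cnr(signal_roi, fat_roi, background_roi):
    """SNR and CNR as defined above: noise is the standard deviation of
    pixel values in a background ROI outside the patient; CNR is the
    signal contrast relative to subcutaneous fat."""
    noise = statistics.stdev(background_roi)
    s = statistics.mean(signal_roi)
    f = statistics.mean(fat_roi)
    return s / noise, (s - f) / noise

# toy pixel values: background std = 2, signal mean = 10, fat mean = 4
print(snr_cnr([10, 10], [4, 4], [0, 2, 4]))  # (5.0, 3.0)
```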

  12. Wearable Vector Electrical Bioimpedance System to Assess Knee Joint Health

    PubMed Central

    Hersek, Sinan; Töreyin, Hakan; Teague, Caitlin N.; Millard-Stafford, Mindy L.; Jeong, Hyeon-Ki; Bavare, Miheer M.; Wolkoff, Paul; Sawka, Michael N.; Inan, Omer T.

    2017-01-01

    Objective We designed and validated a portable electrical bioimpedance (EBI) system to quantify knee joint health. Methods Five separate experiments were performed to demonstrate the: (1) ability of the EBI system to assess knee injury and recovery; (2) inter-day variability of knee EBI measurements; (3) sensitivity of the system to small changes in interstitial fluid volume; (4) reducing the error of EBI measurements using acceleration signals; (5) use of the system with dry electrodes integrated to a wearable knee wrap. Results (1) The absolute difference in resistance (R) and reactance (X) from the left to the right knee was able to distinguish injured and healthy knees (p<0.05); the absolute difference in R decreased significantly (p<0.05) in injured subjects following rehabilitation. (2) The average inter-day variability (standard deviation) of the absolute difference in knee R was 2.5 Ω and for X was 1.2 Ω. (3) Local heating/cooling resulted in a significant decrease/increase in knee R (p<0.01). (4) The proposed subject position detection algorithm achieved 97.4% leave-one subject out cross-validated accuracy and 98.2% precision in detecting when the subject is in the correct position to take measurements. (5) Linear regression between the knee R and X measured using the wet electrodes and the designed wearable knee wrap were highly correlated (r2 = 0.8 and 0.9, respectively). Conclusion This work demonstrates the use of wearable EBI measurements in monitoring knee joint health. Significance The proposed wearable system has the potential for assessing knee joint health outside the clinic/lab and help guide rehabilitation. PMID:28026745

  13. Wearable Vector Electrical Bioimpedance System to Assess Knee Joint Health.

    PubMed

    Hersek, Sinan; Toreyin, Hakan; Teague, Caitlin N; Millard-Stafford, Mindy L; Jeong, Hyeon-Ki; Bavare, Miheer M; Wolkoff, Paul; Sawka, Michael N; Inan, Omer T

    2017-10-01

    We designed and validated a portable electrical bioimpedance (EBI) system to quantify knee joint health. Five separate experiments were performed to demonstrate the: 1) ability of the EBI system to assess knee injury and recovery; 2) interday variability of knee EBI measurements; 3) sensitivity of the system to small changes in interstitial fluid volume; 4) reducing the error of EBI measurements using acceleration signals; and 5) use of the system with dry electrodes integrated to a wearable knee wrap. 1) The absolute difference in resistance (R) and reactance (X) from the left to the right knee was able to distinguish injured and healthy knees (p < 0.05); the absolute difference in R decreased significantly (p < 0.05) in injured subjects following rehabilitation. 2) The average interday variability (standard deviation) of the absolute difference in knee R was 2.5 Ω and for X was 1.2 Ω. 3) Local heating/cooling resulted in a significant decrease/increase in knee R (p < 0.01). 4) The proposed subject position detection algorithm achieved 97.4% leave-one subject out cross-validated accuracy and 98.2% precision in detecting when the subject is in the correct position to take measurements. 5) Linear regression between the knee R and X measured using the wet electrodes and the designed wearable knee wrap were highly correlated (R2 = 0.8 and 0.9, respectively). This study demonstrates the use of wearable EBI measurements in monitoring knee joint health. The proposed wearable system has the potential for assessing knee joint health outside the clinic/lab and help guide rehabilitation.

  14. Evaluation of factors affecting CGMS calibration.

    PubMed

    Buckingham, Bruce A; Kollman, Craig; Beck, Roy; Kalajian, Andrea; Fiallo-Scharer, Rosanna; Tansey, Michael J; Fox, Larry A; Wilson, Darrell M; Weinzimer, Stuart A; Ruedy, Katrina J; Tamborlane, William V

    2006-06-01

    The optimal number/timing of calibrations entered into the CGMS (Medtronic MiniMed, Northridge, CA) continuous glucose monitoring system have not been previously described. Fifty subjects with Type 1 diabetes mellitus (10-18 years old) were hospitalized in a clinical research center for approximately 24 h on two separate days. CGMS and OneTouch Ultra meter (LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13%, and 13% when using three, four, five, and seven calibration values, respectively (P < 0.001). Corresponding percentages of CGMS-reference pairs meeting the International Organisation for Standardisation criteria were 66%, 67%, 71%, and 72% (P < 0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9 p.m. and 6 a.m. (median difference, -2 vs. -9 mg/dL, P < 0.001; median RAD, 12% vs. 15%, P = 0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5 to <1.0, 1.0 to <1.5, and ≥1.5 mg/dL/min, median RAD values were 13% versus 14% versus 17% versus 19%, respectively (P = 0.05). Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy.
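
The median relative absolute deviation (RAD) used as the accuracy metric here is straightforward to compute; the following is an illustrative sketch with a hypothetical helper name, not the study's code:

```python
import statistics

def median_rad_percent(sensor, reference):
    """Median relative absolute deviation (RAD) between paired CGMS
    readings and laboratory reference glucose values, in percent."""
    rads = [abs(s - r) / r * 100 for s, r in zip(sensor, reference)]
    return statistics.median(rads)

# one sensor reading 10% high, one 10% low, one exact -> median RAD 10%
print(median_rad_percent([110, 90, 100], [100, 100, 100]))
```

Because the median is robust to a few large excursions, a median RAD of ~13% still admits occasional much larger point errors, which is why the paired ISO-criteria percentages are reported alongside it.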

  15. Evaluation of Factors Affecting CGMS Calibration

    PubMed Central

    2006-01-01

    Background The optimal number/timing of calibrations entered into the Continuous Glucose Monitoring System (“CGMS”; Medtronic MiniMed, Northridge, CA) have not been previously described. Methods Fifty subjects with T1DM (10–18y) were hospitalized in a clinical research center for ~24h on two separate days. CGMS and OneTouch® Ultra® Meter (“Ultra”; LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. Results There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13% and 13% when using 3, 4, 5 and 7 calibration values, respectively (p<0.001). Corresponding percentages of CGMS-reference pairs meeting the ISO criteria were 66%, 67%, 71% and 72% (p<0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9p.m. and 6a.m. (median difference: −2 vs. −9mg/dL, p<0.001; median RAD: 12% vs. 15%, p=0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5-<1.0, 1.0-<1.5 and ≥1.5mg/dL/min, median RAD values were 13% vs. 14% vs. 17% vs. 19%, respectively (p=0.05). Conclusions Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy. PMID:16800753

  16. Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094

    NASA Technical Reports Server (NTRS)

    Minter, R. T. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or he may begin with the assumption that all the data belongs to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data is assigned to cluster 1. The means and standard deviations are calculated for each cluster.
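
The initialization step described above (nearest cluster mean in absolute distance, then per-cluster means and standard deviations) can be sketched as one pass in Python. This is an illustrative reconstruction under the stated description, not the original ISOCLS Fortran:

```python
import statistics

def assign_and_update(points, means):
    """One ISOCLS-style pass (a sketch): assign each multivariate point
    to the nearest cluster mean in absolute (L1) distance, then compute
    each non-empty cluster's mean and standard deviation per dimension."""
    clusters = {i: [] for i in range(len(means))}
    for p in points:
        dists = [sum(abs(a - b) for a, b in zip(p, m)) for m in means]
        clusters[dists.index(min(dists))].append(p)
    stats = {}
    for i, members in clusters.items():
        if not members:
            continue
        dims = list(zip(*members))  # transpose: one tuple per dimension
        stats[i] = ([statistics.mean(d) for d in dims],
                    [statistics.pstdev(d) for d in dims])
    return stats
```

Iterating this assignment/update step (plus the splitting and merging rules ISOCLS adds on top) drives the clusters toward a self-consistent partition.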

  17. Effects of Nonsphericity on the Behavior of Lorenz-Mie Resonances in Scattering Characteristics of Liquid-Cloud Droplets

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2014-01-01

    By using the results of highly accurate T-matrix computations for randomly oriented oblate and prolate spheroids and Chebyshev particles with varying degrees of asphericity, we analyze the effects of a deviation of water-droplet shapes from that of a perfect sphere on the behavior of Lorenz-Mie morphology-dependent resonances of various widths. We demonstrate that the positions and profiles of the resonances can change significantly with increasing asphericity. The absolute degree of asphericity required to suppress a Lorenz-Mie resonance is approximately proportional to the resonance width. Our results imply that numerical averaging of scattering characteristics of real cloud droplets over sizes may rely on a significantly coarser size-parameter resolution than that required for ideal, perfectly spherical particles.

  18. Automatic Seizure Detection in Rats Using Laplacian EEG and Verification with Human Seizure Signals

    PubMed Central

    Feltane, Amal; Boudreaux-Bartels, G. Faye; Besio, Walter

    2012-01-01

    Automated detection of seizures is still a challenging problem. This study presents an approach to detect seizure segments in Laplacian electroencephalography (tEEG) recorded from rats using the tripolar concentric ring electrode (TCRE) configuration. Three features, namely, median absolute deviation, approximate entropy, and maximum singular value were calculated and used as inputs into two different classifiers: support vector machines and adaptive boosting. The relative performance of the extracted features on TCRE tEEG was examined. Results are obtained with an overall accuracy between 84.81 and 96.51%. In addition to using TCRE tEEG data, the seizure detection algorithm was also applied to the recorded EEG signals from Andrzejak et al. database to show the efficiency of the proposed method for seizure detection. PMID:23073989
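
Of the three features named above, the median absolute deviation is the simplest; a minimal sketch (hypothetical helper, not the authors' code) is:

```python
import statistics

def median_absolute_deviation(segment):
    """Median absolute deviation (MAD) of a tEEG segment -- one of the
    three features fed to the SVM and AdaBoost classifiers above. MAD is
    a robust spread measure: the median distance from the median."""
    med = statistics.median(segment)
    return statistics.median([abs(x - med) for x in segment])

# median is 2; absolute deviations are [1, 1, 0, 0, 2, 4, 7] -> MAD 1
print(median_absolute_deviation([1, 1, 2, 2, 4, 6, 9]))  # 1
```

Unlike the standard deviation, MAD is insensitive to a few extreme amplitude spikes, which makes it a natural choice for artifact-prone EEG segments.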

  19. Optimization of cooled shields in insulations

    NASA Technical Reports Server (NTRS)

    Chato, J. C.; Khodadadi, J. M.; Seyed-Yagoobi, J.

    1984-01-01

    A method to optimize the location, temperature, and heat dissipation rate of each cooled shield inside an insulation layer was developed. The method is based on the minimization of the entropy production rate which is proportional to the heat leak across the insulation. It is shown that the maximum number of shields to be used in most practical applications is three. However, cooled shields are useful only at low values of the overall, cold wall to hot wall absolute temperature ratio. The performance of the insulation system is relatively insensitive to deviations from the optimum values of the temperature and location of the cooling shields. Design curves for rapid estimates of the locations and temperatures of cooling shields in various types of insulations, and an equation for calculating the cooling loads for the shields are presented.

  20. Frequency Measurements of Superradiance from the Strontium Clock Transition

    NASA Astrophysics Data System (ADS)

    Norcia, Matthew A.; Cline, Julia R. K.; Muniz, Juan A.; Robinson, John M.; Hutson, Ross B.; Goban, Akihisa; Marti, G. Edward; Ye, Jun; Thompson, James K.

    2018-04-01

    We present the first characterization of the spectral properties of superradiant light emitted from the ultranarrow, 1-mHz-linewidth optical clock transition in an ensemble of cold 87Sr atoms. Such a light source has been proposed as a next-generation active atomic frequency reference, with the potential to enable high-precision optical frequency references to be used outside laboratory environments. By comparing the frequency of our superradiant source to that of a state-of-the-art cavity-stabilized laser and optical lattice clock, we observe a fractional Allan deviation of 6.7(1) × 10^-16 at 1 s of averaging, establish absolute accuracy at the 2-Hz (4 × 10^-15 fractional frequency) level, and demonstrate insensitivity to key environmental perturbations.
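
The Allan deviation quoted above is the standard stability measure for frequency references. A minimal non-overlapping estimator can be sketched as follows (illustrative only; the paper's analysis is more involved, and the overlapping estimator is usually preferred for real data):

```python
import math

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples
    `y` at an averaging factor of m samples: average the data in blocks
    of m, then take sqrt of half the mean squared successive difference."""
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(blocks, blocks[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))
```

For white frequency noise the Allan deviation falls as 1/sqrt(tau), which is why a single value "at 1 s of averaging" together with the noise type characterizes the short-term stability.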

  1. Cosmic microwave background dipole spectrum measured by the COBE FIRAS instrument

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Isaacman, R. B.; Mather, J. C.; Meyer, S. S.; Noerdlinger, P. D.; Shafer, R. A.; Weiss, R.

    1994-01-01

    The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) has determined the dipole spectrum of the cosmic microwave background radiation (CMBR) from 2 to 20/cm. For each frequency the signal is decomposed by fitting to a monopole, a dipole, and a Galactic template for approximately 60% of the sky. The overall dipole spectrum fits the derivative of a Planck function with an amplitude of 3.343 +/- 0.016 mK (95% confidence level), a temperature of 2.714 +/- 0.022 K (95% confidence level), and an rms deviation of 6 x 10(exp -9) ergs/sq cm/s/sr cm limited by detector and cosmic-ray noise. The monopole temperature is consistent with that determined by direct measurement in the accompanying article by Mather et al.

  2. Gibbs Energy Additivity Approaches in Estimation of Dynamic Viscosities of n-Alkane-1-ol

    NASA Astrophysics Data System (ADS)

    Phankosol, S.; Krisnangkura, K.

    2017-09-01

    Alcohols are solvents for organic and inorganic substances, and dynamic viscosity is an important transport property of liquids. In this study, models for estimating the dynamic viscosities of n-alkan-1-ols are correlated through Martin's rule of free-energy additivity. Data available in the literature are used to validate and support the proposed equations. The dynamic viscosity of an n-alkan-1-ol can be easily estimated from its carbon number (nc) and the temperature (T). The bias, average absolute deviation and coefficient of determination (R2) for the n-alkan-1-ols are -0.17%, 1.73% and 0.999, respectively. Viscosities at temperatures outside the range of 288.15 to 363.15 K may also be estimated with this model, though possibly with lower accuracy.
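
The bias and average absolute deviation quoted as accuracy figures are both summaries of the per-point relative errors; a minimal sketch (hypothetical helper, not the study's code) is:

```python
def bias_and_aad_percent(predicted, measured):
    """Percent bias (signed mean relative error) and average absolute
    deviation (AAD, mean unsigned relative error) between estimated
    and reference values such as literature viscosities."""
    rel = [(p - m) / m * 100 for p, m in zip(predicted, measured)]
    bias = sum(rel) / len(rel)
    aad = sum(abs(r) for r in rel) / len(rel)
    return bias, aad
```

A near-zero bias with a larger AAD (as reported here: -0.17% vs. 1.73%) means the model's errors are appreciable point-to-point but largely cancel on average.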

  3. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  4. Multicentre dose audit for clinical trials of radiation therapy in Asia.

    PubMed

    Mizuno, Hideyuki; Fukuda, Shigekazu; Fukumura, Akifumi; Nakamura, Yuzuru-Kutsutani; Jianping, Cao; Cho, Chul-Koo; Supriana, Nana; Dung, To Anh; Calaguas, Miriam Joy; Devi, C R Beena; Chansilpa, Yaowalak; Banu, Parvin Akhter; Riaz, Masooma; Esentayeva, Surya; Kato, Shingo; Karasawa, Kumiko; Tsujii, Hirohiko

    2017-05-01

    A dose audit of 16 facilities in 11 countries has been performed within the framework of the Forum for Nuclear Cooperation in Asia (FNCA) quality assurance program. The quality of radiation dosimetry varies because of the large variation in radiation therapy among the participating countries. One of the most important aspects of international multicentre clinical trials is uniformity of absolute dose between centres. The National Institute of Radiological Sciences (NIRS) in Japan has conducted a dose audit of participating countries since 2006 by using radiophotoluminescent glass dosimeters (RGDs). RGDs have been successfully applied to a domestic postal dose audit in Japan. The authors used the same audit system to perform a dose audit of the FNCA countries. The average and standard deviation of the relative deviation between the measured and intended dose among 46 beams was 0.4% and 1.5% (k = 1), respectively. This is an excellent level of uniformity for the multicountry data. However, of the 46 beams measured, a single beam exceeded the permitted tolerance level of ±5%. We investigated the cause for this and solved the problem. This event highlights the importance of external audits in radiation therapy. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
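
The audit statistics described above (mean and k=1 standard deviation of the relative deviation between measured and intended dose, plus a ±5% tolerance check per beam) can be sketched as follows; the helper name and toy numbers are illustrative, not the NIRS procedure:

```python
import statistics

def audit_beams(measured, intended, tolerance_pct=5.0):
    """Percent deviation of each beam's measured dose from the intended
    dose; returns (mean, standard deviation (k=1), indices of beams
    exceeding the +/- tolerance), as in the FNCA-style audit above."""
    dev = [(m - i) / i * 100 for m, i in zip(measured, intended)]
    outliers = [k for k, d in enumerate(dev) if abs(d) > tolerance_pct]
    return statistics.mean(dev), statistics.stdev(dev), outliers

# third beam is 7% high and would be flagged for investigation
mean, sd, flagged = audit_beams([101.0, 99.0, 107.0], [100.0, 100.0, 100.0])
```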

  5. Distribution of the near-earth objects

    NASA Astrophysics Data System (ADS)

    Emel'Yanenko, V. V.; Naroenkov, S. A.; Shustov, B. M.

    2011-12-01

    This paper analyzes the distribution of the orbits of near-Earth minor bodies from the data on more than 7500 objects. The distribution of large near-Earth objects (NEOs) with absolute magnitudes of H < 18 is generally consistent with the earlier predictions (Bottke et al., 2002; Stuart, 2003), although we have revealed a previously undetected maximum in the distribution of perihelion distances q near q = 0.5 AU. The study of the orbital distribution for the entire sample of all detected objects has found new significant features. In particular, the distribution of perihelion longitudes seriously deviates from a homogeneous pattern; its variations are roughly 40% of its mean value. These deviations cannot be stochastic, which is confirmed by the Kolmogorov-Smirnov test with a more than 0.9999 probability. These features can be explained by the dynamic behavior of the minor bodies related to secular resonances with Jupiter. For the objects with H < 18, the variations in the perihelion longitude distribution are not so apparent. By extrapolating the orbital characteristics of the NEOs with H < 18, we have obtained longitudinal, latitudinal, and radial distributions of potentially hazardous objects in a heliocentric ecliptic coordinate frame. The differences in the orbital distributions of objects of different size appear not to be a consequence of observational selection, but could indicate different sources of the NEOs.

  6. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.

  7. A novel strategy for forensic age prediction by DNA methylation and support vector regression model

    PubMed Central

    Xu, Cheng; Qu, Hongzhu; Wang, Guangyu; Xie, Bingbing; Shi, Yi; Yang, Yaran; Zhao, Zhao; Hu, Lan; Fang, Xiangdong; Yan, Jiangwei; Feng, Lei

    2015-01-01

    High deviations resulting from the prediction model, gender and population differences have limited the application of DNA methylation markers to age estimation. Here we identified 2,957 novel age-associated DNA methylation sites (P < 0.01 and R2 > 0.5) in blood of eight pairs of Chinese Han female monozygotic twins. Among them, nine novel sites (false discovery rate < 0.01), along with three other reported sites, were further validated in 49 unrelated female volunteers aged 20–80 years by Sequenom Massarray. A total of 95 CpGs were covered in the PCR products and 11 of them were used to build the age prediction models. After comparing four different models, including multivariate linear regression, multivariate nonlinear regression, back-propagation neural network and support vector regression (SVR), SVR was identified as the most robust model, with the least mean absolute deviation from real chronological age (2.8 years) and an average accuracy of 4.7 years predicted by only six of the 11 loci, as well as a smaller cross-validated error compared with the linear regression model. Our novel strategy provides an accurate measurement that is highly useful in estimating individual age in forensic practice as well as in tracking the aging process in other related applications. PMID:26635134

  8. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    PubMed

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods, other than classical statistics that are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
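
One standard transformation for opening closed compositional data before computing means, standard deviations or correlations is the centred log-ratio (clr). The sketch below is a generic illustration of that transform (assuming strictly positive parts), not the statistical pipeline used in the study:

```python
import math

def clr(composition):
    """Centred log-ratio transform: divide each part by the geometric
    mean of all parts and take the log. Maps a closed composition to
    unconstrained coordinates that always sum to zero."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

coords = clr([10.0, 30.0, 60.0])  # clr coordinates sum to ~0
```

Working in clr (or another log-ratio) space avoids the spurious negative correlations and impossible negative proportions the abstract warns about, because the sum-to-100 constraint is removed before classical statistics are applied.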

  9. Cardiopulmonary fitness and muscle strength in patients with osteogenesis imperfecta type I.

    PubMed

    Takken, Tim; Terlingen, Heike C; Helders, Paul J M; Pruijs, Hans; Van der Ent, Cornelis K; Engelbert, Raoul H H

    2004-12-01

    To evaluate cardiopulmonary function, muscle strength, and cardiopulmonary fitness (VO2peak) in patients with osteogenesis imperfecta (OI). In 17 patients with OI type I (mean age 13.3 +/- 3.9 years) cardiopulmonary function was assessed at rest using spirometry, plethysmography, electrocardiography, and echocardiography. Exercise capacity was measured using a maximal exercise test on a bicycle ergometer and an expired gas analysis system. Muscle strength in shoulder abductors, hip flexors, ankle dorsal flexor, and grip strength were measured. All results were compared with reference values. Cardiopulmonary function at rest was within normal ranges, but when it was compared with normal height for age and sex, vital capacities were reduced. Mean absolute and relative VO2peak were respectively -1.17 (+/- 0.67) and -1.41 (+/- 1.52) standard deviations lower compared with reference values (P < .01). Muscle strength also was significantly reduced in patients with OI, ranging from -1.24 +/- 1.40 to -2.88 +/- 2.67 standard deviations lower compared with reference values. In patients with OI type I, no pulmonary or cardiac abnormalities at rest were found. The exercise tolerance and muscle strength were significantly reduced in patients with OI, which might account for their increased levels of fatigue during activities of daily living.

  10. A comparison of performance of several artificial intelligence methods for predicting the dynamic viscosity of TiO2/SAE 50 nano-lubricant

    NASA Astrophysics Data System (ADS)

    Hemmat Esfe, Mohammad; Tatar, Afshin; Ahangar, Mohammad Reza Hassani; Rostamian, Hossein

    2018-02-01

    Since conventional thermal fluids such as water, oil, and ethylene glycol have poor thermal properties, tiny solid particles are added to these fluids to improve their heat transfer. As viscosity determines the rheological behavior of a fluid, studying the parameters affecting viscosity is crucial. Since the experimental measurement of viscosity is expensive and time-consuming, predicting this parameter is an attractive alternative. In this work, three artificial intelligence methods, namely Genetic Algorithm-Radial Basis Function Neural Networks (GA-RBF), Least Square Support Vector Machine (LS-SVM) and Gene Expression Programming (GEP), were applied to predict the viscosity of TiO2/SAE 50 nano-lubricant with non-Newtonian power-law behavior using experimental data. The correlation factor (R2), Average Absolute Relative Deviation (AARD), Root Mean Square Error (RMSE), and Margin of Deviation were employed to investigate the accuracy of the proposed models. RMSE values of 0.58, 1.28, and 6.59 and R2 values of 0.99998, 0.99991, and 0.99777 reveal the accuracy of the proposed models for the respective GA-RBF, CSA-LSSVM, and GEP methods. Among the developed models, the GA-RBF shows the best accuracy.
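
The accuracy measures used to compare the models above (R2, AARD, RMSE) are all simple functions of the paired predicted and measured values; a generic sketch (hypothetical helper, not the study's code) is:

```python
import math

def fit_metrics(predicted, measured):
    """R2, average absolute relative deviation (AARD, %) and RMSE
    between model predictions and measurements -- the metrics used to
    rank the viscosity models in the record above."""
    n = len(measured)
    mean_m = sum(measured) / n
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    r2 = 1 - ss_res / ss_tot
    aard = sum(abs(p - m) / m for p, m in zip(predicted, measured)) / n * 100
    rmse = math.sqrt(ss_res / n)
    return r2, aard, rmse
```

Note that AARD weights errors relative to each measured value while RMSE weights them absolutely, so the two can rank models differently when viscosities span a wide range.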

  11. Time reversal for localization of sources of infrasound signals in a windy stratified atmosphere.

    PubMed

    Lonzaga, Joel B

    2016-06-01

    Time reversal is used for localizing sources of recorded infrasound signals propagating in a windy, stratified atmosphere. Due to the convective effect of the background flow, the back-azimuths of the recorded signals can be substantially different from the source back-azimuth, posing a significant difficulty in source localization. The back-propagated signals are characterized by negative group velocities from which the source back-azimuth and source-to-receiver (STR) distance can be estimated using the apparent back-azimuths and trace velocities of the signals. The method is applied to several distinct infrasound arrivals recorded by two arrays in the Netherlands. The infrasound signals were generated by the Buncefield oil depot explosion in the U.K. in December 2005. Analyses show that the method can be used to substantially enhance estimates of the source back-azimuth and the STR distance. In one of the arrays, for instance, the deviations between the measured back-azimuths of the signals and the known source back-azimuth are quite large (-1° to -7°), whereas the deviations between the predicted and known source back-azimuths are small, with an absolute mean value of <1°. Furthermore, the predicted STR distance is off by only <5% of the known STR distance.

  12. Apparent diffusion coefficient is highly reproducible on preclinical imaging systems: Evidence from a seven-center multivendor study.

    PubMed

    Doblas, Sabrina; Almeida, Gilberto S; Blé, François-Xavier; Garteiser, Philippe; Hoff, Benjamin A; McIntyre, Dominick J O; Wachsmuth, Lydia; Chenevert, Thomas L; Faber, Cornelius; Griffiths, John R; Jacobs, Andreas H; Morris, David M; O'Connor, James P B; Robinson, Simon P; Van Beers, Bernard E; Waterton, John C

    2015-12-01

    To evaluate between-site agreement of apparent diffusion coefficient (ADC) measurements in preclinical magnetic resonance imaging (MRI) systems. A miniaturized thermally stable ice-water phantom was devised. ADC (mean and interquartile range) was measured over several days, on 4.7T, 7T, and 9.4T Bruker, Agilent, and Magnex small-animal MRI systems using a common protocol across seven sites. Day-to-day repeatability was expressed as percent variation of mean ADC between acquisitions. Cross-site reproducibility was expressed as 1.96 × standard deviation of percent deviation of ADC values. ADC measurements were equivalent across all seven sites with a cross-site ADC reproducibility of 6.3%. Mean day-to-day repeatability of ADC measurements was 2.3%, and no site was identified as presenting different measurements than others (analysis of variance [ANOVA] P = 0.02, post-hoc test n.s.). Between-slice ADC variability was negligible and similar between sites (P = 0.15). Mean within-region-of-interest ADC variability was 5.5%, with one site presenting a significantly greater variation than the others (P = 0.0013). Absolute ADC values in preclinical studies are comparable between sites and equipment, provided standardized protocols are employed. © 2015 Wiley Periodicals, Inc.
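    The cross-site reproducibility figure quoted above is defined as 1.96 × the standard deviation of the percent deviations of the per-site ADC values; a minimal sketch (the seven per-site means below are hypothetical, not the study's measurements):

    ```python
    import statistics

    def cross_site_reproducibility(site_means):
        # 1.96 x standard deviation of each site's percent deviation from
        # the grand mean, following the definition given in the abstract
        grand = statistics.mean(site_means)
        pct_devs = [100.0 * (m - grand) / grand for m in site_means]
        return 1.96 * statistics.stdev(pct_devs)

    # Hypothetical per-site mean ADC values (x10^-3 mm^2/s) for seven sites
    site_means = [1.099, 1.105, 1.112, 1.095, 1.108, 1.101, 1.110]
    print(cross_site_reproducibility(site_means))
    ```

    Identical site means would give a reproducibility of exactly zero; larger spread between sites widens the figure proportionally.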

  13. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.

  14. SU-F-T-472: Validation of Absolute Dose Measurements for MR-IGRT With and Without Magnetic Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, O; Li, H; Goddu, S

    Purpose: To validate absolute dose measurements for an MR-IGRT system without the presence of the magnetic field. Methods: The standard method (AAPM's TG-51) of absolute dose measurement with ionization chambers was tested with and without the presence of the magnetic field for a clinical 0.32-T Co-60 MR-IGRT system. Two ionization chambers were used: the Standard Imaging (Madison, WI) A18 (0.123 cc) and a PTW (Freiburg, Germany) Farmer chamber. A previously reported Monte Carlo simulation suggested a difference on the order of 0.5% for dose measured with and without the presence of the magnetic field, but testing this was not possible until an engineering solution was found that allowed the radiation system to be used without the nominal magnetic field. A previously identified effect of orientation in the magnetic field was also tested by placing the chamber either parallel or perpendicular to the field and irradiating from two opposing angles (90 and 270 degrees). Finally, the Imaging and Radiation Oncology Core provided OSLD detectors for five irradiations each with and without the field: with two heads at both 0 and 90 degrees, and one head at 90 degrees only, as it does not reach 0 (IEC convention). Results: For the TG-51 comparison, expected dose was obtained by decaying values measured at the time of source installation. The average measured difference was 0.4%±0.12% for the A18 chamber and 0.06%±0.15% for the Farmer chamber. There was minimal (0.3%) orientation dependence without the magnetic field for the A18 chamber, while previous measurements with the magnetic field had shown a deviation of 3.2% with the chamber perpendicular to the magnetic field. Results reported by IROC for the OSLDs with and without the field had a maximum difference of 2%. Conclusion: Accurate absolute dosimetry was verified by measurement under the same conditions with and without the magnetic field for both ionization chambers and independently verifiable OSLDs.

  15. Some observations aimed at improving the success rate of paleointensity experiments for lava flows (Invited)

    NASA Astrophysics Data System (ADS)

    Valet, J. M.; Herrero-Bervera, E.

    2009-12-01

    Emile Thellier did not believe in the possibility of obtaining reliable determinations of absolute paleointensity from lava flows and maintained that only archeomagnetic material was suitable. Many protocols have been proposed over the past fifty years to show that this assertion was not really justified. We have performed paleointensity studies on contemporaneous flows in Hawaii and in the Canaries. To those we have added determinations obtained from relatively recent flows at Santorini. The Hawaiian flows, which are dominated by pure magnetite with a narrow distribution of grain sizes, provide by far the most accurate determinations of paleointensity. Such characteristics are simply derived from the spectrum of unblocking temperatures. Thus the evolution of the TRM upon thermal demagnetization appears to be a very important feature for successful paleointensity experiments. The existence of a sharp decrease of the magnetization before reaching the unique Curie temperature of the rock is conclusively a very appropriate condition for obtaining suitable field determinations. Of course, these characteristics are only valid if the pTRM checks do not deviate from the original TRM. In this respect, we have noticed that deviations larger than 5% are frequently associated with significant deviations from the expected field intensity. The results from the Canary Islands are also consistent with this observation despite the presence of a larger amount of titanium. Overall, these conclusions make sense in light of Thellier's statement regarding the success of archeomagnetic material. Indeed, the features outlined above are typical of archeological materials, which have been largely oxidized during cooling and are dominated by a single magnetic mineral with a narrow distribution of grain sizes.

  16. The gap technique does not rotate the femur parallel to the epicondylar axis.

    PubMed

    Matziolis, Georg; Boenicke, Hinrich; Pfiel, Sascha; Wassilew, Georgi; Perka, Carsten

    2011-02-01

    In the analysis of painful total knee replacements, the surgical epicondylar axis (SEA) has become established as a standard in the diagnosis of femoral component rotation. It remains unclear whether the gap technique widely used to determine femoral rotation, when applied correctly, results in a rotation parallel to the SEA. In this prospective study, 69 patients (69 joints) were included who received a navigated bicondylar surface replacement due to primary arthritis of the knee joint. In the 67 cases in which perfect soft-tissue balancing of the extension gap (<1° asymmetry) was achieved, the flexion gap and the rotation of the femoral component necessary for its symmetry were determined and documented. The femoral component was implanted additionally taking into account the posterior condylar axis and Whiteside's line. Postoperatively, the rotation of the femoral component relative to the SEA was determined, and this was used to calculate the angle between a femur implanted according to the gap technique and the SEA. If the gap technique had been used consistently, it would have resulted in a deviation of the femoral components of -0.6° ± 2.9° (range, -7.4° to 5.9°) from the SEA. The absolute deviation would have been 2.4° ± 1.8°, with a range between 0.2° and 7.4°. Even if the extension gap is perfectly balanced, the gap technique does not lead to a parallel alignment of the femoral component to the SEA. Since the clinical results of this technique are equivalent to those of the femur-first technique in the literature, an evaluation of this deviation as a malalignment must be considered critically.
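    The gap between the signed mean deviation (-0.6°) and the absolute deviation (2.4°) above reflects the cancellation of opposite-signed errors; a toy illustration with invented per-knee angles:

    ```python
    def mean_deviation(angles):
        # Signed mean: internal and external rotation errors cancel
        return sum(angles) / len(angles)

    def mean_absolute_deviation(angles):
        # Mean of magnitudes: errors accumulate regardless of sign
        return sum(abs(a) for a in angles) / len(angles)

    # Invented rotational deviations from the SEA (degrees), one per knee
    devs = [-3.0, 2.5, -0.5, 1.0, -2.0]
    print(mean_deviation(devs))           # near zero despite large errors
    print(mean_absolute_deviation(devs))  # reveals the typical error size
    ```

    A signed mean near zero therefore says only that the technique is unbiased on average, not that individual components are well aligned.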

  17. Anterior capsulotomy with an ultrashort-pulse laser.

    PubMed

    Tackman, Ramon Naranjo; Kuri, Jorge Villar; Nichamin, Louis D Skip; Edwards, Keith

    2011-05-01

    To assess the precision of laser anterior capsulotomy compared with that of manual continuous curvilinear capsulorhexis (CCC). Asociación Para Evitar La Ceguera en México IAP, Hospital Dr. Luis Sánchez Bulnes, Mexico City, Mexico. Nonrandomized single-center clinical trial. In patients presenting for cataract surgery, the LensAR Laser System was used to create a laser anterior capsulotomy of the surgeon's desired size. Capsule buttons were retrieved and measured and then compared with buttons retrieved from eyes having a manually torn CCC. Deviation from the intended diameter and the regularity of shape were assessed. When removing the capsule buttons at the start of surgery, the surgeon rated the ease of removal on a scale of 1 to 10 (1 = required manual capsulorhexis around the whole diameter; 10 = button free floating or required no manual detachment from remaining capsule during removal). The mean deviation from the intended diameter was 0.16 ± 0.17 (SD) mm for laser anterior capsulotomy and 0.42 ± 0.54 mm for CCC (P=.03). The mean absolute deviation from the intended diameter was 0.20 ± 0.12 mm and 0.49 ± 0.47 mm, respectively (P=.003). The mean of the average squared residuals was 0.01 ± 0.03 and 0.02 ± 0.04, respectively (P=.09). The median rating of the ease of removal was 9 (range 5 to 10). Laser anterior capsulotomy created a more precise capsule opening than CCC, and the buttons created by the laser procedure were easy to remove at the beginning of cataract surgery. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  18. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image-guided biopsy, therapy planning, and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the probe axis does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid-registration-based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compared registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared with the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).
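    The pooled standard deviations reported above combine per-group spreads by weighting each group's variance by its degrees of freedom; a minimal sketch (the group SDs and sizes below are hypothetical, since the abstract does not give per-machine breakdowns):

    ```python
    def pooled_std(stds, sizes):
        # Pooled standard deviation across groups: each group's variance is
        # weighted by its degrees of freedom (n - 1), then summed and re-rooted
        num = sum((n - 1) * s ** 2 for s, n in zip(stds, sizes))
        den = sum(n - 1 for n in sizes)
        return (num / den) ** 0.5

    # Hypothetical per-machine SDs of the translation parameter (mm), 8 scans each
    print(pooled_std([0.45, 0.55], [8, 8]))
    ```

    With equal group sizes this reduces to the root mean square of the individual SDs; unequal sizes shift the result toward the larger group's spread.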

  19. The Primordial Inflation Explorer (PIXIE)

    NASA Technical Reports Server (NTRS)

    Kogut, Alan; Chluba, Jens; Fixsen, Dale J.; Meyer, Stephan; Spergel, David

    2016-01-01

    The Primordial Inflation Explorer is an Explorer-class mission to open new windows on the early universe through measurements of the polarization and absolute frequency spectrum of the cosmic microwave background. PIXIE will measure the gravitational-wave signature of primordial inflation through its distinctive imprint in linear polarization, and characterize the thermal history of the universe through precision measurements of distortions in the blackbody spectrum. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning over 7 octaves in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded non-imaging optics feed a polarizing Fourier Transform Spectrometer to produce a set of interference fringes, proportional to the difference spectrum between orthogonal linear polarizations from the two input beams. Multiple levels of symmetry and signal modulation combine to reduce systematic errors to negligible levels. PIXIE will map the full sky in Stokes I, Q, and U parameters with angular resolution 2.6 degrees and sensitivity 70 nK per 1-degree square pixel. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. The PIXIE mission complements anticipated ground-based polarization measurements such as CMB-S4, providing a cosmic-variance-limited determination of the large-scale E-mode signal to measure the optical depth, constrain models of reionization, and provide a firm detection of the neutrino mass (the last unknown parameter in the Standard Model of particle physics). In addition, PIXIE will measure the absolute frequency spectrum to characterize deviations from a blackbody with sensitivity 3 orders of magnitude beyond the seminal COBE/FIRAS limits. 
The sky cannot be black at this level; the expected results will constrain physical processes ranging from inflation to the nature of the first stars and the physical conditions within the interstellar medium of the Galaxy. We describe the PIXIE instrument and mission architecture required to measure the CMB to the limits imposed by astrophysical foregrounds.

  20. Paleointensity study of the historical andesitic lava flows: LTD-DHT Shaw and Thellier paleointensities from the Sakurajima 1914 and 1946 lavas in Japan

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Hoshi, H.

    2005-12-01

    Correct determination of absolute paleointensities is essential for investigating the past geomagnetic field. There are two types of methods to obtain paleointensities: the Thellier-type and Shaw-type methods. Many paleomagnetists have so far regarded the former as the more reliable. However, there is increasing evidence that it is sometimes not robust for basaltic lavas, resulting in systematically high paleointensities (e.g. Calvo et al., 2002; Yamamoto et al., 2003). Alternatively, the double heating technique of the Shaw method combined with low temperature demagnetization (LTD-DHT Shaw method; Tsunakawa et al., 1997; Yamamoto et al., 2003), a recently developed paleointensity technique in Japan, can yield reliable answers even from such basaltic samples (e.g. Yamamoto et al., 2003; Mochizuki et al., 2004; Oishi et al., 2005). In the Japanese archipelago, there are not only basaltic lavas but also andesitic lavas, which are important candidates for absolute paleointensity determination in Japan. As a case study, we sampled oriented paleomagnetic cores from three sites of the Sakurajima 1914 (TS01 and TS02) and 1946 (SW01) lavas in Japan. Several rock magnetic experiments revealed that the main magnetic carriers of the present samples are titanomagnetites with Curie temperatures of about 300-550 °C, and that high-temperature oxidation progresses in the order of SW01, TS01, and TS02. The LTD-DHT Shaw and Coe-Thellier experiments were conducted on 72 and 63 specimens, respectively, and gave 64 and 60 successful determinations. When the results are normalized by expected field intensities calculated from IGRF-9 (Macmillan et al., 2003) and grouped into LTD-DHT Shaw and Thellier datasets, their averages and standard deviations (1 sigma) are 0.98+/-0.11 (LTD-DHT Shaw) and 1.13+/-0.13 (Thellier). Considering the standard deviations, both paleointensity methods recovered the correct geomagnetic field. 
However, it is apparent that the LTD-DHT Shaw method has higher reliability than the Thellier method.

  1. Quantitative ionization chamber alignment to a water surface: Theory and simulation.

    PubMed

    Siebers, Jeffrey V; Ververs, James D; Tessier, Frédéric

    2017-07-01

    To examine the response properties of cylindrical cavity ionization chambers (ICs) in the depth-ionization buildup region so as to obtain a robust, chamber-signal-based method for definitive water surface identification, and hence absolute ionization chamber depth localization. An analytical model with simplified physics and geometry is developed to explore the theoretical aspects of ionization chamber response near a phantom water surface. Monte Carlo simulations with full physics and ionization chamber geometry are used to extend the model's findings to realistic ion chambers in realistic beams and to study the effects of IC design parameters on the entrance dose response. Design parameters studied include full and simplified IC designs with varying central electrode thickness, wall thickness, and outer chamber radius. Piecewise continuous fits to the depth-ionization signal gradient are used to quantify potential deviation of the gradient discontinuity from the chamber outer radius. Exponential, power, and hyperbolic sine functional forms are used to model the gradient from zero depth to the depth of the gradient discontinuity. The depth-ionization gradient as a function of depth is maximized and discontinuous when a submerged IC's outer radius coincides with the water surface. We term this depth the gradient chamber alignment point (gCAP). The maximum deviation between the gCAP location and the chamber outer radius is 0.13 mm for a hypothetical 4 mm thick wall, 6.45 mm outer radius chamber using the power function fit; however, the chamber outer radius is within the 95% confidence interval of the gCAP determined by this fit. gCAP dependence on the chamber wall thickness is possible, but not at a clinically relevant level. The depth-ionization gradient has a discontinuity and is maximized when the outer radius of a submerged IC coincides with the water surface. 
This feature can be used to auto-align ICs to the water surface at the time of scanning and/or be applied retrospectively to scan data to quantify absolute IC depth. Utilization of the gCAP should yield accurate and reproducible depth calibration for clinical depth-ionization measurements between setups and between users. © 2017 American Association of Physicists in Medicine.
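    The gCAP criterion, the depth at which the depth-ionization gradient is maximal, can be located numerically from scan data; an illustrative sketch on an invented signal (a crude finite-difference search, not the authors' piecewise functional-fit procedure):

    ```python
    def gcap_depth(depths, signal):
        # Finite-difference gradient of the depth-ionization curve; the gCAP
        # is taken at the interval where the gradient is largest
        grads = [(signal[i + 1] - signal[i]) / (depths[i + 1] - depths[i])
                 for i in range(len(depths) - 1)]
        i_max = max(range(len(grads)), key=lambda i: grads[i])
        # Report the midpoint of the steepest interval
        return 0.5 * (depths[i_max] + depths[i_max + 1])

    # Invented depth-ionization samples: steep buildup, then a leveling-off
    depths = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # mm
    signal = [0.0, 1.0, 3.0, 6.0, 7.0, 7.5]   # arbitrary units
    print(gcap_depth(depths, signal))
    ```

    In practice the functional fits described in the abstract smooth out noise before the maximum-gradient depth is extracted, which this bare finite-difference version does not attempt.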

  2. SU-E-T-188: Commission of World 1st Commercial Compact PBS Proton System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Patel, B; Song, X

    2015-06-15

    Purpose: ProteusONE is the first commercial compact PBS proton system, with an upstream scanning gantry and a C230 cyclotron. We commissioned the XiO and Raystation TPSs simultaneously. This is a summary of beam data collection, modeling, verification, and comparison (without range shifter) for this unique system with both TPSs. Methods: Both Raystation and XiO require the same measurement data: (i) integral depth doses (IDDs) of a single central spot measured in a water tank; (ii) absolute dose calibration measured at 2 cm depth in water with a mono-energetic 10×10 cm2 field, 4 mm spot spacing, and 1 MU per spot; and (iii) beam spot characteristics in air at 0 cm and ±20 cm from ISO. To verify the beam model for both TPSs, the same 15 cube plans were created to simulate different treatment sites, target volumes, and positions. PDDs of each plan were measured using a multi-layer ionization chamber (MLIC), absolute point doses were verified using a PPC05 chamber in a water tank, and patient-specific QA was measured using MatriXX PT, a 2D ion chamber array. Results: All point dose measurements at mid-SOBP were within 2% for both XiO and Raystation. However, deviations of up to 5% were observed in XiO plans at shallow depths, versus within 2% for Raystation plans. 100% of the measured ranges were within 1 mm, with a maximum deviation of 0.5 mm. 20 patient-specific plans were generated and measured in 3 planes (distal, proximal, and mid-SOBP) in Raystation. The average gamma index is 98.7%±3%, with a minimum of 94%. Conclusions: Both TPSs were successfully commissioned and can be safely deployed for clinical use with ProteusONE. Based on our clinical experience with PBS planning, user interface, functionality, and workflow, we prefer Raystation as our main clinical TPS. A gamma index >95% at 3%/3 mm criteria is our institutional action level for patient-specific plan QA.

  3. Density-functional approaches to noncovalent interactions: a comparison of dispersion corrections (DFT-D), exchange-hole dipole moment (XDM) theory, and specialized functionals.

    PubMed

    Burns, Lori A; Vázquez-Mayagoitia, Alvaro; Sumpter, Bobby G; Sherrill, C David

    2011-02-28

    A systematic study of techniques for treating noncovalent interactions within the computationally efficient density functional theory (DFT) framework is presented through comparison to benchmark-quality evaluations of binding strength compiled for molecular complexes of diverse size and nature. In particular, the efficacy of functionals deliberately crafted to encompass long-range forces, a posteriori DFT+dispersion corrections (DFT-D2 and DFT-D3), and exchange-hole dipole moment (XDM) theory is assessed against a large collection (469 energy points) of reference interaction energies at the CCSD(T) level of theory extrapolated to the estimated complete basis set limit. The established S22 [revised in J. Chem. Phys. 132, 144104 (2010)] and JSCH test sets of minimum-energy structures, as well as collections of dispersion-bound (NBC10) and hydrogen-bonded (HBC6) dissociation curves and a pairwise decomposition of a protein-ligand reaction site (HSG), comprise the chemical systems for this work. From evaluations of accuracy, consistency, and efficiency for PBE-D, BP86-D, B97-D, PBE0-D, B3LYP-D, B970-D, M05-2X, M06-2X, ωB97X-D, B2PLYP-D, XYG3, and B3LYP-XDM methodologies, it is concluded that distinct, often contrasting, groups of these elicit the best performance within the accessible double-ζ or robust triple-ζ basis set regimes and among hydrogen-bonded or dispersion-dominated complexes. For overall results, M05-2X, B97-D3, and B970-D2 yield superior values in conjunction with aug-cc-pVDZ, for a mean absolute deviation of 0.41 - 0.49 kcal/mol, and B3LYP-D3, B97-D3, ωB97X-D, and B2PLYP-D3 dominate with aug-cc-pVTZ, affording, together with XYG3/6-311+G(3df,2p), a mean absolute deviation of 0.33 - 0.38 kcal/mol.

  4. SU-E-T-133: Dosimetric Impact of Scan Orientation Relative to Target Motion During Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoker, J; Summers, P; Li, X

    2014-06-01

    Purpose: This study seeks to evaluate the dosimetric effects of intra-fraction motion during spot scanning proton beam therapy as a function of beam-scan orientation and target motion amplitude. Method: Multiple 4DCT scans were collected of a dynamic anthropomorphic phantom mimicking respiration amplitudes of 0 (static), 0.5, 1.0, and 1.5 cm. A spot-scanning treatment plan was developed on the maximum intensity projection image set using an inverse-planning approach. Dynamic phantom motion was continuous throughout treatment plan delivery. The target nodule was designed to accommodate film and thermoluminescent dosimeters (TLDs). Film and TLDs were uniquely labeled by location within the target. The phantom was localized on the treatment table using the clinically available orthogonal kV on-board imaging device. Film inserts provided data for dose uniformity; TLDs provided a 3% precision estimate of absolute dose. An in-house script was developed to modify the delivery order of the beam spots, to orient the scanning direction parallel or perpendicular to target motion. TLD detector characterization and analysis was performed by the Imaging and Radiation Oncology Core group (IROC)-Houston. Film inserts, exhibiting a spatial resolution of 1 mm, were analyzed to determine dose homogeneity within the radiation target. Results: Parallel scanning and target motion exhibited reduced target dose heterogeneity relative to the perpendicular scanning orientation. The average percent deviation in absolute dose for the motion deliveries relative to the static delivery was 4.9±1.1% for parallel scanning and 11.7±3.5% (p<<0.05) for perpendicularly oriented scanning. Individual delivery dose deviations were not necessarily correlated with the amplitude of motion for either scan orientation. Conclusions: Results demonstrate a quantifiable difference in dose heterogeneity as a function of scan orientation, more so than target amplitude. 
Comparison with the analyzed planar dose of a single field hints that multiple-field delivery alters intra-fraction beam-target motion synchronization and may mitigate heterogeneity, though further study is warranted.

  5. Microcirculation and its relation to continuous subcutaneous glucose sensor accuracy in cardiac surgery patients in the intensive care unit.

    PubMed

    Siegelaar, Sarah E; Barwari, Temo; Hermanides, Jeroen; van der Voort, Peter H J; Hoekstra, Joost B L; DeVries, J Hans

    2013-11-01

    Continuous glucose monitoring could be helpful for glucose regulation in critically ill patients; however, its accuracy is uncertain and might be influenced by microcirculation. We investigated the microcirculation and its relation to the accuracy of 2 continuous glucose monitoring devices in patients after cardiac surgery. The present prospective, observational study included 60 patients admitted for cardiac surgery. Two continuous glucose monitoring devices (Guardian Real-Time and FreeStyle Navigator) were placed before surgery. The relative absolute deviation between continuous glucose monitoring and the arterial reference glucose was calculated to assess the accuracy. Microcirculation was measured using the microvascular flow index, perfused vessel density, and proportion of perfused vessels using sublingual sidestream dark-field imaging, and tissue oxygenation using near-infrared spectroscopy. The associations were assessed using a linear mixed-effects model for repeated measures. The median relative absolute deviation of the Navigator was 11% (interquartile range, 8%-16%) and of the Guardian was 14% (interquartile range, 11%-18%; P = .05). Tissue oxygenation significantly increased during the intensive care unit admission (maximum 91.2% [3.9] after 6 hours) and decreased thereafter, stabilizing after 20 hours. A decrease in perfused vessel density accompanied the increase in tissue oxygenation. Microcirculatory variables were not associated with sensor accuracy. A lower peripheral temperature (Navigator, b = -0.008, P = .003; Guardian, b = -0.006, P = .048), and for the Navigator, also a higher Acute Physiology and Chronic Health Evaluation IV predicted mortality (b = 0.017, P < .001) and age (b = 0.002, P = .037) were associated with decreased sensor accuracy. The results of the present study have shown acceptable accuracy for both sensors in patients after cardiac surgery. 
The microcirculation was impaired to a limited extent compared with that in patients with sepsis and in healthy controls. This impairment was not related to sensor accuracy; however, peripheral temperature (for both sensors) and, for the Navigator, patient age and Acute Physiology and Chronic Health Evaluation IV predicted mortality were. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
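    The per-sample relative absolute deviation used as the accuracy measure above is straightforward to compute and summarize with a median and interquartile range; a sketch with invented paired glucose values (not the study's data):

    ```python
    import statistics

    def relative_absolute_deviations(sensor, reference):
        # Percent relative absolute deviation of each sensor reading from
        # its paired arterial reference value
        return [100.0 * abs(s - r) / r for s, r in zip(sensor, reference)]

    # Invented paired sensor / arterial reference glucose values (mmol/L)
    sensor = [6.1, 7.8, 5.2, 9.0, 6.6]
    reference = [6.0, 7.0, 5.5, 8.5, 6.6]
    rads = relative_absolute_deviations(sensor, reference)
    print(statistics.median(rads))
    ```

    Reporting the median with an interquartile range, as the abstract does, keeps the summary robust against the occasional large sensor excursion.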

  6. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. 
At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and geologic processes that occur across all of the terrestrial planets.
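Slope over a baseline, as used in the DTM statistics above, reduces to the arctangent of the elevation change over the horizontal distance; a minimal sketch with hypothetical elevations (not LROC values):

```python
import math

def slope_deg(z1, z2, baseline):
    """Slope in degrees between two DTM elevations (m) separated by a
    horizontal baseline (m): atan(|dz| / baseline)."""
    return math.degrees(math.atan(abs(z2 - z1) / baseline))

# Hypothetical elevations over a 15 m baseline, for illustration only.
s = slope_deg(100.0, 102.4, 15.0)
flat = slope_deg(100.0, 100.0, 15.0)
```

Sliding this over every adjacent pixel pair of a DTM and collecting the values yields the median, mean, and standard-deviation slope statistics quoted for the highland and mare terrains.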

  7. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    PubMed Central

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett’s acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that the simulations used for TI, those used for BAR, or both are not fully converged, and that the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root-mean-square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules, the best values overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
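The two summary error metrics quoted above can be sketched as follows; the predicted and experimental free energies are hypothetical, not the SAMPL5 values:

```python
import math

def rmsd(pred, expt):
    """Root-mean-square deviation between predicted and experimental values."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(pred, expt)) / len(pred))

def aue(pred, expt):
    """Average unsigned (absolute) error between predicted and experimental values."""
    return sum(abs(p - e) for p, e in zip(pred, expt)) / len(pred)

# Hypothetical binding free energies in kcal/mol, for illustration only.
pred = [-5.2, -7.1, -3.8, -6.4]
expt = [-4.0, -6.0, -5.0, -6.0]
```

RMSD weights large outliers more heavily than AUE, which is why challenges such as SAMPL typically report both.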

  8. The 24 GHz measurements of 2.2 lambda conical horn antennas illuminating a conducting sheet

    NASA Technical Reports Server (NTRS)

    Cross, A. E.; Marshall, R. E.; Hearn, C. P.; Neece, R. T.

    1993-01-01

Monostatic reflection-coefficient magnitude (|Γ|) measurements occurring between a radiating horn and a metal reflecting plate are presented for a family of three 2.2 lambda diameter conical horn antennas. The three horns have different aperture phase deviations: 6 deg, 22.5 deg, and 125 deg. Measurements of |Γ| as a function of horn-plate separation (d) extend from an effective antenna aperture short (d = 0) to beyond the far-field boundary (d = 2D(sup 2)/lambda, where D is the antenna diameter). Measurement data are presented with various physical environments for each of the horns. Measured scalar data are compared with theoretical data from two models, a numerical model for a circular waveguide aperture in a ground plane and a scalar diffraction theory model. This work was conducted in support of the development effort for a spaceborne multifrequency microwave reflectometer designed to accurately determine the distance from a space vehicle's surface to a reflecting plasma boundary. The metal reflecting plate was used to simulate the RF reflectivity of a critically dense plasma. The resulting configuration, a ground plane mounted aperture facing a reflecting plane in close proximity, produces a strong interaction between the ground plane and the reflecting plate, especially at integral half-wavelength separations. The transition coefficient is characterized by large amplitude variations.

  9. Accuracy of noncycloplegic refraction performed at school screening camps.

    PubMed

    Khurana, Rolli; Tibrewal, Shailja; Ganesh, Suma; Tarkar, Rajoo; Nguyen, Phuong Thi Thanh; Siddiqui, Zeeshan; Dasgupta, Shantanu

    2018-06-01

The aim of this study was to compare noncycloplegic refraction performed in school camp with that performed in eye clinic in children aged 6-16 years. A prospective study of children with unaided vision <0.2 LogMAR who underwent noncycloplegic retinoscopy (NCR) and subjective refraction (SR) in camp and subsequently in eye clinic between February and March 2017 was performed. A masked optometrist performed refractions in both settings. The agreement between refraction values obtained at both settings was compared using the Bland-Altman analysis. A total of 217 eyes were included in this study. Between the school camp and eye clinic, the mean absolute error ± standard deviation in spherical equivalent (SE) of NCR was 0.33 ± 0.4 D and that of SR was 0.26 ± 0.5 D. The limits of agreement for NCR were +0.91 D to -1.09 D and for SR were +1.15 D to -1.06 D. The mean absolute error in SE was ≤0.5 D in 92.62% of eyes (95% confidence interval 88%-95%). A certain degree of variability exists between noncycloplegic refraction done in school camps and eye clinic. It was found to be accurate within 0.5 D of SE in 92.62% of eyes for refractive errors up to 4.5 D of myopia, 3 D of cylinder, and 1.5 D of hyperopia.
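The Bland-Altman agreement analysis used above amounts to the mean difference (bias) plus or minus 1.96 standard deviations of the paired differences; a minimal sketch with hypothetical refraction pairs (not the study's data):

```python
import statistics

def bland_altman_limits(a, b):
    """Bland-Altman analysis: returns (bias, upper, lower), where bias is the
    mean paired difference and the 95% limits of agreement are
    bias +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias + 1.96 * sd, bias - 1.96 * sd

# Hypothetical spherical-equivalent refractions (dioptres), camp vs clinic.
camp   = [-1.00, -2.25, 0.50, -3.00, -0.75]
clinic = [-1.25, -2.00, 0.25, -3.25, -0.50]

bias, upper, lower = bland_altman_limits(camp, clinic)
```

The limits of agreement quoted in the abstract (e.g. +0.91 D to -1.09 D for NCR) are exactly this `upper`/`lower` pair computed over all 217 eyes.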

  10. Relationship between postoperative refractive outcomes and cataract density: multiple regression analysis.

    PubMed

    Ueda, Tetsuo; Ikeda, Hitoe; Ota, Takeo; Matsuura, Toyoaki; Hara, Yoshiaki

    2010-05-01

    To evaluate the relationship between cataract density and the deviation from the predicted refraction. Department of Ophthalmology, Nara Medical University, Kashihara, Japan. Axial length (AL) was measured in eyes with mainly nuclear cataract using partial coherence interferometry (IOLMaster). The postoperative AL was measured in pseudophakic mode. The AL difference was calculated by subtracting the postoperative AL from the preoperative AL. Cataract density was measured with the pupil dilated using anterior segment Scheimpflug imaging (EAS-1000). The predicted postoperative refraction was calculated using the SRK/T formula. The subjective refraction 3 months postoperatively was also measured. The mean absolute prediction error (MAE) (mean of absolute difference between predicted postoperative refraction and spherical equivalent of postoperative subjective refraction) was calculated. The relationship between the MAE and cataract density, age, preoperative visual acuity, anterior chamber depth, corneal radius of curvature, and AL difference was evaluated using multiple regression analysis. In the 96 eyes evaluated, the MAE was correlated with cataract density (r = 0.37, P = .001) and the AL difference (r = 0.34, P = .003) but not with the other parameters. The AL difference was correlated with cataract density (r = 0.53, P<.0001). The postoperative refractive outcome was affected by cataract density. This should be taken into consideration in eyes with a higher density cataract. (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. An experimental loop design for the detection of constitutional chromosomal aberrations by array CGH

    PubMed Central

    2009-01-01

    Background Comparative genomic hybridization microarrays for the detection of constitutional chromosomal aberrations is the application of microarray technology coming fastest into routine clinical application. Through genotype-phenotype association, it is also an important technique towards the discovery of disease causing genes and genomewide functional annotation in human. When using a two-channel microarray of genomic DNA probes for array CGH, the basic setup consists in hybridizing a patient against a normal reference sample. Two major disadvantages of this setup are (1) the use of half of the resources to measure a (little informative) reference sample and (2) the possibility that deviating signals are caused by benign copy number variation in the "normal" reference instead of a patient aberration. Instead, we apply an experimental loop design that compares three patients in three hybridizations. Results We develop and compare two statistical methods (linear models of log ratios and mixed models of absolute measurements). In an analysis of 27 patients seen at our genetics center, we observed that the linear models of the log ratios are advantageous over the mixed models of the absolute intensities. Conclusion The loop design and the performance of the statistical analysis contribute to the quick adoption of array CGH as a routine diagnostic tool. They lower the detection limit of mosaicisms and improve the assignment of copy number variation for genetic association studies. PMID:19925645

  12. In situ nanoscale observations of gypsum dissolution by digital holographic microscopy.

    PubMed

    Feng, Pan; Brand, Alexander S; Chen, Lei; Bullard, Jeffrey W

    2017-06-01

Recent topography measurements of gypsum dissolution have not reported the absolute dissolution rates, but instead focus on the rates of formation and growth of etch pits. In this study, the in situ absolute retreat rates of gypsum (010) cleavage surfaces at etch pits, at cleavage steps, and at apparently defect-free portions of the surface are measured in flowing water by reflection digital holographic microscopy. Observations made on randomly sampled fields of view on seven different cleavage surfaces reveal a range of local dissolution rates, the local rate being determined by the topographical features at which material is removed. Four characteristic types of topographical activity are observed: 1) smooth regions, free of etch pits or other noticeable defects, where dissolution rates are relatively low; 2) shallow, wide etch pits bounded by faceted walls which grow gradually at rates somewhat greater than in smooth regions; 3) narrow, deep etch pits which form and grow throughout the observation period at rates that exceed those at the shallow etch pits; and 4) relatively few, submicrometer cleavage steps which move in a wave-like manner and yield local dissolution fluxes that are about five times greater than at etch pits. Molar dissolution rates at all topographical features except submicrometer steps can be aggregated into a continuous, mildly bimodal distribution with a mean of 3.0 µmol m⁻² s⁻¹ and a standard deviation of 0.7 µmol m⁻² s⁻¹.

  13. AQuA: An Automated Quantification Algorithm for High-Throughput NMR-Based Metabolomics and Its Application in Human Plasma.

    PubMed

    Röhnisch, Hanna E; Eriksson, Jan; Müllner, Elisabeth; Agback, Peter; Sandström, Corine; Moazzami, Ali A

    2018-02-06

A key limiting step for high-throughput NMR-based metabolomics is the lack of rapid and accurate tools for absolute quantification of many metabolites. We developed, implemented, and evaluated an algorithm, AQuA (Automated Quantification Algorithm), for targeted metabolite quantification from complex ¹H NMR spectra. AQuA operates based on spectral data extracted from a library consisting of one standard calibration spectrum for each metabolite. It uses one preselected NMR signal per metabolite for determining absolute concentrations and does so by effectively accounting for interferences caused by other metabolites. AQuA was implemented and evaluated using experimental NMR spectra from human plasma. The accuracy of AQuA was tested and confirmed in comparison with a manual spectral fitting approach using the ChenomX software, in which 61 out of 67 metabolites quantified in 30 human plasma spectra showed a goodness-of-fit (r²) close to or exceeding 0.9 between the two approaches. In addition, three quality indicators generated by AQuA, namely, occurrence, interference, and positional deviation, were studied. These quality indicators permit evaluation of the results each time the algorithm is operated. The efficiency was tested and confirmed by implementing AQuA for quantification of 67 metabolites in a large data set comprising 1342 experimental spectra from human plasma, in which the whole computation took less than 1 s.

  14. A Simulation-Based Study on the Comparison of Statistical and Time Series Forecasting Methods for Early Detection of Infectious Disease Outbreaks.

    PubMed

    Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho

    2018-05-11

Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based on not only 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also real-world daily and weekly data related to diarrhea infection. The algorithms were evaluated using different metrics, namely, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison results showed better performance for the EARS C3 method with respect to the other algorithms regardless of the characteristics of the underlying time series data, Holt-Winters showed better performance when the baseline frequency and the dispersion parameter were less than 1.5 and 2, respectively.
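The three forecast-error metrics named above (sMAPE, RMSE, MAD) can be sketched as follows, using a common two-sided sMAPE form (the exact variant used in the study is an assumption) and hypothetical case counts, not the KCDC data:

```python
import math

def smape(actual, forecast):
    """Symmetric mean absolute percent error, two-sided form:
    mean of |f - a| / ((|a| + |f|) / 2), in percent."""
    return 100.0 / len(actual) * sum(
        abs(f - a) / ((abs(a) + abs(f)) / 2) for a, f in zip(actual, forecast))

def rmse(actual, forecast):
    """Root-mean-square error of the forecasts."""
    return math.sqrt(sum((f - a) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mad(actual, forecast):
    """Mean absolute deviation of the forecast errors."""
    return sum(abs(f - a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical daily case counts vs one-step forecasts, for illustration only.
actual = [10, 12, 8, 15]
forecast = [11, 10, 9, 14]
```

sMAPE is scale-free (useful across diseases with different baseline incidence), whereas RMSE and MAD are in case-count units, which is why surveillance comparisons report all three.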

  15. Mapping health outcome measures from a stroke registry to EQ-5D weights.

    PubMed

    Ghatnekar, Ola; Eriksson, Marie; Glader, Eva-Lotta

    2013-03-07

    To map health outcome related variables from a national register, not part of any validated instrument, with EQ-5D weights among stroke patients. We used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n=272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for "dressing", "toileting", "mobility", "mood", "general health" and "proxy-responders" were applied to the validation set (n=272), and the performance was analysed with mean absolute error (MAE) and mean square error (MSE). The number of statistically significant coefficients varied by model, but all models generated consistent coefficients in terms of sign. Mean utility was underestimated in all models (least in OLS) and with lower variation (least in OLS) compared to the observed. The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥ 0.5 (P<0.01). This study indicates that it is possible to map non-validated health outcome measures from a stroke register into preference-based utilities to study the development of stroke care over time, and to compare with other conditions in terms of utility.

  16. Multi-parameter Nonlinear Gain Correction of X-ray Transition Edge Sensors for the X-ray Integral Field Unit

    NASA Astrophysics Data System (ADS)

    Cucchetti, E.; Eckart, M. E.; Peille, P.; Porter, F. S.; Pajot, F.; Pointecouteau, E.

    2018-04-01

    With its array of 3840 Transition Edge Sensors (TESs), the Athena X-ray Integral Field Unit (X-IFU) will provide spatially resolved high-resolution spectroscopy (2.5 eV up to 7 keV) from 0.2 to 12 keV, with an absolute energy scale accuracy of 0.4 eV. Slight changes in the TES operating environment can cause significant variations in its energy response function, which may result in systematic errors in the absolute energy scale. We plan to monitor such changes at pixel level via onboard X-ray calibration sources and correct the energy scale accordingly using a linear or quadratic interpolation of gain curves obtained during ground calibration. However, this may not be sufficient to meet the 0.4 eV accuracy required for the X-IFU. In this contribution, we introduce a new two-parameter gain correction technique, based on both the pulse-height estimate of a fiducial line and the baseline value of the pixels. Using gain functions that simulate ground calibration data, we show that this technique can accurately correct deviations in detector gain due to changes in TES operating conditions such as heat sink temperature, bias voltage, thermal radiation loading and linear amplifier gain. We also address potential optimisations of the onboard calibration source and compare the performance of this new technique with those previously used.

  17. In Situ Analyses of Methane Oxidation Associated with the Roots and Rhizomes of a Bur Reed, Sparganium Eurycarpum, in a Maine Wetland

    NASA Technical Reports Server (NTRS)

    King, Gary M.

    1996-01-01

Methane oxidation associated with the belowground tissues of a common aquatic macrophyte, the bur reed Sparganium eurycarpum, was assayed in situ by a chamber technique with acetylene or methyl fluoride as a methanotrophic inhibitor at a headspace concentration of 3 to 4%. Acetylene and methyl fluoride inhibited both methane oxidation and peat methanogenesis. However, inhibition of methanogenesis resulted in no obvious short-term effect on methane fluxes. Since neither inhibitor adversely affected plant metabolism and both inhibited methanotrophy equally well, acetylene was employed for routine assays because of its low cost and ease of use. Root-associated methanotrophy consumed a variable but significant fraction of the total potential methane flux; values varied between 1 and 58% (mean ± standard deviation, 27.0% ± 6.0%), with no consistent temporal or spatial pattern during late summer. The absolute amount of methane oxidized was not correlated with the total potential methane flux; this suggested that parameters other than methane availability (e.g., oxygen availability) controlled the rates of methane oxidation. Estimates of diffusive methane flux and oxidation at the peat surface indicated that methane emission occurred primarily through aboveground plant tissues; the absolute magnitude of methane oxidation was also greater in association with roots than at the peat surface. However, the relative extent of oxidation was greater at the latter locus.

  18. Ultrasound virtual endoscopy: Polyp detection and reliability of measurement in an in vitro study with pig intestine specimens

    PubMed Central

    Liu, Jin-Ya; Chen, Li-Da; Cai, Hua-Song; Liang, Jin-Yu; Xu, Ming; Huang, Yang; Li, Wei; Feng, Shi-Ting; Xie, Xiao-Yan; Lu, Ming-De; Wang, Wei

    2016-01-01

AIM: To present our initial experience regarding the feasibility of ultrasound virtual endoscopy (USVE) and its measurement reliability for polyp detection in an in vitro study using pig intestine specimens. METHODS: Six porcine intestine specimens containing 30 synthetic polyps underwent USVE, computed tomography colonography (CTC) and optical colonoscopy (OC) for polyp detection. The polyp measurement, defined as the maximum polyp diameter on two-dimensional (2D) multiplanar reformatted (MPR) planes, was obtained by USVE, and the absolute measurement error was analyzed using the direct measurement as the reference standard. RESULTS: USVE detected 29 (96.7%) of 30 polyps, with one 7-mm polyp missed. There was one false-positive finding. Twenty-six (89.7%) of 29 reconstructed images were clearly depicted, while 29 (96.7%) of 30 polyps were displayed on CTC with one false-negative finding. In OC, all the polyps were detected. The intraclass correlation coefficient was 0.876 (95%CI: 0.745-0.940) for measurements obtained with USVE. The pooled absolute measurement errors ± standard deviations of the depicted polyps with actual sizes ≤ 5 mm, 6-9 mm, and ≥ 10 mm were 1.9 ± 0.8 mm, 0.9 ± 1.2 mm, and 1.0 ± 1.4 mm, respectively. CONCLUSION: USVE is reliable for polyp detection and measurement in an in vitro study. PMID:27022217

  19. An experimental loop design for the detection of constitutional chromosomal aberrations by array CGH.

    PubMed

    Allemeersch, Joke; Van Vooren, Steven; Hannes, Femke; De Moor, Bart; Vermeesch, Joris Robert; Moreau, Yves

    2009-11-19

    Comparative genomic hybridization microarrays for the detection of constitutional chromosomal aberrations is the application of microarray technology coming fastest into routine clinical application. Through genotype-phenotype association, it is also an important technique towards the discovery of disease causing genes and genomewide functional annotation in human. When using a two-channel microarray of genomic DNA probes for array CGH, the basic setup consists in hybridizing a patient against a normal reference sample. Two major disadvantages of this setup are (1) the use of half of the resources to measure a (little informative) reference sample and (2) the possibility that deviating signals are caused by benign copy number variation in the "normal" reference instead of a patient aberration. Instead, we apply an experimental loop design that compares three patients in three hybridizations. We develop and compare two statistical methods (linear models of log ratios and mixed models of absolute measurements). In an analysis of 27 patients seen at our genetics center, we observed that the linear models of the log ratios are advantageous over the mixed models of the absolute intensities. The loop design and the performance of the statistical analysis contribute to the quick adoption of array CGH as a routine diagnostic tool. They lower the detection limit of mosaicisms and improve the assignment of copy number variation for genetic association studies.

  20. A change in paradigm: lowering blood pressure in everyone over a certain age.

    PubMed

    Law, Malcolm

    2012-06-01

    Dividing people into 'hypertensives' and 'normotensives' is commonplace but problematic. The relationship between blood pressure and cardiovascular disease is continuous. The Prospective Studies Collaboration analysis shows a continuous straight line dose-response relationship across the entire population down to blood pressure levels of 115 mmHg systolic and 75 mmHg diastolic, the confidence limits on the individual data points being sufficiently narrow to exclude even a minor deviation from a linear relationship. Meta-analysis of randomized controlled trials shows that blood pressure-lowering drugs produce similar proportional reductions in risk of coronary heart disease (CHD) and stroke irrespective of pre-treatment blood pressure, down to levels of 110 mmHg systolic and 70 mmHg diastolic. There are also now sufficient trial data to show a statistically significant risk reduction in 'normotensive' people without known vascular disease on entry. The straight line (log-linear) relationship means that the benefit derived from lowering blood pressure is proportional to existing risk, so the decision on whom to treat with blood pressure-lowering drugs should depend on a person's overall absolute risk irrespective of blood pressure. In primary prevention, basing treatment on age alone rather than overall absolute risk entails little loss of efficacy and may be preferred on the basis of simplicity and avoidance of anxiety in telling people they are at elevated risk.

  1. The sociogeometry of inequality: Part II

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2015-05-01

    The study of socioeconomic inequality is of prime economic and social importance, and the key quantitative gauges of socioeconomic inequality are Lorenz curves and inequality indices - the most notable of the latter being the popular Gini index. In this series of papers we present a sociogeometric framework to the study of socioeconomic inequality. In this part we focus on the gap between the rich and the poor, which is quantified by gauges termed disparity curves. We shift from disparity curves to disparity sets, define inequality indices in terms of disparity sets, and introduce and explore a collection of distance-based and width-based inequality indices stemming from the geometry of disparity sets. We conclude with mean-absolute-deviation (MAD) representations of the inequality indices established in this series of papers, and with a comparison of these indices to the popular Gini index.
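One standard example of such a representation is the Gini index written via the mean absolute difference between all pairs of incomes; whether this matches the paper's exact MAD formulation is an assumption, and the income vectors below are purely illustrative:

```python
def gini(values):
    """Gini index via its mean-absolute-difference representation:
    G = (mean of |x_i - x_j| over all ordered pairs) / (2 * mean)."""
    n = len(values)
    mean = sum(values) / n
    mad_pairs = sum(abs(x - y) for x in values for y in values) / (n * n)
    return mad_pairs / (2 * mean)

# Illustrative income vectors: perfect equality vs a single rich member.
equal  = [1, 1, 1, 1]
skewed = [0, 0, 0, 10]
```

With this (population-style) estimator the maximum attainable value for n members is (n - 1) / n, so the skewed vector of four incomes reaches 0.75 rather than 1.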

  2. A Multiparameter Thermal Conductivity Equation for 1,1-Difluoroethane (R152a) with an Optimized Functional Form

    NASA Astrophysics Data System (ADS)

    Scalabrin, G.; Marchi, P.; Finezzo, F.

    2006-11-01

The application of an optimization technique to the available experimental data has led to the development of a new multiparameter equation λ = λ(T,ρ) for the representation of the thermal conductivity of 1,1-difluoroethane (R152a). The region of validity of the proposed equation covers the temperature range from 220 to 460 K and pressures up to 55 MPa, including the near-critical region. The average absolute deviation of the equation with respect to the selected 939 primary data points is 1.32%. The proposed equation therefore represents a significant improvement with respect to the conventional equation in the literature. The density value required by the equation is calculated at the chosen temperature and pressure conditions using a high-accuracy equation of state for the fluid.
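The average absolute deviation (AAD) of a correlation from primary data, as quoted above and in the other equation-of-state records in this list, can be sketched as follows; the conductivity values are hypothetical, not the R152a data set:

```python
def aad_percent(model, data):
    """Average absolute deviation, in percent, of model predictions from
    experimental data points: mean of 100 * |model - data| / |data|."""
    return sum(abs(m - d) / abs(d) * 100.0 for m, d in zip(model, data)) / len(data)

# Hypothetical thermal-conductivity values (W m^-1 K^-1), for illustration only.
data  = [0.100, 0.095, 0.110, 0.120]
model = [0.101, 0.094, 0.112, 0.118]
```

Because each term is normalized by the experimental value, AAD is comparable across properties of very different magnitude (vapour pressure, density, conductivity), which is how the records above use it.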

  3. Review of the BACKONE equation of state and its applications

    NASA Astrophysics Data System (ADS)

    Lai, Ngoc Anh; Phan, Thi Thu Huong

    2017-06-01

This paper presents a review of the BACKONE equation of state (EOS) and its various applications in the study of pure fluids and mixtures as refrigerants, working fluids and natural gases, and in the study of heat pumps, refrigeration cycles, organic Rankine cycles, trilateral cycles and power flash cycles. It also presents an accurate parameterisation of the BACKONE EOS for the low global warming potential working fluid 3,3,3-trifluoropropene (HFO-1243zf). The average absolute deviations (AAD) of experimental vapour pressure and saturated liquid density data from those of the BACKONE EOS are 0.12% and 0.08%, respectively. The BACKONE EOS for HFO-1243zf also predicts thermodynamic data accurately. The AAD between the BACKONE predicted values and experimental data are 0.20% for sub-cooled liquid density and 0.56% for gaseous pressure.

  4. Scientific results from the Cosmic Background Explorer (COBE)

    PubMed Central

    Bennett, C. L.; Boggess, N. W.; Cheng, E. S.; Hauser, M. G.; Kelsall, T.; Mather, J. C.; Moseley, S. H.; Murdock, T. L.; Shafer, R. A.; Silverberg, R. F.; Smoot, G. F.; Weiss, R.; Wright, E. L.

    1993-01-01

    The National Aeronautics and Space Administration (NASA) has flown the COBE satellite to observe the Big Bang and the subsequent formation of galaxies and large-scale structure. Data from the Far-Infrared Absolute Spectrophotometer (FIRAS) show that the spectrum of the cosmic microwave background is that of a black body of temperature T = 2.73 ± 0.06 K, with no deviation from a black-body spectrum greater than 0.25% of the peak brightness. The data from the Differential Microwave Radiometers (DMR) show statistically significant cosmic microwave background anisotropy, consistent with a scale-invariant primordial density fluctuation spectrum. Measurements from the Diffuse Infrared Background Experiment (DIRBE) provide new conservative upper limits to the cosmic infrared background. Extensive modeling of solar system and galactic infrared foregrounds is required for further improvement in the cosmic infrared background limits. PMID:11607383

  5. Photodisintegration cross section of the reaction 4He(γ,n)3He at the giant dipole resonance peak

    NASA Astrophysics Data System (ADS)

    Tornow, W.; Kelley, J. H.; Raut, R.; Rusev, G.; Tonchev, A. P.; Ahmed, M. W.; Crowell, A. S.; Stave, S. C.

    2012-06-01

The photodisintegration cross section of 4He into a neutron and helion was measured at incident photon energies of 27.0, 27.5, and 28.0 MeV. A high-pressure 4He-Xe gas scintillator served as target and detector while a pure Xe gas scintillator was used for background measurements. A NaI detector in combination with the standard HIγS scintillator paddle system was employed for absolute photon-flux determination. Our data are in good agreement with the theoretical prediction of the Trento group and the recent data of Nilsson [Phys. Rev. C 75, 014007 (2007)] but deviate considerably from the high-precision data of Shima [Phys. Rev. C 72, 044004 (2005)].

  6. Corresponding state-based correlations for the temperature-dependent surface tension of saturated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Tian, Jianxiang; Zhang, Cuihua; Zhang, Laibin; Zheng, Mengmeng; Liu, Shuzhen

    2017-10-01

Based on recent progress on the corresponding state-based correlations for the temperature-dependent surface tension of saturated fluids [I. Cachadiña, A. Mulero and J. X. Tian, Fluid Phase Equilibr. 442 (2017) 68; J. X. Tian, M. M. Zheng, H. L. Yi, L. B. Zhang and S. Z. Liu, Mod. Phys. Lett. B 31 (2017) 1750110], we propose a new correlation for saturated hydrocarbons. This correlation includes three fluid-independent parameters and requires the critical temperature, the triple-point temperature and the surface tension at the triple-point temperature as inputs for each hydrocarbon. Results show that this correlation can reproduce NIST data with absolute average deviation (AAD) less than 1% for 10 out of 19 hydrocarbons and AAD less than 5% for 17 out of 19 hydrocarbons, clearly better than other correlations.

  7. The integration of FPGA TDC inside White Rabbit node

    NASA Astrophysics Data System (ADS)

    Li, H.; Xue, T.; Gong, G.; Li, J.

    2017-04-01

White Rabbit technology is capable of delivering sub-nanosecond accuracy and picosecond-precision synchronization, together with normal data packets, over a fiber network. The carry-chain structure in an FPGA is a popular way to build a TDC, and RMS resolutions of tens of picoseconds have been achieved. Integrating WR technology with an FPGA TDC can enhance and simplify the TDC in many respects: it provides a low-jitter clock for the TDC, a synchronized absolute UTC/TAI timestamp for the coarse counter, a convenient way to calibrate the carry-chain DNL, and an easy-to-use Ethernet link for data and control transmission. This paper presents an FPGA TDC implemented inside a normal White Rabbit node with sub-nanosecond measurement precision. The measured standard deviation between two distributed TDCs reaches 50 ps. Possible applications of this distributed TDC are also discussed.

  8. Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli

    PubMed Central

    Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy

    2016-01-01

We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data were analyzed in terms of ratios. Precision was strongly related to fluorescence strength, ranging from a 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoters, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966
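The "x-fold standard deviation" used above to express the precision of expression ratios is most naturally read as a geometric standard deviation of the measured ratios; the sketch below makes that reading explicit (the interpretation is an assumption, not stated in the abstract, and the sample ratios are invented):

```python
import math

def fold_std(ratios):
    """Geometric ("x-fold") standard deviation of positive ratios:
    exp of the population standard deviation of the log-ratios.
    A value of 1.0 means no spread; 1.54 means a 1.54-fold spread."""
    logs = [math.log(r) for r in ratios]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / len(logs)
    return math.exp(math.sqrt(var))

# Invented fluorescence ratios from three hypothetical labs.
print(round(fold_std([1.0, 2.0, 4.0]), 3))
```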

  9. An FEM-based AI approach to model parameter identification for low vibration modes of wind turbine composite rotor blades

    NASA Astrophysics Data System (ADS)

    Navadeh, N.; Goroshko, I. O.; Zhuk, Y. A.; Fallah, A. S.

    2017-11-01

An approach to the construction of a beam-type simplified model of a horizontal-axis wind turbine composite blade based on the finite element method is proposed. The model allows an effective and accurate description of the low-vibration bending modes, taking into account the coupling between flapwise and lead-lag vibration modes that arises from the non-uniform distribution of twist angle along the blade's length. The identification of model parameters is carried out on the basis of modal data obtained from more detailed finite element simulations and subsequent application of the 'DIRECT' optimisation algorithm. Stable identification results were obtained by using absolute deviations in frequencies and modal displacements in the objective function, together with additional a priori information (boundedness and monotonicity) on the solution properties.

  10. The research of a solution on locating optimally a station for seismic disasters rescue in a city

    NASA Astrophysics Data System (ADS)

    Yao, Qing-Lin

    1995-02-01

When stations for seismic disaster rescue (or similar facilities) are to be sited on a communication network, the general absolute center of a graph must be found in order to reduce the required number of stations and their operating parameters, and to establish a station whose location is optimal with respect to the distribution of rescue arrival times. The existing solution to this problem was proposed by Edward (1978), but it contains a serious error. In this article, the work of Edward (1978) is developed further in both formulae and figures, and a more correct solution is proposed and proved. The result from the new solution is then contrasted with that from the old one in an example concerning the optimal location of a station for seismic disaster rescue.

  11. Base Flow and Heat Transfer Characteristics of a Four-Nozzle Clustered Rocket Engine: Effect of Nozzle Pressure Ratio

    NASA Technical Reports Server (NTRS)

    Nallasamy, R.; Kandula, M.; Duncil, L.; Schallhorn, P.

    2010-01-01

The base pressure and heating characteristics of a four-nozzle clustered rocket configuration are studied numerically with the aid of the OVERFLOW Navier-Stokes code. A pressure ratio (chamber pressure to freestream static pressure) range of 990 to 5,920 and a freestream Mach number range of 2.5 to 3.5 are studied. The qualitative trends of decreasing base pressure with increasing pressure ratio and increasing base heat flux with increasing pressure ratio are correctly predicted. However, the predictions for base pressure and base heat flux show deviations from the wind tunnel data. The differences in absolute values between the computation and the data are attributed to factors such as the perfect-gas (thermally and calorically perfect) assumption, turbulence-model inaccuracies in the simulation, and the lack of grid adaptation.

  12. Machine-Specific Magnetic Resonance Imaging Quality Control Procedures for Stereotactic Radiosurgery Treatment Planning

    PubMed Central

    Taghizadeh, Somayeh; Yang, Claus Chunli; R. Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan

    2017-01-01

Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions in the X-direction (maximum = 3.5 mm, mean = 0.91 mm, standard deviation = 0.67 mm, 2% of points > 2.5 mm), in the Y-direction (maximum = 2.51 mm, mean = 0.52 mm, standard deviation = 0.39 mm, 0% > 2.5 mm) and in the Z-direction (maximum = 13.1 mm, mean = 2.38 mm, standard deviation = 2.45 mm, 34% > 2.5 mm), with < 1 mm distortion over a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes.
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions. PMID:29487771
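The per-direction figures reported in this entry (maximum, mean, standard deviation, and the percentage of points beyond 2.5 mm) are simple summary statistics of the measured distortions; here is a sketch of how they might be computed from raw per-point deviations (the sample values are invented, not the paper's data):

```python
import math

def distortion_summary(deviations_mm, threshold_mm=2.5):
    """Summarize geometric-distortion measurements the way the QC
    abstract reports them: maximum, mean, population standard
    deviation, and the percentage of points exceeding a threshold."""
    n = len(deviations_mm)
    mean = sum(deviations_mm) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in deviations_mm) / n)
    pct_over = 100.0 * sum(d > threshold_mm for d in deviations_mm) / n
    return {"max": max(deviations_mm), "mean": mean,
            "std": std, "pct_over": pct_over}

# Illustrative (made-up) per-point distortions in mm.
sample = [0.2, 0.5, 0.9, 1.4, 3.1]
s = distortion_summary(sample)
print(round(s["mean"], 2), s["pct_over"])
```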

  13. Machine-Specific Magnetic Resonance Imaging Quality Control Procedures for Stereotactic Radiosurgery Treatment Planning.

    PubMed

    Fatemi, Ali; Taghizadeh, Somayeh; Yang, Claus Chunli; R Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan

    2017-12-18

Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions in the X-direction (maximum = 3.5 mm, mean = 0.91 mm, standard deviation = 0.67 mm, 2% of points > 2.5 mm), in the Y-direction (maximum = 2.51 mm, mean = 0.52 mm, standard deviation = 0.39 mm, 0% > 2.5 mm) and in the Z-direction (maximum = 13.1 mm, mean = 2.38 mm, standard deviation = 2.45 mm, 34% > 2.5 mm), with < 1 mm distortion over a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes.
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions.

  14. On-line high-performance liquid chromatography-ultraviolet-nuclear magnetic resonance method of the markers of nerve agents for verification of the Chemical Weapons Convention.

    PubMed

    Mazumder, Avik; Gupta, Hemendra K; Garg, Prabhat; Jain, Rajeev; Dubey, Devendra K

    2009-07-03

This paper details an on-flow liquid chromatography-ultraviolet-nuclear magnetic resonance (LC-UV-NMR) method for the retrospective detection and identification of alkyl alkylphosphonic acids (AAPAs) and alkylphosphonic acids (APAs), the markers of the toxic nerve agents, for verification of the Chemical Weapons Convention (CWC). Initially, the LC-UV-NMR parameters were optimized for benzyl derivatives of the APAs and AAPAs. The optimized parameters include a C(18) stationary phase, a methanol:water 78:22 (v/v) mobile phase, UV detection at 268 nm and (1)H NMR acquisition conditions. The protocol described herein allowed the detection of the analytes through acquisition of high-quality NMR spectra from aqueous solutions of the APAs and AAPAs containing high concentrations of interfering background chemicals, which were removed by the preceding sample preparation. Quantification was performed with the UV detector, which showed relative standard deviations (RSDs) within ±1.1%, while the lower limit of detection was up to 16 μg (absolute) for the NMR detector. Finally, the developed LC-UV-NMR method was applied to identify the APAs and AAPAs in real water samples, following solid-phase extraction and derivatization. The method is fast (total experiment time approximately 2 h), sensitive, rugged and efficient.

  15. Statistical optimization of the phytoremediation of arsenic by Ludwigia octovalvis- in a pilot reed bed using response surface methodology (RSM) versus an artificial neural network (ANN).

    PubMed

    Titah, Harmin Sulistiyaning; Halmi, Mohd Izuan Effendi Bin; Abdullah, Siti Rozaimah Sheikh; Hasan, Hassimi Abu; Idris, Mushrifah; Anuar, Nurina

    2018-06-07

In this study, the removal of arsenic (As) by the plant Ludwigia octovalvis in a pilot reed bed was optimized. A Box-Behnken design was employed, including a comparative analysis of both Response Surface Methodology (RSM) and an Artificial Neural Network (ANN) for the prediction of maximum arsenic removal. The predicted optimum condition using the desirability function of both models was 39 mg kg-1 for the arsenic concentration in soil, an elapsed time of 42 days (the sampling day) and an aeration rate of 0.22 L/min, with the predicted values of arsenic removal by RSM and ANN being 72.6% and 71.4%, respectively. The validation of the predicted optimum point showed an actual arsenic removal of 70.6%, with the deviation between the validation value and the predicted values being within 3.49% (RSM) and 1.87% (ANN). The performance evaluation of the RSM and ANN models showed that ANN performs better than RSM, with a higher R2 (0.97), close to 1.0, and very small Average Absolute Deviation (AAD) (0.02) and Root Mean Square Error (RMSE) (0.004) values, close to zero. Both models were appropriate for the optimization of arsenic removal, with ANN demonstrating significantly higher predictive and fitting ability than RSM.
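The R2 and RMSE criteria used to compare the two models follow their standard definitions; here is a compact sketch with hypothetical arsenic-removal percentages (not the study's data):

```python
def rmse(pred, obs):
    """Root mean square error between model predictions and observations."""
    n = len(obs)
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / n) ** 0.5

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical arsenic-removal percentages, for illustration only.
observed = [60.0, 65.0, 70.0, 72.0]
predicted = [61.0, 64.0, 71.0, 71.0]
print(round(rmse(predicted, observed), 3), round(r_squared(predicted, observed), 3))
```

An R2 near 1 and an RMSE near 0, as reported for the ANN, indicate a close fit on this scale.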

  16. Judging in Rhythmic Gymnastics at Different Levels of Performance.

    PubMed

    Leandro, Catarina; Ávila-Carvalho, Lurdes; Sierra-Palmeiro, Elena; Bobo-Arce, Marta

    2017-12-01

This study aimed to analyse the quality of difficulty judging in rhythmic gymnastics at different levels of performance. The sample consisted of 1152 difficulty scores concerning 288 individual routines performed at the 2013 World Championships. The data were analysed using the mean absolute judge deviation from the final difficulty score, Cronbach's alpha coefficient and intra-class correlations for consistency and reliability assessment. For validity assessment, mean deviations of judges' difficulty scores, Kendall's coefficient of concordance W and ANOVA eta-squared values were calculated. Overall, the results in terms of consistency (Cronbach's alpha mostly above 0.90) and reliability (intra-class correlations for single and average measures above 0.70 and 0.90, respectively) were satisfactory in the first and third parts of the ranking on all apparatus. The medium-level gymnasts, those in the second part of the ranking, had inferior reliability indices and the highest score dispersion. In this part, the minimum corrected item-total correlation of individual judges was 0.55, with most values well below, and the matrix of between-judge correlations identified markedly inferior correlations. These findings suggest that the quality of difficulty judging in rhythmic gymnastics may be compromised at certain levels of performance. In the future, special attention should be paid to the judging analysis of medium-level gymnasts, as well as the applicability of the Code of Points at this level.
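Two of the reliability measures central to this analysis, the mean absolute judge deviation and Cronbach's alpha, can be sketched as follows; the score matrix is hypothetical, and the alpha formula is the standard item-variance form (intra-class correlations are omitted for brevity):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a routines-by-judges score matrix
    (rows = routines, columns = judges), using population variances:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])  # number of judges

    def pvar(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([row[j] for row in scores]) for j in range(k)]
    total_var = pvar([sum(row) for row in scores])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

def mean_abs_judge_deviation(judge_scores, final_score):
    """Mean absolute deviation of individual judges' scores from the
    final (panel) difficulty score."""
    return sum(abs(s - final_score) for s in judge_scores) / len(judge_scores)

# Hypothetical difficulty scores: 4 routines rated by 2 judges.
scores = [[8.0, 8.2], [7.0, 7.1], [9.0, 8.9], [6.0, 6.2]]
print(round(cronbach_alpha(scores), 3))
```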

  17. The influence of chemical composition of LNG on the supercritical heat transfer in an intermediate fluid vaporizer

    NASA Astrophysics Data System (ADS)

    Xu, Shuangqing; Chen, Xuedong; Fan, Zhichao; Chen, Yongdong; Nie, Defu; Wu, Qiaoguo

    2018-04-01

A three-dimensional transient computational fluid dynamics (CFD) model has been established for simulations of the supercritical heat transfer of a real liquefied natural gas (LNG) mixture in a single tube and a tube bundle of an intermediate fluid vaporizer (IFV). The influence of the chemical composition of LNG on the thermal performance has been analyzed. The results have also been compared with those obtained from one-dimensional steady-state calculations using the distributed parameter model (DPM). It is found that the current DPM approach gives reasonable prediction accuracy for the thermal performance in the tube bundle but unsatisfactory accuracy for a single tube, as compared with the corresponding CFD data. Benchmarked against pure methane, the vaporization of an LNG containing about 90% (mole fraction) methane would lead to an absolute deviation of 5.5 K in the outlet NG temperature and a maximum relative deviation of 11.4% in the tube-side heat transfer coefficient (HTC) in a bundle of about 816 U tubes at an inlet pressure of 12 MPa and a mass flux of 200 kg·m-2·s-1. It is concluded that the influence of LNG composition on the thermal performance should be taken into consideration in order to obtain an economic and reliable design of an IFV.

  18. Judging in Rhythmic Gymnastics at Different Levels of Performance

    PubMed Central

    Ávila-Carvalho, Lurdes; Sierra-Palmeiro, Elena; Bobo-Arce, Marta

    2017-01-01

Abstract This study aimed to analyse the quality of difficulty judging in rhythmic gymnastics at different levels of performance. The sample consisted of 1152 difficulty scores concerning 288 individual routines performed at the 2013 World Championships. The data were analysed using the mean absolute judge deviation from the final difficulty score, Cronbach’s alpha coefficient and intra-class correlations for consistency and reliability assessment. For validity assessment, mean deviations of judges’ difficulty scores, Kendall’s coefficient of concordance W and ANOVA eta-squared values were calculated. Overall, the results in terms of consistency (Cronbach’s alpha mostly above 0.90) and reliability (intra-class correlations for single and average measures above 0.70 and 0.90, respectively) were satisfactory in the first and third parts of the ranking on all apparatus. The medium-level gymnasts, those in the second part of the ranking, had inferior reliability indices and the highest score dispersion. In this part, the minimum corrected item-total correlation of individual judges was 0.55, with most values well below, and the matrix of between-judge correlations identified markedly inferior correlations. These findings suggest that the quality of difficulty judging in rhythmic gymnastics may be compromised at certain levels of performance. In the future, special attention should be paid to the judging analysis of medium-level gymnasts, as well as the applicability of the Code of Points at this level. PMID:29339996

  19. Role of gravity-based information on the orientation and localization of the perceived body midline.

    PubMed

    Ceyte, Hadrien; Cian, Corinne; Nougier, Vincent; Olivier, Isabelle; Trousselard, Marion

    2007-01-01

The present study focused on the influence of gravity-based information on the orientation and localization of the perceived body midline. Orientation was investigated via the roll adjustment of a rod to the subjects' Z-axis, and localization via the horizontal adjustment of a visual dot perceived as straight ahead. Experiment 1 investigated the effect of dissociating the Z-axis from the direction of gravity by placing subjects in roll-tilted and supine postures. In roll tilt, the perceived orientation of the body midline deviated in the direction of body tilt, while the perceived localization deviated in the opposite direction. In the supine orientation, estimates of the Z-axis and straight-ahead remained veridical, as when the body was upright. Experiment 2 highlighted the relative importance of otolithic and tactile information using diffuse pressure stimulation. The estimation of body midline orientation was modified, contrary to the estimation of its localization. Thus, subjects had no absolute representation of their egocentric space. The dissociation between the orientation and localization of the body midline may be related to a difference in the integration of sensory information. It can be suggested that the horizontal component of the vestibulo-ocular reflex (VOR) contributed to the perceived localization of the body midline, whereas its orientation was mainly influenced by tactile information.

  20. Results of clinical olfactometric studies.

    PubMed

    Kittel, G

    1976-09-01

A modification of a flow olfactometer with a new application apparatus, in which "quasi-free" nasal respiration allows the elimination of adaptation without a special testing room, is described, together with results obtained using this device on olfactory thresholds before and after septum operations, as well as threshold increases in 57 post-operative cases of cheilognathopalatoschisis. An esthesioneuroblastoma, as well as the deformity syndrome with cheilognathopalatoschisis and encephalodystrophy, are used as examples of combined olfactory transmission and perception disorders. Studies of 55 smokers with primary neurosensory disorders demonstrated a threefold increase in the olfactory threshold and an up to 50% decrease in "fatigue time". A mean acetone deviation factor of 1.93 was seen in 100 students from 20-27 years of age before and after eating. Correspondingly, after a substantial breakfast and lunch, the olfactory threshold attained its maximum daily value within 90 minutes, much more pronounced than after intake of 80 grams of glucose solution. In contrast to the literature, the olfactory threshold was seen to increase continuously with age. Studies of the perception and recognition thresholds in 100 normal individuals and 28 patients with hyposmia exhibited, at 3 sigma, a significant difference. In patients with hyposmia, the absolute values for the two threshold types vary greatly, but not their deviation factors. More importance should be attached to the sense of smell, as the so-called lesser senses give us the greatest pleasures.

  1. Error analysis regarding the calculation of nonlinear force-free field

    NASA Astrophysics Data System (ADS)

    Liu, S.; Zhang, H. Q.; Su, J. T.

    2012-02-01

Magnetic field extrapolation is an alternative method for studying chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou in Astrophys. J. 352:343, 1990) have been used to study the errors of nonlinear force-free (NLFF) fields based on the force-free factor α. Three NLFF fields are extrapolated by the approximate vertical integration (AVI; Song et al. in Astrophys. J. 649:1084, 2006), boundary integral equation (BIE; Yan and Sakurai in Sol. Phys. 195:89, 2000) and optimization (Opt.; Wiegelmann in Sol. Phys. 219:87, 2004) methods. Compared with the first semi-analytical field, it is found that the mean values of the absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for the AVI, BIE and Opt. fields, respectively, while for the second semi-analytical field they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55, respectively. For the analytical field itself, the calculation error of <|RSD|> is about 0.1-0.2. It is also found that RSD does not depend appreciably on the length of the field line. These results provide a basic estimate of the deviation of the extrapolated fields obtained by the proposed methods from the real force-free field.
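The RSD of α along a field line, used above as the error measure, has a simple form if read as the population standard deviation over the absolute mean (that reading is an assumption; the sample α values are invented):

```python
def relative_standard_deviation(values):
    """Absolute relative standard deviation: population standard
    deviation divided by the absolute mean of the samples."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return std / abs(mean)

# Hypothetical force-free factor alpha sampled along one field line.
alpha_samples = [0.8, 1.0, 1.2]
print(round(relative_standard_deviation(alpha_samples), 4))
```

An RSD near 0 would indicate that α is nearly constant along the line, i.e. the extrapolated field is close to force-free.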

  2. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)].

    PubMed

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-07

In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  3. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  4. Validation of the CrIS fast physical NH3 retrieval with ground-based FTIR

    NASA Astrophysics Data System (ADS)

    Dammers, Enrico; Shephard, Mark W.; Palm, Mathias; Cady-Pereira, Karen; Capps, Shannon; Lutsch, Erik; Strong, Kim; Hannigan, James W.; Ortega, Ivan; Toon, Geoffrey C.; Stremme, Wolfgang; Grutter, Michel; Jones, Nicholas; Smale, Dan; Siemons, Jacob; Hrpcek, Kevin; Tremblay, Denis; Schaap, Martijn; Notholt, Justus; Erisman, Jan Willem

    2017-07-01

Presented here is the validation of the CrIS (Cross-track Infrared Sounder) fast physical NH3 retrieval (CFPR) column and profile measurements using ground-based Fourier transform infrared (FTIR) observations. We use the total columns and profiles from seven FTIR sites in the Network for the Detection of Atmospheric Composition Change (NDACC) to validate the satellite data products. The overall FTIR and CrIS total columns have a positive correlation of r = 0.77 (N = 218) with very little bias (a slope of 1.02). Binning the comparisons by total column amount, for concentrations larger than 1.0 × 10^16 molecules cm^-2, i.e. ranging from moderate to polluted conditions, the relative difference is on average ~0-5% with a standard deviation of 25-50%, which is comparable to the estimated retrieval uncertainties in both CrIS and the FTIR. For the smallest total column range (< 1.0 × 10^16 molecules cm^-2), where there are a large number of observations at or near the CrIS noise level (detection limit), the absolute differences between the CrIS and FTIR total columns show a slight positive column bias. The CrIS and FTIR profile comparison differences are mostly within the range of the single-level retrieved profile values from estimated retrieval uncertainties, showing average differences in the range of ~20 to 40%. The CrIS retrievals typically show good vertical sensitivity down into the boundary layer, typically peaking at ~850 hPa (~1.5 km). At this level the median absolute difference is 0.87 (std = ±0.08) ppb, corresponding to a median relative difference of 39% (std = ±2%). Most of the absolute and relative profile comparison differences are in the range of the estimated retrieval uncertainties. At the surface, where CrIS typically has lower sensitivity, it tends to overestimate in low-concentration conditions and underestimate in higher atmospheric concentration conditions.

  5. Forecast models for suicide: Time-series analysis with data from Italy.

    PubMed

    Preti, Antonio; Lentini, Gianluca

    2016-01-01

The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study sets out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101 499 male suicides and 39 681 female suicides, which occurred in Italy from 1969 to 2003, were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of the forecasting models on the monthly number of suicides. The following measures of accuracy were used: mean absolute error, root mean squared error, mean absolute percentage error and mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards to a maximum around 1990 and a decrease thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecasting future trends of suicide with a margin of error of around 10%. The finding is clearer in the male than in the female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem able to derive information on deviation from the mean when this occurs as a zenith, but fail to reproduce it when it occurs as a nadir. Preventative efforts should concentrate on the factors that influence the occurrence of increases above the main trend in both seasonal and cyclic patterns of suicides.
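The four accuracy measures named above have standard definitions (MASE scaling the test-set MAE by the in-sample MAE of the one-step naive forecast); a compact sketch with invented monthly counts, purely for illustration:

```python
def mae(forecast, actual):
    """Mean absolute error."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def rmse(forecast, actual):
    """Root mean squared error."""
    return (sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)) ** 0.5

def mape(forecast, actual):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(f - a) / abs(a) for f, a in zip(forecast, actual)) / len(actual)

def mase(forecast, actual, training):
    """Mean absolute scaled error: MAE scaled by the in-sample MAE
    of the one-step naive (last-value) forecast."""
    naive_mae = sum(abs(training[i] - training[i - 1])
                    for i in range(1, len(training))) / (len(training) - 1)
    return mae(forecast, actual) / naive_mae

# Invented monthly counts, for illustration only.
training = [100, 110, 105, 115]   # fitting period
actual = [120, 118]               # test period
forecast = [115, 121]             # model output for the test period
print(mae(forecast, actual), round(mase(forecast, actual, training), 2))
```

A MASE below 1 means the model beats the naive last-value forecast on average.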

  6. Excited atoms in the free-burning Ar arc: treatment of the resonance radiation

    NASA Astrophysics Data System (ADS)

    Golubovskii, Yu; Kalanov, D.; Gortschakow, S.; Baeva, M.; Uhrlandt, D.

    2016-11-01

    The collisional-radiative model with an emphasis on the accurate treatment of the resonance radiation transport is developed and applied to the free-burning Ar arc plasma. This model allows for analysis of the influence of resonance radiation on the spatial density profiles of the atoms in different excited states. The comparison of the radial density profiles obtained using an effective transition probability approximation with the results of the accurate solution demonstrates the distinct impact of transport on the profiles and absolute densities of the excited atoms, especially in the arc fringes. The departures from the Saha-Boltzmann equilibrium distributions, caused by different radiative transitions, are analyzed. For the case of the DC arc, the local thermodynamic equilibrium (LTE) state holds close to the arc axis, while strong deviations from the equilibrium state on the periphery occur. In the intermediate radial positions the conditions of partial LTE are fulfilled.

  7. Evaluation of the plasma hydrogen isotope content by residual gas analysis at JET and AUG

    NASA Astrophysics Data System (ADS)

    Drenik, A.; Alegre, D.; Brezinsek, S.; De Castro, A.; Kruezi, U.; Oberkofler, M.; Panjan, M.; Primc, G.; Reichbauer, T.; Resnik, M.; Rohde, V.; Seibt, M.; Schneider, P. A.; Wauters, T.; Zaplotnik, R.; ASDEX-Upgrade, the; EUROfusion MST1 Teams; contributors, JET

    2017-12-01

    The isotope content of the plasma reflects on the dynamics of isotope changeover experiments, efficiency of wall conditioning and the performance of a fusion device in the active phase of operation. The assessment of the isotope ratio of hydrogen and methane molecules is used as a novel method of assessing the plasma isotope ratios at JET and ASDEX-Upgrade (AUG). The isotope ratios of both molecules in general shows similar trends as the isotope ratio detected by other diagnostics. At JET, the absolute values of RGA signals are in relatively good agreement with each other and with spectroscopy data, while at AUG the deviation from neutral particle analyser data are larger, and the results show a consistent spatial distribution of the isotope ratio. It is further shown that the isotope ratio of the hydrogen molecule can be used to study the degree of dissociation of the injected gas during changeover experiments.

  8. New generalized corresponding states correlation for surface tension of normal saturated liquids

    NASA Astrophysics Data System (ADS)

    Yi, Huili; Tian, Jianxiang

    2015-08-01

    A new simple correlation based on the principle of corresponding state is proposed to estimate the temperature-dependent surface tension of normal saturated liquids. The new correlation contains three coefficients obtained by fitting 17,051 surface tension data of 38 saturated normal liquids. These 38 liquids contain refrigerants, hydrocarbons and some other inorganic liquids. The new correlation requires only the triple point temperature, triple point surface tension and critical point temperature as input and is able to well represent the experimental surface tension data for each of the 38 saturated normal liquids from the triple temperature up to the point near the critical point. The new correlation gives absolute average deviations (AAD) values below 3% for all of these 38 liquids with the only exception being octane with AAD=4.30%. Thus, the new correlation gives better overall results in comparison with other correlations for these 38 normal saturated liquids.

  9. COBE's search for structure in the Big Bang

    NASA Technical Reports Server (NTRS)

    Soffen, Gerald (Editor); Guerny, Gene (Editor); Keating, Thomas (Editor); Moe, Karen (Editor); Sullivan, Walter (Editor); Truszkowski, Walt (Editor)

    1989-01-01

    The launch of Cosmic Background Explorer (COBE) and the definition of Earth Observing System (EOS) are two of the major events at NASA-Goddard. The three experiments contained in COBE (Differential Microwave Radiometer (DMR), Far Infrared Absolute Spectrophotometer (FIRAS), and Diffuse Infrared Background Experiment (DIRBE)) are very important in measuring the big bang. DMR measures the isotropy of the cosmic background (direction of the radiation). FIRAS looks at the spectrum over the whole sky, searching for deviations, and DIRBE operates in the infrared part of the spectrum gathering evidence of the earliest galaxy formation. By special techniques, the radiation coming from the solar system will be distinguished from that of extragalactic origin. Unique graphics will be used to represent the temperature of the emitting material. A cosmic event will be modeled of such importance that it will affect cosmological theory for generations to come. EOS will monitor changes in the Earth's geophysics during a whole solar color cycle.

  10. New correlation for the temperature-dependent viscosity for saturated liquids

    NASA Astrophysics Data System (ADS)

    Tian, Jianxiang; Zhang, Laibin

    2016-11-01

    Based on the recent progress on both the temperature dependence of surface tension [H. L. Yi, J. X. Tian, A. Mulero and I. Cachading, J. Therm. Anal. Calorim. 126 (2016) 1603, and the correlation between surface tension and viscosity of liquids [J. X. Tian and A. Mulero, Ind. Eng. Chem. Res. 53 (2014) 9499], we derived a new multiple parameter correlation to describe the temperature-dependent viscosity of liquids. This correlation is verified by comparing with data from NIST Webbook for 35 saturated liquids including refrigerants, hydrocarbons and others, in a wide temperature range from the triple point temperature to the one very near to the critical temperature. Results show that this correlation predicts the NIST data with high accuracy with absolute average deviation (AAD) less than 1% for 21 liquids and more than 3% for only four liquids, and is clearly better than the popularly used Vogel-Fulcher-Tamman (VFT) correlation.

  11. A Group Decision Framework with Intuitionistic Preference Relations and Its Application to Low Carbon Supplier Selection.

    PubMed

    Tong, Xiayu; Wang, Zhou-Jing

    2016-09-19

    This article develops a group decision framework with intuitionistic preference relations. An approach is first devised to rectify an inconsistent intuitionistic preference relation to derive an additive consistent one. A new aggregation operator, the so-called induced intuitionistic ordered weighted averaging (IIOWA) operator, is proposed to aggregate individual intuitionistic fuzzy judgments. By using the mean absolute deviation between the original and rectified intuitionistic preference relations as an order inducing variable, the rectified consistent intuitionistic preference relations are aggregated into a collective preference relation. This treatment is presumably able to assign different weights to different decision-makers' judgments based on the quality of their inputs (in terms of consistency of their original judgments). A solution procedure is then developed for tackling group decision problems with intuitionistic preference relations. A low carbon supplier selection case study is developed to illustrate how to apply the proposed decision model in practice.

  12. Astrometric cosmology .

    NASA Astrophysics Data System (ADS)

    Lattanzi, M. G.

    The accurate measurement of the motions of stars in our Galaxy can provide access to the cosmological signatures in the disk and halo, while astrometric experiments from within our Solar System can uniquely probe possible deviations from General Relativity. This article will introduce to the fact that astrometry has the potential, thanks also to impressive technological advancements, to become a key player in the field of local cosmology. For example, accurate absolute kinematics at the scale of the Milky Way can, for the first time in situ, account for the predictions made by the cold dark matter model for the Galactic halo, and eventually map out the distribution of dark matter, or other formation mechanisms, required to explain the signatures recently identified in the old component of the thick disk. Final notes dwell on to what extent Gaia can fulfill the expectations of astrometric cosmology and on what must instead be left to future, specifically designed, astrometric experiments.

  13. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel as our benchmarks, we also use generalized realized risk measures, realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better volatility estimation for the in-sample and substantial improvement in volatility forecasting for the out-of-sample. In particular, the realized expected shortfall performs best for all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.

  14. Kinetic energy spectra in thermionic emission from small tungsten cluster anions: evidence for nonclassical electron capture.

    PubMed

    Concina, Bruno; Baguenard, Bruno; Calvo, Florent; Bordas, Christian

    2010-03-14

    The delayed electron emission from small mass-selected anionic tungsten clusters W(n)(-) has been studied for sizes in the range 9 < or = n < or = 21. Kinetic energy spectra have been measured for delays of about 100 ns after laser excitation by a velocity-map imaging spectrometer. They are analyzed in the framework of microreversible statistical theories. The low-energy behavior shows some significant deviations with respect to the classical Langevin capture model, which we interpret as possibly due to the influence of quantum dynamical effects such as tunneling through the centrifugal barrier, rather than shape effects. The cluster temperature has been extracted from both the experimental kinetic energy spectrum and the absolute decay rate. Discrepancies between the two approaches suggest that the sticking probability can be as low as a few percent for the smallest clusters.

  15. A Group Decision Framework with Intuitionistic Preference Relations and Its Application to Low Carbon Supplier Selection

    PubMed Central

    Tong, Xiayu; Wang, Zhou-Jing

    2016-01-01

    This article develops a group decision framework with intuitionistic preference relations. An approach is first devised to rectify an inconsistent intuitionistic preference relation to derive an additive consistent one. A new aggregation operator, the so-called induced intuitionistic ordered weighted averaging (IIOWA) operator, is proposed to aggregate individual intuitionistic fuzzy judgments. By using the mean absolute deviation between the original and rectified intuitionistic preference relations as an order inducing variable, the rectified consistent intuitionistic preference relations are aggregated into a collective preference relation. This treatment is presumably able to assign different weights to different decision-makers’ judgments based on the quality of their inputs (in terms of consistency of their original judgments). A solution procedure is then developed for tackling group decision problems with intuitionistic preference relations. A low carbon supplier selection case study is developed to illustrate how to apply the proposed decision model in practice. PMID:27657097

  16. Portfolio optimization by using linear programing models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discussed the investment portfolio optimization using linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and each investor has a risk tolerance on the investment portfolio. To complete the investment portfolio optimization problem, the issue is arranged into a linear programming model. Furthermore, determination of the optimum solution for linear programming is done by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by genetic algorithm approach produces more optimal efficient portfolio, compared to the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative on determining the investment portfolio optimization, particularly using linear programming models.

  17. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

    Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data which provides an indication of risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.

  18. Acceptability of GM foods among Pakistani consumers.

    PubMed

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-02

    In Pakistan majority of the consumers do not have information about genetically modified (GM) foods. In developing countries particularly in Pakistan few studies have focused on consumers' acceptability about GM foods. Using comprehensive primary dataset collected from 320 consumers in 2013 from Pakistan, this study analyzes the determinants of consumers' acceptability of GM foods. The data was analyzed by employing the bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods compared to rural consumers. The acceptance of GM foods was more among females' consumers as compared to male consumers. In addition, the older consumers were more willing to accept GM food compared to young consumers. The acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods. The acceptability of the GM foods also reduces the risks among Pakistani consumers.

  19. Isometric Arm Strength and Subjective Rating of Upper Limb Fatigue in Two-Handed Carrying Tasks

    PubMed Central

    Li, Kai Way; Chiu, Wen-Sheng

    2015-01-01

    Sustained carrying could result in muscular fatigue of the upper limb. Ten male and ten female subjects were recruited for measurements of isometric arm strength before and during carrying a load for a period of 4 minutes. Two levels of load of carrying were tested for each of the male and female subjects. Exponential function based predictive equations for the isometric arm strength were established. The mean absolute deviations of these models in predicting the isometric arm strength were in the range of 3.24 to 17.34 N. Regression analyses between the subjective ratings of upper limb fatigue and force change index (FCI) for the carrying were also performed. The results indicated that the subjective rating of muscular fatigue may be estimated by multiplying the FCI with a constant. The FCI may, therefore, be adopted as an index to assess muscular fatigue for two-handed carrying tasks. PMID:25794159

  20. Isometric arm strength and subjective rating of upper limb fatigue in two-handed carrying tasks.

    PubMed

    Li, Kai Way; Chiu, Wen-Sheng

    2015-01-01

    Sustained carrying could result in muscular fatigue of the upper limb. Ten male and ten female subjects were recruited for measurements of isometric arm strength before and during carrying a load for a period of 4 minutes. Two levels of load of carrying were tested for each of the male and female subjects. Exponential function based predictive equations for the isometric arm strength were established. The mean absolute deviations of these models in predicting the isometric arm strength were in the range of 3.24 to 17.34 N. Regression analyses between the subjective ratings of upper limb fatigue and force change index (FCI) for the carrying were also performed. The results indicated that the subjective rating of muscular fatigue may be estimated by multiplying the FCI with a constant. The FCI may, therefore, be adopted as an index to assess muscular fatigue for two-handed carrying tasks.

Top