2011-01-01
Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but it presently has limited value in predicting progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press's Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64).
The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even below a median value of 0.5. Conclusions: When sensitivity, specificity and overall classification accuracy are all taken into account, Random Forests and Linear Discriminant Analysis rank first among the classifiers tested for predicting dementia from neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
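The comparison protocol described above can be sketched in a few lines. This is not the study's own code or data: the synthetic dataset, the random seed, and the default hyperparameters are illustrative assumptions; only the 10-predictor binary setup, the choice of classifiers, and the 5-fold cross-validation mirror the abstract.

```python
# Sketch: compare fold-wise accuracies of three of the classifiers
# named above under 5-fold cross-validation on synthetic data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# 10 predictors stand in for the 10 neuropsychological tests;
# the binary outcome stands in for progression to dementia.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
}

# 5-fold cross-validation, as in the study; the median fold accuracy
# corresponds to the Me values reported in the abstract.
scores = {name: cross_val_score(clf, X, y, cv=5)
          for name, clf in classifiers.items()}
for name, s in scores.items():
    print(f"{name}: median accuracy = {sorted(s)[2]:.2f}")
```

In the study itself, distributions of these fold-wise parameters were then compared with Friedman's test.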
ERIC Educational Resources Information Center
Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.
2009-01-01
The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…
Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli
2016-07-08
The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe, in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR), using thoracic CT as the gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:383-389, 2016.
Investigation of ODE integrators using interactive graphics [Ordinary Differential Equations]
NASA Technical Reports Server (NTRS)
Brown, R. L.
1978-01-01
Two FORTRAN programs that use an interactive graphics terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex-variable solutions, and generates plots showing the accuracy and error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog to the stability region for classes of non-linear ODEs, as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
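The accuracy index above, a linear combination of sensitivity and specificity, can be made concrete with a short sketch. The counts below are illustrative, not from the paper; Youden's index is the equal-weight special case.

```python
# Sketch of a linear-combination accuracy index for a binary marker.
def youden_index(tp, fn, tn, fp):
    """J = sensitivity + specificity - 1; J > 0 beats chance."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

def weighted_accuracy(tp, fn, tn, fp, w=0.5):
    """General linear combination: w*sensitivity + (1-w)*specificity."""
    return w * tp / (tp + fn) + (1 - w) * tn / (tn + fp)

# Example: sensitivity 0.8 (80 of 100 diseased test positive),
# specificity 0.9 (90 of 100 healthy test negative)
print(youden_index(80, 20, 90, 10))
print(weighted_accuracy(80, 20, 90, 10))
```

A sequential design would compare such an index against the minimal level of acceptance at each stage, stopping early when the null can be rejected or accepted.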
Research of Face Recognition with Fisher Linear Discriminant
NASA Astrophysics Data System (ADS)
Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.
2018-01-01
Face identification systems are developing rapidly, and these developments drive the advancement of biometric identification systems with high accuracy. However, building a face recognition system that is both robust and highly accurate remains difficult: human faces have diverse expressions and changing attributes such as eyeglasses, mustaches and beards. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes, maximizing the distance between classes while minimizing the distance within classes, so as to produce better classification.
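A minimal sketch of the FLD idea follows. The synthetic feature vectors below are an assumption standing in for extracted face features (a real pipeline would first reduce face images, e.g. with PCA, before applying FLD).

```python
# Sketch: Fisher Linear Discriminant on synthetic two-class
# "face feature" vectors.  FLD seeks the projection that maximizes
# between-class scatter relative to within-class scatter.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=1.0, size=(50, 20))  # person A features
class_b = rng.normal(loc=1.5, scale=1.0, size=(50, 20))  # person B features
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

fld = LinearDiscriminantAnalysis()
fld.fit(X, y)
print("training accuracy:", fld.score(X, y))
```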
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank, up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
Linear estimation of coherent structures in wall-bounded turbulence at Re τ = 2000
NASA Astrophysics Data System (ADS)
Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.
2018-04-01
The estimation problem for a fully-developed turbulent channel flow at Re τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
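The estimation idea above (measure at one location, estimate the state elsewhere through a model-based Kalman filter) can be illustrated on a toy system. The scalar linear model below is an assumption replacing the paper's Navier-Stokes-based model; only the predict/update structure is the point.

```python
# Minimal scalar Kalman filter sketch: toy linear system
#   x_{k+1} = a*x_k + w_k,   y_k = x_k + v_k
import numpy as np

a, q, r = 0.9, 0.1, 0.5        # model coefficient, process and measurement noise
rng = np.random.default_rng(1)

x_true, x_hat, p = 1.0, 0.0, 1.0
errors = []
for _ in range(200):
    # simulate the true system and a noisy measurement
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = x_true + rng.normal(0, np.sqrt(r))
    # predict step
    x_hat, p = a * x_hat, a * a * p + q
    # update step with the Kalman gain
    k = p / (p + r)
    x_hat, p = x_hat + k * (y - x_hat), (1 - k) * p
    errors.append((x_true - x_hat) ** 2)

print("mean squared estimation error:", np.mean(errors))
```

The filter's error should fall below the raw measurement noise variance, mirroring how the linear model in the paper predicts the estimator's achievable performance.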
Improved quality control of [18F]fluoromethylcholine.
Nader, Michael; Reindl, Dietmar; Eichinger, Reinhard; Beheshti, Mohsen; Langsteger, Werner
2011-11-01
With respect to the broad application of [(18)F-methyl]fluorocholine (FCH), there is a need for a safe, but also efficient and convenient, method for routine quality control of FCH. Therefore, a GC method was developed and validated that allows the simultaneous quantitation of all chemical impurities and residual solvents, such as acetonitrile, ethanol, dibromomethane and N,N-dimethylaminoethanol. Analytical GC was performed with a GC capillary column Optima 1701 (50 m×0.32 mm) and a deactivated phenyl-Sil pre-column (10 m×0.32 mm) in line with a flame ionization detector (FID). The validation includes the following tests: specificity, range, accuracy, linearity, precision, limit of detection (LOD) and limit of quantitation (LOQ) for all listed substances. The described GC method has been successfully used for the quantitation of the listed chemical impurities. The specificity of the GC separation was proven by demonstrating that the appearing peaks are completely separated from each other and that a resolution R≥1.5 could be achieved for the separation of the peaks. The specified range confirmed that the analytical procedure provides an acceptable degree of linearity, accuracy and precision. For each substance, a range from 2% to 120% of the specification limit could be demonstrated. The corresponding LOD values were determined and were much lower than the specification limits. An efficient and convenient GC method for the quality control of FCH has been developed and validated which meets all acceptance criteria in terms of linearity, specificity, precision, accuracy, LOD and LOQ. Copyright © 2011 Elsevier Inc. All rights reserved.
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides that play critical roles in the regulation of gene expression. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples, much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets is unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity; these optimization strategies may impose serious limitations in applications. Moreover, to meet continuously rising expectations of these computational tools, improving prediction accuracy has become extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on a newly designed large dataset improved by 7%, to 93%. The meta-predictor also proved to be less dependent on the dataset and to have a more refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors.
Second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
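The meta-strategy described above can be sketched as follows. The five base-predictor scores are simulated here (mirMeta's actual base predictors and its trained network are not reproduced); the sigmoid transform and network size are illustrative assumptions.

```python
# Sketch of a meta-predictor: non-linearly transform the outputs of
# five base predictors, then feed them to a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, size=n)                    # true miRNA / not-miRNA
# simulated scores from 5 base predictors, each correlated with the label
base_scores = y[:, None] + rng.normal(0, 1.0, size=(n, 5))

# non-linear (sigmoid) transformation of each predictor's output
transformed = 1.0 / (1.0 + np.exp(-base_scores))

meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
meta.fit(transformed, y)
print("meta-predictor training accuracy:", meta.score(transformed, y))
```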
Farrar, Christian T; Dai, Guangping; Novikov, Mikhail; Rosenzweig, Anthony; Weissleder, Ralph; Rosen, Bruce R; Sosnovik, David E
2008-06-01
Off-resonance imaging (ORI) techniques are being increasingly used to image iron oxide imaging agents such as monocrystalline iron oxide nanoparticles (MION). However, the diagnostic accuracy, linearity, and field dependence of ORI have not been fully characterized. In this study, the sensitivity, specificity, and linearity of ORI were thus examined as a function of both MION concentration and magnetic field strength (4.7 and 14 T). MION phantoms with and without an air interface as well as MION uptake in a mouse model of healing myocardial infarction were imaged. MION-induced resonance shifts were shown to increase linearly with MION concentration. In contrast, the ORI signal/sensitivity was highly non-linear, initially increasing with MION concentration until T2 became comparable to the TE and decreasing thereafter. The specificity of ORI to distinguish MION-induced resonance shifts from on-resonance water was found to decrease with increasing field because of the increased on-resonance water linewidths (15 Hz at 4.7 T versus 45 Hz at 14 T). Large resonance shifts (approximately 300 Hz) were observed at air interfaces at 4.7 T, both in vitro and in vivo, and led to poor ORI specificity for MION concentrations less than 150 microg Fe/mL. The in vivo ORI sensitivity was sufficient to detect the accumulation of MION in macrophages infiltrating healing myocardial infarcts, but the specificity was limited by non-specific areas of positive contrast at the air/tissue interfaces of the thoracic wall and the descending aorta. Improved specificity and linearity can, however, be expected at lower fields where decreased on-resonance water linewidths, reduced air-induced resonance shifts, and longer T2 relaxation times are observed. The optimal performance of ORI will thus likely be seen at low fields, with moderate MION concentrations and with sequences containing very short TEs. Copyright (c) 2007 John Wiley & Sons, Ltd.
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random-effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite being three-dimensional.
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. 
Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load-estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
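The ratio-estimator family compared above can be illustrated with a toy example. The values below are invented, and this is the basic flow-weighted ratio estimator; Beale's version adds a bias-correction factor not shown here.

```python
# Sketch: flow-weighted ratio estimator for mean constituent load.
import numpy as np

# sampled days: concentration (mg/L) and discharge (m^3/s)
conc = np.array([2.0, 3.5, 1.8, 4.0, 2.6])
flow = np.array([10.0, 30.0, 8.0, 45.0, 15.0])
daily_load = conc * flow                 # instantaneous load on sampled days

# mean discharge over the full (mostly unsampled) period, assumed known
# from continuous gaging
mean_flow_all = 22.0

# ratio estimator: mean sampled load / mean sampled flow, scaled by
# the mean flow of the whole period
ratio = daily_load.mean() / flow.mean()
est_mean_load = ratio * mean_flow_all
print("estimated mean load:", est_mean_load)
```

Because the ratio adapts to the concentration-flow relation, such estimators tolerate sampling regimes that over- or under-represent high flows better than simple averaging, consistent with the findings above.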
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models to earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models with an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device for modeling and forecasting the raw seismic data of the Hindu Kush region.
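The benchmark linear AR model used throughout the comparison above can be sketched on simulated data. The series, coefficient, and train/test split below are assumptions; the seismic data and the richer SETAR/TAR/ANN specifications are not reproduced.

```python
# Sketch: fit an AR(1) model by least squares and compute
# one-step-ahead out-of-sample forecasts.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = np.zeros(n)
for t in range(1, n):                    # simulate an AR(1) process
    x[t] = 0.7 * x[t - 1] + rng.normal(0, 1.0)

train, test = x[:250], x[250:]
# least-squares fit of x_t = phi * x_{t-1} on the training window
phi = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])

# one-step-ahead out-of-sample forecasts over the hold-out window
preds = phi * x[249:-1]
mse_ar = np.mean((test - preds) ** 2)
mse_naive = np.mean((test - x[249:-1]) ** 2)   # random-walk benchmark
print(f"phi = {phi:.2f}, AR MSE = {mse_ar:.2f}, naive MSE = {mse_naive:.2f}")
```

Out-of-sample MSE comparisons of this kind underlie the model rankings reported above.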
Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M
2016-09-01
To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
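The time-trend check described above can be sketched directly. The accuracy estimates and weights below are invented for illustration, not data from the review; `np.polyfit` implements the weighted fit when given square-root weights.

```python
# Sketch: weighted linear regression of a summary accuracy estimate
# against publication rank, as in the cumulative meta-analysis above.
import numpy as np

rank = np.arange(1, 9)                       # studies ordered by date
sens = np.array([0.92, 0.90, 0.88, 0.87, 0.86, 0.85, 0.85, 0.84])
weight = np.array([20, 35, 15, 50, 40, 30, 25, 45], dtype=float)  # e.g. sample sizes

# weighted least-squares slope: a negative slope means early estimates
# were optimistic relative to later ones
slope, intercept = np.polyfit(rank, sens, 1, w=np.sqrt(weight))
print(f"slope per publication rank: {slope:.3f}")
```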
Madison, Matthew J; Bradshaw, Laine P
2015-06-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random-effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used; the latter is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example; both classes show high sensitivity but two clearly different levels of specificity. For the procalcitonin example, the approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
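The componentwise idea above can be sketched by fitting a two-component bivariate normal mixture to (logit sensitivity, logit specificity) pairs. The study-level data below are simulated assumptions; the paper's CT angiography data are not used, and the EM fit here is a generic Gaussian mixture rather than the authors' exact model.

```python
# Sketch: two latent classes of diagnostic accuracy recovered by a
# bivariate normal mixture on the logit scale.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

# two simulated latent classes: both highly sensitive, but with two
# different levels of specificity (as in the CT angiography example)
class1 = np.column_stack([logit(rng.uniform(0.85, 0.95, 30)),
                          logit(rng.uniform(0.60, 0.70, 30))])
class2 = np.column_stack([logit(rng.uniform(0.85, 0.95, 30)),
                          logit(rng.uniform(0.90, 0.97, 30))])
data = np.vstack([class1, class2])

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
for mean in gm.means_:
    print("component mean sensitivity/specificity:",
          expit(mean[0]).round(2), expit(mean[1]).round(2))
```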
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Di Claudio, Gianluca; Mirzaei, Hadis; Leong, Rupert; Grisan, Enrico
2016-03-01
Celiac disease (CD) is an immune-mediated enteropathy triggered by exposure to gluten and similar proteins in genetically susceptible persons, increasing their risk of various complications. Small-bowel mucosal damage due to CD involves various degrees of endoscopically relevant lesions, which are not easily recognized: their overall sensitivity and positive predictive values are poor even when zoom endoscopy is used. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to qualitatively evaluate mucosal alterations such as a decrease in goblet cell density, presence of villous atrophy or crypt hypertrophy. We present a method for automatically classifying CLE images into three different classes: normal regions, villous atrophy (VA) and crypt hypertrophy (CH). Classification is performed after a feature selection process in which four features are extracted from each image through the application of homomorphic filtering and border identification with Canny and Sobel operators. Three different classifiers were tested on a dataset of 67 images labeled by experts in the three classes (normal, VA and CH): a linear approach, a naive-Bayes quadratic approach and a standard quadratic analysis, all validated with ten-fold cross-validation. Linear classification achieves 82.09% accuracy (class accuracies: 90.32% for normal villi, 82.35% for VA and 68.42% for CH; sensitivity: 0.68, specificity: 1.00), naive-Bayes analysis returns 83.58% accuracy (90.32% for normal villi, 70.59% for VA and 84.21% for CH; sensitivity: 0.84, specificity: 0.92), while the quadratic analysis achieves a final accuracy of 94.03% (96.77% for normal villi, 94.12% for VA and 89.47% for CH; sensitivity: 0.89, specificity: 0.98).
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests were performed on the image sensor's innovative high-dynamic-range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
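The averaging argument above can be checked numerically. The Gaussian noise model and the values of N below are illustrative assumptions; the point is only that averaging N uncorrelated estimates reduces the RMS error roughly as the square root of N.

```python
# Sketch: averaging N uncorrelated speckle-perturbed centroid
# measurements reduces the RMS position error ~ sigma/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
true_pos, sigma, trials = 0.0, 1.0, 20000

for n in (1, 4, 16):
    # each trial: average n uncorrelated noisy measurements of the spot
    est = rng.normal(true_pos, sigma, size=(trials, n)).mean(axis=1)
    print(f"N={n:2d}: rms error = {est.std():.3f}")   # expect sigma/sqrt(n)
```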
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian responses such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatially and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for prediction. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Performance testing and results of the first Etec CORE-2564
NASA Astrophysics Data System (ADS)
Franks, C. Edward; Shikata, Asao; Baker, Catherine A.
1993-03-01
In order to write 64-megabit DRAM reticles, to prepare to write 256-megabit DRAM reticles, and in general to meet current and next-generation mask and reticle quality requirements, Hoya Micro Mask (HMM) installed the first CORE-2564 Laser Reticle Writer from Etec Systems, Inc. in 1991. The system was delivered as a CORE-2500XP and was subsequently upgraded to a 2564. The CORE (Custom Optical Reticle Engraver) system produces photomasks with an exposure strategy similar to that employed by an electron beam system, but it uses a laser beam to deliver the photoresist exposure energy. Since then, the 2564 has been tested with Etec's standard Acceptance Test Procedure and with several supplementary HMM techniques to ensure performance to all the advertised Etec specifications and to certain additional HMM requirements that were more demanding and/or more thorough than the advertised specifications. The primary purpose of the HMM tests was to more closely duplicate mask usage. The performance aspects covered by the tests include registration accuracy and repeatability; linewidth accuracy, uniformity and linearity; stripe butting; stripe and scan linearity; edge quality; system cleanliness; minimum geometry resolution; minimum address size; and plate loading accuracy and repeatability.
Overcoming learning barriers through knowledge management.
Dror, Itiel E; Makany, Tamas; Kemp, Jonathan
2011-02-01
The ability to learn depends greatly on how knowledge is managed; specifically, different note-taking techniques engage different cognitive processes and strategies. In this paper, we compared dyslexic and control participants using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery and complexity, metacognition, and memory. We found that participants with dyslexia using a non-linear note-taking technique outperformed the control group using linear note-taking, and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can help learners avoid some of these barriers. Copyright © 2010 John Wiley & Sons, Ltd.
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on solid-phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine-disrupting properties. Matrix-matched standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least-squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the larger concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limits of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
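The weighted least-squares step can be sketched directly from the weighted normal equations; the abstract does not state which weighting scheme (e.g. 1/x or 1/x²) the authors selected, so the choice below is an assumption for illustration.

```python
import numpy as np

def wls_line(x, y, w):
    """Weighted least-squares fit of y = b0 + b1*x via the weighted
    normal equations. w holds per-point weights; 1/x or 1/x**2 are
    schemes commonly compared in weighted-calibration work."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # [intercept, slope]
```

With heteroscedastic calibration data, weights such as 1/x² pull the fit toward the low-concentration points, which is exactly the improved low-end accuracy the abstract describes.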
Linear array ultrasonography to stage rectal neoplasias suitable for local treatment.
Ravizza, Davide; Tamayo, Darina; Fiori, Giancarla; Trovato, Cristina; De Roberto, Giuseppe; de Leone, Annalisa; Crosta, Cristiano
2011-08-01
Because of the many therapeutic options available, reliable staging is crucial for rectal neoplasia management. Adenomas and cancers limited to the submucosa without lymph node involvement may be treated locally. The aim of this study was to evaluate the diagnostic accuracy of endorectal ultrasonography in the staging of neoplasias suitable for local treatment. We considered all patients who underwent endorectal ultrasonography between 2001 and 2010. The study population consisted of 92 patients with 92 neoplasias (68 adenocarcinomas and 24 adenomas). A 5- and 7.5-MHz linear array echoendoscope was used. The postoperative histopathologic result was compared with the preoperative staging defined by endorectal ultrasonography. Adenomas and cancers limited to the submucosa were considered together (pT0-1). The sensitivity, specificity, overall accuracy rate, positive predictive value, and negative predictive value of endorectal ultrasonography for pT0-1 were 86%, 95.6%, 91.3%, 94.9% and 88.7%, respectively. Those for nodal involvement were 45.4%, 95.5%, 83%, 76.9% and 84%, with 3 false positives and 12 false negatives. For combined pT0-1 and pN0, endorectal ultrasonography showed 87.5% sensitivity, 95.9% specificity, a 92% overall accuracy rate, a 94.9% positive predictive value and a 90.2% negative predictive value. Endorectal linear array ultrasonography is a reliable tool for detecting rectal neoplasias suitable for local treatment. Copyright © 2011 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
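The reported rates follow from a standard 2x2 table. In the check below, TP = 10 and TN = 64 for nodal staging are reconstructed from the stated 3 false positives, 12 false negatives and the reported rates, so treat them as an inference rather than the paper's raw counts.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Nodal involvement (counts inferred from the abstract's rates):
print(diagnostic_metrics(tp=10, fp=3, fn=12, tn=64))
```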
Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.
Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A
2010-01-10
As the bandwidth and linearity of frequency-modulated continuous-wave (FMCW) chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and highly linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed for their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.
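The bandwidth-resolution link is the usual FMCW relation ΔR = c/(2B), so a several-THz chirp corresponds to a range resolution of tens of microns:

```python
c = 299_792_458.0  # speed of light, m/s

def fmcw_range_resolution(bandwidth_hz):
    """Theoretical FMCW range resolution: dR = c / (2B)."""
    return c / (2.0 * bandwidth_hz)

# 3 THz of chirp bandwidth -> roughly 50 micrometers
print(fmcw_range_resolution(3e12))
```

Residual nonlinearity and dispersion then determine how much of this theoretical resolution survives in the measured precision and accuracy.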
Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme
2017-06-01
To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Accuracy (STARD) criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling, 95 vs. 89% (p < 0.02), while specificity was significantly higher for supine sampling compared with 24-h urine, 95 vs. 90% (p < 0.03). Partial areas under the curve were 0.942, 0.913, and 0.932 for supine sampling, seated sampling, and urine. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling, and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provide the highest accuracy at all prevalence rates.
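The linear dependence of accuracy on prevalence described above follows directly from accuracy = sensitivity·p + specificity·(1 − p): at zero prevalence accuracy equals specificity, and at prevalence 1 it equals sensitivity.

```python
def overall_accuracy(sens, spec, prevalence):
    """Overall accuracy as a linear function of disease prevalence:
    acc(p) = sens*p + spec*(1 - p)."""
    return sens * prevalence + spec * (1.0 - prevalence)

# Supine sampling (sens = spec = 0.95): accuracy is 95% at any prevalence.
# Seated sampling (sens = 0.89, spec = 0.94): accuracy falls from 94% to 89%
# as prevalence rises from 0 to 1, matching the linear trends reported above.
print(overall_accuracy(0.95, 0.95, 0.5), overall_accuracy(0.89, 0.94, 0.0))
```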
Shandilya, Sharad; Kurz, Michael C.; Ward, Kevin R.; Najarian, Kayvan
2016-01-01
Objective The timing of defibrillation during cardiopulmonary resuscitation (CPR) is mostly at arbitrary intervals, rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success, and multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Materials and Methods A retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed the Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. The performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. Results 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded an ROC-AUC of 83.2% and accuracy of 78.8% for the model built with ECG data only. Utilizing 10-fold cross-validation, at the 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At the 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI for the available 48 defibrillations boosted the ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity.
Conclusion At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations. Addition of partial end-tidal carbon dioxide (PetCO2) signal improves accuracy and sensitivity of the MDI prediction model. PMID:26741805
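Comparisons such as "74% vs. 53.6% sensitivity at the 80% specificity level" amount to reading one operating point off each classifier's ROC curve. A generic helper for doing this from raw scores (not the MDI or AMSA implementation, just an illustration of the evaluation):

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, min_specificity):
    """Highest sensitivity achievable at >= the given specificity.

    scores: higher = more likely positive; labels: 1 positive, 0 negative.
    Scans every threshold induced by the observed scores.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    neg, pos = scores[labels == 0], scores[labels == 1]
    best = 0.0
    for t in np.unique(scores):
        spec = np.mean(neg < t)          # negatives correctly below threshold
        if spec >= min_specificity:
            best = max(best, float(np.mean(pos >= t)))
    return best
```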
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
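The bias-corrected, transformed-linear idea can be sketched as follows: fit the rating curve in log space, then undo the back-transform bias with the log-normal correction factor exp(s²/2). The abstract does not give the study's exact bias-correction form, and the synthetic data and parameters below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sediment rating data: C = a * Q**b * exp(eps), eps ~ N(0, s2)
a, b, s2 = 0.5, 1.4, 0.25
Q = rng.uniform(1.0, 100.0, 20_000)
C = a * Q**b * np.exp(rng.normal(0.0, np.sqrt(s2), Q.size))

# Transformed-linear fit: ordinary least squares in log space
coef = np.polyfit(np.log(Q), np.log(C), 1)     # [slope b_hat, intercept log(a_hat)]
resid = np.log(C) - np.polyval(coef, np.log(Q))

naive = np.exp(np.polyval(coef, np.log(Q)))    # biased back-transform
corrected = naive * np.exp(resid.var() / 2.0)  # log-normal bias correction

true_mean = np.mean(a * Q**b * np.exp(s2 / 2.0))
print(abs(naive.mean() - true_mean), abs(corrected.mean() - true_mean))
```

Without the correction, the back-transformed mean load is biased low by the factor exp(s²/2) (about 13% here), which is exactly the bias the transformed-linear model must remove to compete with non-linear fits.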
High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization
1992-05-01
High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization: a thesis presented by Louis Joseph Poehlman, Captain, USAF. The recovered front-matter excerpts list sections on the attitude determination and control system architecture and on exact linearization using nonlinear feedback.
Jöres, A P W; Heverhagen, J T; Bonél, H; Exadaktylos, A; Klink, T
2016-02-01
The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple-trauma patients in comparison to 128-slice multislice computed tomography (MSCT). 106 multiple-trauma patients (33 female; 73 male) were retrospectively included in this study. All patients underwent LS of the whole body, including the extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated against MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2%, the specificity 93.3%, the positive predictive value 91%, and the negative predictive value 57.5%. The sensitivity for vertebral fractures was 16.7% with a specificity of 100%; for all other fractures the sensitivity was 48.7% and the specificity 98.2%. Pneumothoraces were detected in 12 patients by CT, but not by LS. 40 extremity fractures were detected by LS, of which 4 were dislocated and 2 were fully covered by MSCT. The diagnostic accuracy of LS is thus limited in the evaluation of acute trauma of the truncal skeleton: its overall sensitivity for truncal skeleton injuries in multiple-trauma patients was < 50%, and MSCT remains the preferred and reliable diagnostic reference standard. LS allows fast whole-body X-ray imaging and may be valuable for the quick detection of extremity fractures in trauma patients in addition to MSCT. © Georg Thieme Verlag KG Stuttgart · New York.
Vignon, P; Spencer, K T; Rambaud, G; Preux, P M; Krauss, D; Balasia, B; Lang, R M
2001-06-01
The relatively low specificity of transesophageal echocardiography (TEE) for the diagnosis of aortic dissection (AD) or traumatic disruption of the aorta (TDA) has been attributed to linear artifacts. We sought to determine the incidence of intra-aortic linear artifacts in a cohort of patients with suspected AD or TDA, to establish the differential TEE diagnostic criteria between these artifacts and true aortic flaps, and to evaluate their impact on TEE diagnostic accuracy. During an 8-year period, patients at high risk of AD (n = 261) or TDA (n = 90) who underwent a TEE study and had confirmed final diagnoses were studied. In an initial retrospective series, linear artifacts were observed within the ascending and descending aorta in 59 of 230 patients (26%) and 17 of 230 patients (7%), respectively. TEE findings associated with linear artifacts in the ascending aorta were as follows: displacement parallel to aortic walls; similar blood flow velocities on both sides; angle with the aortic wall > 85 degrees; and thickness > 2.5 mm. Diagnostic criteria of reverberant images in the descending aorta were as follows: displacement parallel to aortic walls, overimposition of blood flow, and similar blood flow velocities on both sides of the image. In a subsequent prospective series (n = 121), systematic use of these diagnostic criteria resulted in improved TEE specificity for the identification of true intra-aortic flaps. Misleading intra-aortic linear artifacts are frequently observed in patients undergoing a TEE study for suspected AD or TDA. Routine use of the herein-proposed diagnostic criteria promises to further improve TEE diagnostic accuracy in the setting of severely ill patients with potential need for prompt surgery.
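For illustration, the ascending-aorta criteria above can be encoded as a simple rule. Note that the paper reports these as findings associated with artifacts, not necessarily a strict conjunctive rule, so the AND below is an assumption, and the function name is hypothetical.

```python
def likely_artifact_ascending_aorta(parallel_to_walls,
                                    same_velocities_both_sides,
                                    angle_with_wall_deg,
                                    thickness_mm):
    """Flag a linear intra-aortic image as a probable reverberation
    artifact (rather than a true intimal flap) when it shows all of:
    displacement parallel to the aortic walls, similar blood-flow
    velocities on both sides, an angle with the wall > 85 degrees,
    and a thickness > 2.5 mm."""
    return (parallel_to_walls
            and same_velocities_both_sides
            and angle_with_wall_deg > 85.0
            and thickness_mm > 2.5)
```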
Ellingson, Laura D; Hibbing, Paul R; Kim, Youngwon; Frey-Law, Laura A; Saint-Maurice, Pedro F; Welk, Gregory J
2017-06-01
The wrist is increasingly being used as the preferred site for objectively assessing physical activity but the relative accuracy of processing methods for wrist data has not been determined. This study evaluates the validity of four processing methods for wrist-worn ActiGraph (AG) data against energy expenditure (EE) measured using a portable metabolic analyzer (OM; Oxycon mobile) and the Compendium of physical activity. Fifty-one adults (ages 18-40) completed 15 activities ranging from sedentary to vigorous in a laboratory setting while wearing an AG and the OM. Estimates of EE and categorization of activity intensity were obtained from the AG using a linear method based on Hildebrand cutpoints (HLM), a non-linear modification of this method (HNLM), and two methods developed by Staudenmayer based on a Linear Model (SLM) and using random forest (SRF). Estimated EE and classification accuracy were compared to the OM and Compendium using Bland-Altman plots, equivalence testing, mean absolute percent error (MAPE), and Kappa statistics. Overall, classification agreement with the Compendium was similar across methods ranging from a Kappa of 0.46 (HLM) to 0.54 (HNLM). However, specificity and sensitivity varied by method and intensity, ranging from a sensitivity of 0% (HLM for sedentary) to a specificity of ~99% for all methods for vigorous. None of the methods was significantly equivalent to the OM (p > 0.05). Across activities, none of the methods evaluated had a high level of agreement with criterion measures. Additional research is needed to further refine the accuracy of processing wrist-worn accelerometer data.
Volume of the human septal forebrain region is a predictor of source memory accuracy.
Butler, Tracy; Blackmon, Karen; Zaborszky, Laszlo; Wang, Xiuyuan; DuBois, Jonathan; Carlson, Chad; Barr, William B; French, Jacqueline; Devinsky, Orrin; Kuzniecky, Ruben; Halgren, Eric; Thesen, Thomas
2012-01-01
Septal nuclei, components of the basal forebrain, are strongly and reciprocally connected with the hippocampus and have been shown in animals to play a critical role in memory. In humans, the septal forebrain has received little attention. To examine the role of the human septal forebrain in memory, we acquired high-resolution magnetic resonance imaging scans from 25 healthy subjects and calculated septal forebrain volume using recently developed probabilistic cytoarchitectonic maps. We indexed memory with the California Verbal Learning Test-II. Linear regression showed that bilateral septal forebrain volume was a significant positive predictor of recognition memory accuracy. More specifically, larger septal forebrain volume was associated with the ability to accurately recall item source/context. Results indicate specific involvement of the septal forebrain in human source memory and underscore the need for additional research into the role of septal nuclei in memory and other impairments associated with human diseases.
Gromski, Piotr S; Correa, Elon; Vaughan, Andrew A; Wedge, David C; Turner, Michael L; Goodacre, Royston
2014-11-01
Accurate detection of certain chemical vapours is important, as these may be diagnostic for the presence of weapons, drugs of misuse or disease. In order to achieve this, chemical sensors could be deployed remotely. However, the readout from such sensors is a multivariate pattern, and this needs to be interpreted robustly using powerful supervised learning methods. Therefore, in this study, we compared the classification accuracy of four pattern recognition algorithms: linear discriminant analysis (LDA), partial least squares-discriminant analysis (PLS-DA), random forests (RF) and support vector machines (SVM) employing four different kernels. For this purpose, we used electronic nose (e-nose) sensor data (Wedge et al., Sensors Actuators B Chem 143:365-372, 2009). In order to allow direct comparison between the four algorithms, we employed two model validation procedures based on either 10-fold cross-validation or bootstrapping. The results show that LDA (91.56% accuracy) and SVM with a polynomial kernel (91.66% accuracy) were very effective at analysing these e-nose data, giving superior prediction accuracy, sensitivity and specificity in comparison to the other techniques employed. With respect to the e-nose sensor data studied here, our findings recommend that SVM with a polynomial kernel be favoured as a classification method over the other statistical models assessed. SVMs with non-linear kernels have the advantage that they can model non-linear as well as linear mappings from analytical data space to multi-group classifications and would thus be a suitable algorithm for the analysis of most e-nose sensor data.
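A two-class Fisher LDA with 10-fold cross-validation, of the kind compared above, can be sketched in plain NumPy. This is a generic implementation, not the authors' pipeline, and the small ridge term on the scatter matrix is an added numerical safeguard.

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher discriminant: returns (projection w, threshold)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thresh

def cv10_accuracy(X, y, seed=0):
    """10-fold cross-validated accuracy of the Fisher LDA above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 10)
    correct = 0
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        w, t = fisher_lda_fit(X[train], y[train])
        correct += np.sum((X[test] @ w > t).astype(int) == y[test])
    return correct / len(y)
```

The polynomial-kernel SVM that edged out LDA here differs only in allowing curved decision boundaries; on near-linearly-separable e-nose patterns the two land within a fraction of a percent of each other, as the abstract reports.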
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations on a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives a significant improvement in accuracy over classical second-order integration methods.
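For the quaternion rate equation q̇ = ½Ω(ω)q with the body rate ω held constant over a step, the local linearization admits a closed-form update, because Ω(ω)² = −|ω|²I reduces the matrix exponential to a cosine/sine form. A sketch of that step (scalar-first quaternion convention assumed; this illustrates the idea, not NASA's exact routine):

```python
import numpy as np

def omega_matrix(w):
    """4x4 rate matrix for scalar-first quaternions: qdot = 0.5*Omega(w) @ q."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate_quaternion(q, w, dt):
    """Closed-form step: q_next = expm(0.5*Omega(w)*dt) @ q.

    Since Omega(w)**2 = -|w|**2 * I, the exponential collapses to
    cos(|w|dt/2)*I + sin(|w|dt/2)/|w| * Omega(w), which preserves
    the quaternion norm for any step size.
    """
    n = np.linalg.norm(w)
    if n == 0.0:
        return q
    half = 0.5 * n * dt
    return (np.cos(half) * np.eye(4) + (np.sin(half) / n) * omega_matrix(w)) @ q
```

Unlike an Euler or classical second-order step, this update is exact for constant ω, which is why a local-linearization scheme stays stable and accurate at high angular rates.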
Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification.
Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong
2017-01-01
This study presents an improved method based on Gorji et al. (Neuroscience, 2015) by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo-Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%, performing better than Gorji's approach and five other state-of-the-art approaches. Copyright © Bentham Science Publishers.
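Linear regression classification (in the Naseem-style formulation this paper's classifier follows) assigns a test sample to the class whose training subspace reconstructs it with the smallest residual. A minimal sketch, generic rather than the paper's exact implementation:

```python
import numpy as np

def lrc_predict(x, class_dicts):
    """Linear regression classification.

    class_dicts: list of (d x n_c) matrices whose columns are training
    feature vectors of one class. The test vector x is regressed onto
    each class subspace; the class with the smallest reconstruction
    residual wins.
    """
    resids = []
    for X in class_dicts:
        beta, *_ = np.linalg.lstsq(X, x, rcond=None)
        resids.append(np.linalg.norm(x - X @ beta))
    return int(np.argmin(resids))
```

In the paper's setting, x would be the 256-element pseudo-Zernike feature vector of a slice, and there would be one dictionary per diagnostic class.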
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
Menzies, Sandra L.; Kadwad, Vijay; Pawloski, Lucia C.; Lin, Tsai-Lien; Baughman, Andrew L.; Martin, Monte; Tondella, Maria Lucia C.; Meade, Bruce D.
2009-01-01
Adequately sensitive and specific methods to diagnose pertussis in adolescents and adults are not widely available. Currently, no Food and Drug Administration-approved diagnostic assays are available for the serodiagnosis of Bordetella pertussis. Since concentrations of B. pertussis-specific antibodies tend to be high during the later phases of disease, a simple, rapid, easily transferable serodiagnostic test was developed. This article describes test development, initial evaluation of a prototype kit enzyme-linked immunosorbent assay (ELISA) in an interlaboratory collaborative study, and analytical validation. The data presented here demonstrate that the kit met all prespecified criteria for precision, linearity, and accuracy for samples with anti-pertussis toxin (PT) immunoglobulin G (IgG) antibody concentrations in the range of 50 to 150 ELISA units (EU)/ml, the range believed to be most relevant for serodiagnosis. The assay met the precision and linearity criteria for a wider range, namely, from 50 to 200 EU/ml; however, the accuracy criterion was not met at 200 EU/ml. When the newly adopted World Health Organization International Standard for pertussis antiserum (human) reference reagent was used to evaluate accuracy, the accuracy criteria were met from 50 to 200 international units/ml. In conclusion, the IgG anti-PT ELISA met all assay validation parameters within the range considered most relevant for serodiagnosis. This ELISA was developed and analytically validated as a user-friendly kit that can be used in both qualitative and quantitative formats. The technology for producing the kit is transferable to public health laboratories. PMID:19864485
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Diwanji, T; Zhang, B
2015-06-15
Purpose: To determine the ability of pharmacokinetic parameters derived from dynamic contrast-enhanced MRI (DCE-MRI) acquired before and during concurrent chemotherapy and radiation therapy to predict clinical response in patients with head and neck cancer. Methods: Eleven patients underwent a DCE-MRI scan at three time points: 1-2 weeks before treatment, 4-5 weeks after treatment initiation, and 3-4 months after treatment completion. Post-processing of MRI data included correction to reduce motion artifacts. The arterial input function was obtained by measuring the dynamic tracer concentration in the jugular veins. The volume transfer constant (Ktrans), extracellular extravascular volume fraction (ve), rate constant (Kep = Ktrans/ve), and plasma volume fraction (vp) were computed for primary tumors and cervical nodal masses. Patients were categorized into two groups based on response to therapy at 3-4 months: responders (no evidence of disease) and partial responders (regression of disease). Responses of the primary tumor and nodes were evaluated separately. A linear classifier and receiver operating characteristic curve analyses were used to determine the best model for discrimination of responders from partial responders. Results: When the above pharmacokinetic parameters of the primary tumor measured before and during treatment were incorporated into the linear classifier, a discriminative accuracy of 88.9%, with sensitivity = 100% and specificity = 66.7%, was observed between responders (n=6) and partial responders (n=3) for the primary tumor, with a corresponding accuracy of 44.4%, sensitivity of 66.7%, and specificity of 0% for nodal masses. When only pre-treatment parameters were used, the accuracy decreased to 66.7% (sensitivity = 66.7%, specificity = 66.7%) for the primary tumor, and to 33.3% (sensitivity = 50%, specificity = 0%) for nodal masses.
Conclusion: Higher accuracy, sensitivity, and specificity were obtained using DCE-MRI-derived pharmacokinetic parameters acquired before and during treatment as compared with those derived from the pre-treatment time point exclusively.
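The pharmacokinetic parameters above enter through the extended Tofts model, Ct(t) = vp·Cp(t) + Ktrans·∫₀ᵗ Cp(u)·exp(−kep·(t−u)) du with kep = Ktrans/ve. A sketch of the forward model via numerical convolution on a uniform time grid (parameter values in the usage are illustrative, not patient data):

```python
import numpy as np

def _trapezoid(f, dx):
    """Trapezoidal rule for uniformly sampled values f."""
    if f.size < 2:
        return 0.0
    return dx * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def extended_tofts(t, Cp, Ktrans, ve, vp):
    """Tissue concentration under the extended Tofts model:
    Ct(t) = vp*Cp(t) + Ktrans * int_0^t Cp(u) * exp(-kep*(t-u)) du,
    with kep = Ktrans / ve. t must be uniformly spaced."""
    kep = Ktrans / ve
    dt = t[1] - t[0]
    Ct = np.empty_like(t)
    for i, ti in enumerate(t):
        integrand = Cp[: i + 1] * np.exp(-kep * (ti - t[: i + 1]))
        Ct[i] = vp * Cp[i] + Ktrans * _trapezoid(integrand, dt)
    return Ct
```

Fitting this model to the measured tissue curve, with Cp taken from the jugular-vein arterial input function, yields the Ktrans, ve, kep and vp values fed to the linear classifier.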
47 CFR 73.1215 - Specifications for indicating instruments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... used by broadcast stations: (a) Linear scale instruments: (1) Length of scale shall not be less than 2.3 inches (5.8 cm). (2) Accuracy shall be at least 2 percent of the full scale reading. (3) The maximum rating of the meter shall be such that it does not read off scale during modulation or normal...
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of de-embedding of measured parameters and appropriate modelling of discrete components, and giving specific recipes for good modelling practices.
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-11-21
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
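For reference, the compressible neo-Hookean material named above has a simple closed-form strain-energy density. The variant below is one common compressible form (the specific formulation and material parameters used in the study may differ; mu and lam here are illustrative):

```python
import numpy as np

def neo_hookean_energy(F, mu=1.0, lam=1.0):
    # One common compressible neo-Hookean strain-energy density:
    #   W = mu/2 * (I1 - 3) - mu * ln J + lam/2 * (ln J)^2
    # where F is the 3x3 deformation gradient, J = det F, I1 = tr(F'F)
    J = np.linalg.det(F)
    I1 = np.trace(F.T @ F)
    return 0.5 * mu * (I1 - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2
```

The energy vanishes in the undeformed state (F = I) and grows under stretch, which is what makes the model a convenient baseline against the uncoupled Mooney-Rivlin form compared in the study.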
UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms
NASA Astrophysics Data System (ADS)
Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.
2016-01-01
An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo
2017-01-01
Purpose To investigate dynamic contrast enhanced-MRI (DCE-MRI) in the preoperative chemo-radiotherapy (CRT) assessment for locally advanced rectal cancer (LARC) compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Methods 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed by measuring SIS: a linear combination of the percentage change (Δ) of maximum signal difference (MSD) and wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using maximum SUV (SUVmax). Tumor regression grade (TRG) was estimated after surgery. Non-parametric tests and receiver operating characteristic analysis were used. Results 55 patients (TRG1-2) were classified as responders and 20 subjects as non-responders. ΔSIS reached a sensitivity of 93%, specificity of 80%, and accuracy of 89% (cut-off 6%) in differentiating responders from non-responders, and a sensitivity of 93%, specificity of 69%, and accuracy of 79% (cut-off 30%) in identifying pathological complete response (pCR). Therapy assessment via ΔSUVmax reached a sensitivity of 67%, specificity of 75%, and accuracy of 70% (cut-off 60%) in differentiating responders from non-responders, and a sensitivity of 80%, specificity of 31%, and accuracy of 51% (cut-off 44%) in identifying pCR. Conclusions CRT response assessment by DCE-MRI analysis shows higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing better discrimination of significant response and pCR. PMID:28042958
Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K
2011-01-10
We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.
Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.
Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon
2014-10-01
In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees (ARTBLA). 
One-way ANOVA found significant differences (P<.05) among all systems for Iy, Ry, Lx, Ly, and twisting. Generally, vertical linear displacements were less likely to reach the threshold of clinical detectability compared with anteroposterior or mediolateral linear displacements. The overall repositioning accuracy of DENSCR was comparable with 4 magnetic mounting systems (DENMAG, ARTBLU, ARTWHI, ARTBLA). DENCON exhibited the worst repositioning accuracy for Iy, Ry, Lx, Ly, and twisting. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
Energy expenditure estimation during daily military routine with body-fixed sensors.
Wyss, Thomas; Mäder, Urs
2011-05-01
The purpose of this study was to develop and validate an algorithm for estimating energy expenditure during the daily military routine on the basis of data collected using body-fixed sensors. First, 8 volunteers completed isolated physical activities according to an established protocol, and the resulting data were used to develop activity-class-specific multiple linear regressions for physical activity energy expenditure on the basis of hip acceleration, heart rate, and body mass as independent variables. Second, the validity of these linear regressions was tested during the daily military routine using indirect calorimetry (n = 12). Volunteers' mean estimated energy expenditure did not significantly differ from the energy expenditure measured with indirect calorimetry (p = 0.898, 95% confidence interval = -1.97 to 1.75 kJ/min). We conclude that the developed activity-class-specific multiple linear regressions applied to the acceleration and heart rate data allow estimation of energy expenditure in 1-minute intervals during daily military routine, with accuracy equal to indirect calorimetry.
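The activity-class-specific regressions described above can be sketched as one ordinary least-squares fit per class. The classes, features (hip acceleration, heart rate, body mass), and coefficients below are hypothetical placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth coefficients per activity class:
# intercept + [hip acceleration, heart rate (bpm), body mass (kg)]
# predicting energy expenditure in kJ/min (values are illustrative)
true_beta = {"walk": np.array([1.0, 0.05, 0.04, 0.02]),
             "run":  np.array([2.0, 0.08, 0.06, 0.03])}

def fit_class_models(X_by_class, y_by_class):
    # one ordinary least-squares fit per activity class
    models = {}
    for c, X in X_by_class.items():
        Xd = np.column_stack([np.ones(len(X)), X])
        models[c], *_ = np.linalg.lstsq(Xd, y_by_class[c], rcond=None)
    return models

def predict(models, c, features):
    # classify the minute into activity class c first, then apply
    # that class's regression to its sensor features
    return float(models[c] @ np.concatenate(([1.0], features)))

# synthetic noise-free training data per class
X = {c: rng.uniform([0.0, 60.0, 50.0], [3.0, 180.0, 110.0], size=(100, 3))
     for c in true_beta}
y = {c: np.column_stack([np.ones(100), X[c]]) @ b for c, b in true_beta.items()}
models = fit_class_models(X, y)
```

Selecting the regression by activity class first, then regressing on acceleration and heart rate, mirrors the two-stage structure the abstract describes.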
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem
2013-09-01
Four simple, specific, accurate, and precise spectrophotometric methods manipulating ratio spectra were developed and validated for the simultaneous determination of simvastatin (SM) and ezetimibe (EZ), namely: extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM), and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision, and linearity ranges of the proposed methods were determined; the methods were validated and their specificity assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied to the determination of the cited drugs in tablets, and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed no significant difference between the proposed methods and the reported method regarding both accuracy and precision.
Simeone, Piero; Valentini, Pier Paolo; Pizzoferrato, Roberto; Scudieri, Folco
2011-01-01
The purpose of this in vitro study was to compare the dimensional accuracy of the pickup impression technique using a modular individual tray (MIT) and using a standard individual tray (ST) for multiple internal-connection implants. The roles of both materials and geometric misfits were considered. First, because the MIT relies on the stiffness and elasticity of the acrylic resin material, a preliminary investigation of the resin volume contraction during curing and polymerization was done. Then, two sets of specimens were tested to compare the accuracy of the MIT (test group) to that of the ST (control group). The linear and angular displacements of the transfer copings were measured and compared during three different stages of the impression procedure. Experimental measurements were performed with a computerized coordinate measuring machine. The curing dynamics of the acrylic resin were strongly dependent on the physical properties of the acrylic material and the powder/liquid ratio. Specifically, an increase in the powder/liquid ratio accelerated resin polymerization (curing time decreased by 70%) and reduced the final volume contraction by 45%. However, the total shrinkage never exceeded the elastic limits of the material; hence, it did not affect the copings' stability. In the test group, linear errors were reduced by 55% and angular errors by 65%. Linear and angular displacements of the transfer copings were significantly reduced with the MIT technique, which led to higher dimensional accuracy versus the ST group. The MIT approach, in combination with a thin and uniform amount of acrylic resin in the pickup impression technique, showed no significant permanent distortions with multiple misaligned internal-connection implants compared to the ST technique.
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
Williams, D. Keith; Muddiman, David C.
2008-01-01
Fourier transform ion cyclotron resonance mass spectrometry has the ability to achieve unprecedented mass measurement accuracy (MMA); MMA is one of the most significant attributes of mass spectrometric measurements as it affords extraordinary molecular specificity. However, due to space-charge effects, the achievable MMA significantly depends on the total number of ions trapped in the ICR cell for a particular measurement. Even with the use of automatic gain control (AGC), the total ion population is not constant between spectra. Multiple linear regression calibration in conjunction with AGC is utilized in these experiments to formally account for the differences in total ion population in the ICR cell between the external calibration spectra and experimental spectra. This ability allows for the extension of the dynamic range of the instrument while allowing mean MMA values to remain less than 1 ppm. In addition, multiple linear regression calibration is used to account for both differences in total ion population in the ICR cell and the relative ion abundance of a given species, which also affords mean MMA values at the parts-per-billion level. PMID:17539605
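A hedged sketch of the idea: extend a standard two-term ICR frequency-to-m/z calibration with a term driven by total ion intensity, fitted by multiple linear regression. The functional form here (a Ledford-type A/f + B/f² law plus an intensity/f² space-charge-like term) is one common choice, and the constants and data are synthetic, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

def design(freqs, totals):
    # regression terms: A/f + B/f^2, plus an ion-population term I/f^2
    return np.column_stack([1.0 / freqs, 1.0 / freqs**2, totals / freqs**2])

def fit_calibration(freqs, totals, known_mz):
    # multiple linear regression against calibrant m/z values
    coef, *_ = np.linalg.lstsq(design(freqs, totals), known_mz, rcond=None)
    return coef

def apply_calibration(coef, freqs, totals):
    return design(freqs, totals) @ coef

# synthetic calibrant peaks with a space-charge-like intensity shift
A, B, C = 1.0e8, -5.0e5, -2.0e-3
freqs = rng.uniform(1e5, 1e6, 30)    # cyclotron frequencies (Hz)
totals = rng.uniform(1e3, 1e5, 30)   # total ion intensity per spectrum
mz = A / freqs + B / freqs**2 + C * totals / freqs**2
coef = fit_calibration(freqs, totals, mz)
```

Including the total-ion-population regressor is what lets a single external calibration stay accurate when the trapped ion count varies between spectra.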
Reduced kernel recursive least squares algorithm for aero-engine degradation prediction
NASA Astrophysics Data System (ADS)
Zhou, Haowen; Huang, Jinquan; Lu, Feng
2017-10-01
Kernel adaptive filters (KAFs) generate a radial basis function (RBF) network that grows linearly with the number of training samples, thereby lacking sparseness. To deal with this drawback, traditional sparsification techniques select a subset of the original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, the information conveyed by these redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing accuracy. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduction technique and linear independence. Unlike conventional methods, our methodology employs these redundant data to update the coefficients of the existing network. Due to the effective utilization of the redundant data, the novel algorithm achieves better accuracy, although the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that the RKRLS algorithm requires much less computational consumption and maintains satisfactory accuracy. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and a Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study on a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.
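A simplified sketch of the sparsification idea (not the authors' RKRLS derivation): grow the RBF dictionary only for samples that fail an approximate-linear-dependence novelty test, and reuse redundant samples to update the existing coefficients with a normalized-LMS-style step instead of discarding them:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

class SparseKernelRLS:
    """Toy sketch: novel samples (per an approximate-linear-dependence
    test) expand the RBF network; redundant samples update existing
    coefficients instead of being discarded."""

    def __init__(self, nu=0.1, gamma=1.0, lr=0.5):
        self.nu, self.gamma, self.lr = nu, gamma, lr
        self.centers = []          # dictionary of RBF centers
        self.alpha = np.array([])  # network coefficients

    def _kvec(self, x):
        return np.array([rbf(x, c, self.gamma) for c in self.centers])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(self.alpha @ self._kvec(x))

    def update(self, x, y):
        if not self.centers:
            self.centers.append(x)
            self.alpha = np.array([float(y)])
            return
        k = self._kvec(x)
        K = np.array([[rbf(a, b, self.gamma) for b in self.centers]
                      for a in self.centers])
        a = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)
        delta = rbf(x, x, self.gamma) - k @ a   # novelty measure
        err = y - self.predict(x)
        if delta > self.nu:
            # novel sample: grow the network (this coefficient zeroes
            # the residual at x, since rbf(x, x) == 1)
            self.centers.append(x)
            self.alpha = np.append(self.alpha, err)
        else:
            # redundant sample: normalized-LMS-style coefficient update
            self.alpha = self.alpha + self.lr * err * k / (k @ k + 1e-8)

# toy usage: learn sin(x) on [0, 3] from 50 streaming samples
xs = np.linspace(0.0, 3.0, 50)
model = SparseKernelRLS()
for x, y in zip(xs, np.sin(xs)):
    model.update(np.array([x]), y)
```

The dictionary ends up much smaller than the number of samples, while the redundant samples still contribute to the fit, which is the trade-off the abstract describes.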
Wiegand, Russell F; Klette, Kevin L; Stout, Peter R; Gehlhausen, Jay M
2002-10-01
In an effort to determine a practical, efficient, and economical alternative to the use of a radioimmunoassay (RIA) for the detection of lysergic acid diethylamide (LSD) in human urine, the performance of two photometric immunoassays (Dade Behring EMIT II and Microgenics CEDIA) and the Diagnostics Products Corp. (DPC) RIA was compared. Precision, accuracy, and linearity of the 3 assays were determined by testing 60 replicates (10 for RIA) at 5 different concentrations below and above the 500-pg/mL LSD cut-off. The CEDIA and RIA exhibited better accuracy and precision than the EMIT II immunoassay. In contrast, the EMIT II and CEDIA demonstrated superior linearity (r2 = 0.9809 and 0.9540, respectively) as compared with the RIA (r2 = 0.9062). The specificity of the three assays was assessed using compounds that have structural and chemical properties similar to LSD, common over-the-counter products, prescription drugs and some of their metabolites, and other drugs of abuse. Of the 144 compounds studied, the EMIT II cross-reacted with twice as many compounds as did the CEDIA and RIA. Specificity was also assessed in 221 forensic human urine specimens that previously screened positive for LSD by the EMIT II assay. Of these, only 11 tested positive by CEDIA, and 3 were positive by RIA. This indicated comparable specificity performance between CEDIA and RIA, and was also consistent with a previously reported high false-positive rate of EMIT II (low specificity). Each of the immunoassays correctly identified LSD in 23 out of 24 human urine specimens that had previously been found to contain LSD by gas chromatography-mass spectrometry at a cut-off concentration of 200 pg/mL. The CEDIA exhibited superior precision, accuracy, and decreased cross-reactivity to compounds other than LSD as compared with the EMIT II assay and does not necessitate the handling of radioactive materials.
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
2011-01-01
Background Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either on the evaluation of different types of SVM or on voxel selection methods alone, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM significantly outperformed linear SVM; in a relatively high-dimensional feature space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time together, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at lower time cost. Conclusions/Significance The present work provides the first empirical comparison of linear and RBF SVM in the classification of fMRI data combined with voxel selection methods.
Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small voxel set and linear SVM with relatively more voxels are two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184
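The trade-off the authors report can be reproduced in miniature with scikit-learn; the synthetic dataset below stands in for fMRI voxel patterns, and the feature counts, kernel defaults and PCA dimension are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for 4-category fMRI patterns: 100 "voxels",
# only some informative -- crudely mimicking the role of voxel selection.
X, y = make_classification(n_samples=400, n_features=100,
                           n_informative=20, n_classes=4,
                           n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

# High-dimensional feature space: linear SVM on all "voxels".
linear_hi = SVC(kernel="linear").fit(Xtr, ytr)

# Low-dimensional feature space: RBF SVM after PCA reduction.
rbf_lo = make_pipeline(PCA(n_components=20), SVC(kernel="rbf")).fit(Xtr, ytr)

print("linear, all features:", linear_hi.score(Xte, yte))
print("RBF after PCA       :", rbf_lo.score(Xte, yte))
```

On real fMRI data the relative ranking depends on the voxel selection scheme and dimensionality, which is precisely the study's point.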
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noise on cell fate are explored at the single-cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indicator of cell fate, and demonstrate a strong dependence between the impulse of Cln1/2 and cell fate. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes a parallel shift of the separation ratio of Whi5P, whereas increasing extrinsic fluctuations leads to a mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noise around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decrease. PMID:25042292
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists, and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias, and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models.
Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
NASA Astrophysics Data System (ADS)
Toro, E. F.; Titarev, V. A.
2005-01-01
In this paper we develop non-linear ADER schemes for time-dependent scalar linear and non-linear conservation laws in one-, two- and three-space dimensions. Numerical results of schemes of up to fifth order of accuracy in both time and space illustrate that the designed order of accuracy is achieved in all space dimensions for a fixed Courant number and essentially non-oscillatory results are obtained for solutions with discontinuities. We also present preliminary results for two-dimensional non-linear systems.
New machine-learning algorithms for prediction of Parkinson's disease
NASA Astrophysics Data System (ADS)
Mandal, Indrajit; Sairam, N.
2014-03-01
This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), to prevent delay and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods applied to PD diagnosis include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as classifier and Haar wavelets as projection filter, is used for relevant feature selection and ranking. The highest accuracy obtained by linear logistic regression and sparse multinomial logistic regression is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All the experiments are conducted at 95% and 99% confidence levels and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system and experimentally shows best results with supportive statistical inference.
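The measurable parameters such comparisons rest on are all derived from the binary confusion matrix; a minimal helper, with hypothetical labels rather than the study's data:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels
    (1 = disease, 0 = healthy)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / y_true.size}

m = diagnostic_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(m)   # sensitivity 2/3, specificity 2/3, accuracy 2/3
```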
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as a widely used approach for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved even with the single precision often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, which achieved a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited to our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid stencil setup specifically tailored to the banded matrices in PBE-specific linear systems.
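In outline, the solver the authors found fastest looks as follows; this NumPy sketch stands in for the GPU kernels, and the symmetric positive-definite test matrix is a generic tridiagonal example, not a PBE discretisation:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with Jacobi (diagonal) preconditioning.
    M^-1 is just 1/diag(A) -- the cheapest preconditioner to apply,
    one reason it maps well onto GPUs."""
    x = np.zeros_like(b)
    Minv = 1.0 / np.diag(A)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system: diagonally dominant, like a 1-D finite-difference Laplacian.
n = 50
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)
x = jacobi_pcg(A, b)
print(np.linalg.norm(A @ x - b))   # below the 1e-8 tolerance
```

For the banded matrices the abstract mentions, storing only the diagonals (DIA format) makes the `A @ p` product the dominant and highly parallelisable cost.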
An alternative respiratory sounds classification system utilizing artificial neural networks.
Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen
2015-01-01
Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as the non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system and returned superior performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority over many recent approaches. The proposed method is an efficient and fast tool for the intended purpose, as manifested in its performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function for feature extraction in such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
Pakkala, T; Kuusela, L; Ekholm, M; Wenzel, A; Haiter-Neto, F; Kortesniemi, M
2012-01-01
In clinical practice, digital radiographs taken for caries diagnostics are viewed on varying types of displays and usually in relatively high ambient lighting (room illuminance) conditions. Our purpose was to assess the effect of room illuminance and varying display types on caries diagnostic accuracy in digital dental radiographs. Previous studies have shown that the diagnostic accuracy of caries detection is significantly better in reduced lighting conditions. Our hypothesis was that higher display luminance could compensate for this in higher ambient lighting conditions. Extracted human teeth with approximal surfaces clinically ranging from sound to demineralized were radiographed and evaluated by 3 observers who detected carious lesions on 3 different types of displays in 3 different room illuminance settings ranging from low illumination, i.e. what is recommended for diagnostic viewing, to higher illumination levels corresponding to those found in an average dental office. Sectioning and microscopy of the teeth validated the presence or absence of a carious lesion. Sensitivity, specificity and accuracy were calculated for each modality and observer. Differences were estimated by analyzing the binary data assuming the added effects of observer and modality in a generalized linear model. The observers obtained higher sensitivities in lower illuminance settings than in higher illuminance settings. However, this was related to a reduction in specificity, which meant that there was no significant difference in overall accuracy. Contrary to our hypothesis, there were no significant differences between the accuracy of different display types. Therefore, different displays and room illuminance levels did not affect the overall accuracy of radiographic caries detection. Copyright © 2012 S. Karger AG, Basel.
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum rather than on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Uranium Detection - Technique Validation Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colletti, Lisa Michelle; Garduno, Katherine; Lujan, Elmer J.
As a LANL activity for DOE/NNSA in support of SHINE Medical Technologies™ ‘Accelerator Technology’ we have been investigating the application of UV-vis spectroscopy for uranium analysis in solution. While the technique has been developed specifically for sulfate solutions, the proposed SHINE target solutions, it can be adapted to a range of different solution matrixes. The FY15 work scope incorporated technical development that would improve accuracy, specificity, linearity & range, precision & ruggedness, and comparative analysis. Significant progress was achieved throughout FY15 in addressing these technical challenges, as is summarized in this report. In addition, comparative analysis of unknown samples using the Davies-Gray titration technique highlighted the importance of controlling temperature during analysis (impacting both technique accuracy and linearity/range). To fully understand the impact of temperature, additional experimentation and data analyses were performed during FY16. The results from this FY15/FY16 work were presented in a detailed presentation, LA-UR-16-21310, and an update of this presentation is included with this short report summarizing the key findings. The technique is based on analysis of the most intense U(VI) absorbance band in the visible region of the uranium spectrum in 1 M H2SO4, at λmax = 419.5 nm.
Fall Detection Using Smartphone Audio Features.
Cheffena, Michael
2016-07-01
An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel-frequency cepstral coefficient (MFCC), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers: k-nearest neighbor classifier (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
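As a toy illustration of the pipeline (spectral feature extraction, then a classifier), the sketch below separates impulsive "fall-like" sounds from steady ones using coarse log-spectrum features and a nearest-neighbour rule. The synthetic signals, sample rate and 1-NN rule are illustrative assumptions, far simpler than the paper's spectrogram/MFCC features and trained ANN:

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 8000  # sample rate (Hz), assumed

def fall_like(n=FS):
    """Impulsive broadband burst with decay -- crude stand-in for a fall."""
    t = np.arange(n) / FS
    return np.exp(-8 * t) * rng.normal(size=n)

def no_fall(n=FS):
    """Steady narrowband tone plus light noise -- a 'no-fall' event."""
    t = np.arange(n) / FS
    return np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=n)

def features(x, n_bands=32):
    """Log-magnitude spectrum averaged into coarse frequency bands."""
    mag = np.abs(np.fft.rfft(x))
    bands = np.array_split(mag, n_bands)
    return np.log1p(np.array([band.mean() for band in bands]))

# Tiny training set and 1-NN classification of held-out events.
train = [(features(fall_like()), 1) for _ in range(10)] + \
        [(features(no_fall()), 0) for _ in range(10)]

def classify(x):
    f = features(x)
    return min(train, key=lambda fv: np.linalg.norm(fv[0] - f))[1]

test_events = [(fall_like(), 1) for _ in range(5)] + \
              [(no_fall(), 0) for _ in range(5)]
acc = np.mean([classify(x) == y for x, y in test_events])
print("accuracy:", acc)
```

Real fall sounds are far more variable, which is why the paper evaluates several feature types and classifiers against each other.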
Gujral, Rajinder Singh; Haque, Sk Manirul
2010-01-01
A simple and sensitive UV spectrophotometric method was developed and validated for the simultaneous determination of Potassium Clavulanate (PC) and Amoxicillin Trihydrate (AT) in bulk, pharmaceutical formulations and human urine samples. The method was linear in the range of 0.2–8.5 μg/ml for PC and 6.4–33.6 μg/ml for AT. The absorbance was measured at 205 and 271 nm for PC and AT, respectively. The method was validated with respect to accuracy, precision, specificity, ruggedness, robustness, limit of detection and limit of quantitation. It was used successfully for the quality assessment of four PC and AT drug products and of human urine samples, with good precision and accuracy. The method is simple, specific, precise, accurate, reproducible and low-cost. PMID:23675211
Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R
2017-10-01
To assess the accuracy of patient-specific instruments (PSIs) versus the standard manual technique, and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and the Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean 9.6 mm (p < 0.008) in the sacroiliac osteotomies, 6.2 mm (p < 0.008) and 5.8 mm (p < 0.032) in the biplanar supra-acetabular, 3 mm (p < 0.016) in the ischial and 2.2 mm (p < 0.032) and 2.6 mm (p < 0.008) in the parallel iliopubic osteotomies, with a mean linear deviation of 4.9 mm (p < 0.001) for all osteotomies. Of the manual osteotomies, 53% (n = 16) had a linear deviation > 5 mm and 27% (n = 8) were > 10 mm. In the PSI cases, deviations were 10% (n = 3) and 0% (n = 0), respectively. For angular deviation from pre-operative plans, we observed a mean improvement of 7.06° (p < 0.001) in pitch and 2.94° (p < 0.001) in roll, comparing PSI and the standard manual technique. In an experimental study, computer-assisted planning and PSIs improved accuracy in pelvic tumour resections, bringing osteotomy results closer to the parameters set in pre-operative planning, as compared with standard manual techniques. Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583.
DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.
Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata linn. Leaves.
Katekhaye, S; Kale, M S; Laddha, K S
2012-01-01
A rapid, simple and specific reversed-phase HPLC method has been developed for analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C(18) column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity and specificity. Validation revealed the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r(2)>0.997) were obtained for calibration plots in the ranges tested. The limit of detection was 4.35 μg and the limit of quantification was 16.56 μg. Intra- and inter-day RSD of retention times and peak areas was less than 1.24% and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate for efficient quantitative analysis of karanjin in Pongamia pinnata leaves.
Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas
2015-06-30
We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 first-episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we obtained slightly improved results compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; that mMLDA with a classification boundary calculated as the weighted mean discriminative score of the groups had improved sensitivity but similar accuracy compared to the original MLDA; and that reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as signal important for classification was removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
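The benchmark the study endorses, multiple linear regression on calendar variables, fits in a few lines; the synthetic arrival series below (base level plus a day-of-week effect plus noise) is an illustrative assumption, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two years of synthetic daily ED arrivals: base level + day-of-week
# effect + noise (the weekly seasonality the study confirms).
n_days = 730
dow = np.arange(n_days) % 7
dow_effect = np.array([20, 5, 0, -2, 3, 10, 25])   # per-day bumps, assumed
volumes = 150 + dow_effect[dow] + rng.normal(0, 8, n_days)

# Design matrix: intercept + 6 day-of-week dummies (one day dropped).
X = np.column_stack([np.ones(n_days)] +
                    [(dow == d).astype(float) for d in range(1, 7)])

# Fit on the first year, forecast the second.
train = slice(0, 365)
beta, *_ = np.linalg.lstsq(X[train], volumes[train], rcond=None)
forecast = X[365:] @ beta

rmse = np.sqrt(np.mean((forecast - volumes[365:]) ** 2))
print("post-sample RMSE:", rmse)   # close to the noise sd of 8
```

The study's refinement is to extend this design matrix with site-specific special-day indicators and to model residual autocorrelation rather than ignore it.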
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
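"Multiplication by digital convolution", the encoding trick mentioned above, rests on the fact that the digit sequence of a product is the convolution of the factors' digit sequences followed by carry propagation; a base-10 sketch (the optical hardware performs the convolution step, the carries are resolved afterwards):

```python
import numpy as np

def digits(n):
    """Least-significant-first digit list of a non-negative integer."""
    return [int(d) for d in str(n)][::-1]

def conv_multiply(a, b):
    """Multiply two integers via convolution of their digit sequences."""
    raw = np.convolve(digits(a), digits(b))   # digit products, no carries yet
    out, carry = [], 0
    for d in raw:                             # propagate carries
        carry, digit = divmod(int(d) + carry, 10)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        out.append(digit)
    return int("".join(map(str, out[::-1])))

print(conv_multiply(1234, 5678))   # 7006652
```

Because each digit is small, the optical system only needs enough dynamic range for the convolution sums, not for full 32-bit values, which is the point of the encoding.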
Travel-time source-specific station correction improves location accuracy
NASA Astrophysics Data System (ADS)
Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo
2013-04-01
Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of CTBT verification are particularly critical for triggering a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2, and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
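The core bookkeeping of a source-specific station correction, averaging the travel-time residuals of well-located calibration events at each station and subtracting them from future picks, can be sketched as below; the station count, biases and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic travel-time residuals (s) at 5 stations for 40 well-located
# calibration events: a per-station bias from unmodelled 3-D structure,
# plus pick noise.
bias = np.array([0.8, -0.5, 0.3, -1.1, 0.6])       # seconds
calib = bias + rng.normal(0, 0.1, size=(40, 5))    # rows: events

# Source-specific station correction = mean residual per station.
correction = calib.mean(axis=0)

# Apply to a new event from the same source region.
new_event = bias + rng.normal(0, 0.1, size=5)
corrected = new_event - correction

print("raw residual spread      :", np.abs(new_event).max())
print("corrected residual spread:", np.abs(corrected).max())
```

Feeding the corrected residuals back into the location inversion is what removes the systematic epicentre and depth shifts the abstract describes.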
A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.
Ligorio, Gabriele; Sabatini, Angelo M
2015-08-01
Design and development of a linear Kalman filter to create an inertial-based inclinometer targeted to dynamic conditions of motion. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate the gravity and the body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on an average, a root mean square attitude error of 3.6° and 1.8° in manual activities and locomotion tasks (respectively). The statistical analysis showed that, when compared to few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matched the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
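The gyroscope/accelerometer fusion at the heart of such an inclinometer can be caricatured with a one-axis linear Kalman filter: the gyro rate drives the prediction, the accelerometer-derived tilt angle supplies the correction. The motion, sample rate and noise levels below are illustrative assumptions, not the paper's filter design:

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01   # 100 Hz IMU, assumed

# Simulate a true tilt angle (slow sinusoidal sway) and noisy sensors.
t = np.arange(0, 10, dt)
true_angle = 0.3 * np.sin(2 * np.pi * 0.5 * t)        # rad
true_rate = np.gradient(true_angle, dt)               # rad/s
gyro = true_rate + rng.normal(0, 0.02, t.size)        # rate gyro noise
acc_angle = true_angle + rng.normal(0, 0.05, t.size)  # accel-derived tilt

# Scalar Kalman filter: state = angle, prediction driven by the gyro,
# correction by the accelerometer tilt measurement.
Q, R = (0.02 * dt) ** 2, 0.05 ** 2   # process / measurement variances
x, P = 0.0, 1.0
est = np.empty(t.size)
for k in range(t.size):
    # Predict: integrate the gyro rate.
    x += gyro[k] * dt
    P += Q
    # Update: blend in the accelerometer angle.
    K = P / (P + R)
    x += K * (acc_angle[k] - x)
    P *= (1 - K)
    est[k] = x

rmse = np.sqrt(np.mean((est - true_angle) ** 2))
print("attitude RMSE (rad):", rmse)
```

In dynamic conditions the accelerometer also senses body acceleration, not just gravity; handling that properly, rather than treating it as extra measurement noise as this sketch implicitly does, is the source-separation problem the paper addresses.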
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real time, then system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is greater computational complexity involved in solving these large linear systems within a reasonable time. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
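The full chain method is beyond a short example, but its building blocks, a graph Laplacian, a spanning-tree preconditioner, and preconditioned conjugate gradients, can be sketched. In the toy below, a comb-shaped spanning tree of a small grid graph stands in for the low-stretch tree; this is an illustrative sketch, not the thesis's algorithm:

```python
import numpy as np

def laplacian(n_nodes, edges):
    """Graph Laplacian built from an edge list (unit weights)."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def pcg(A, b, Minv, tol=1e-8, maxit=1000):
    """Preconditioned conjugate gradients; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv @ r
    p = z.copy()
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k
        z_new = Minv @ r_new
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return x, maxit

n = 6
node = lambda r, c: r * n + c
grid = [(node(r, c), node(r, c + 1)) for r in range(n) for c in range(n - 1)] \
     + [(node(r, c), node(r + 1, c)) for r in range(n - 1) for c in range(n)]
# "Comb" spanning tree (top row plus every column), standing in for the
# low-stretch spanning tree used to build the preconditioner chain.
tree = [(node(0, c), node(0, c + 1)) for c in range(n - 1)] \
     + [(node(r, c), node(r + 1, c)) for r in range(n - 1) for c in range(n)]

A = laplacian(n * n, grid)[1:, 1:]      # ground node 0 -> SPD system
T = laplacian(n * n, tree)[1:, 1:]      # SPD tree Laplacian
b = np.zeros(n * n - 1); b[0], b[-1] = 1.0, -1.0   # inject/extract current

x_cg, it_cg = pcg(A, b, np.eye(n * n - 1))   # unpreconditioned CG
x_pc, it_pc = pcg(A, b, np.linalg.inv(T))    # tree-preconditioned CG
print(it_cg, it_pc)
```

In a real solver the tree system would of course be solved in linear time by elimination along the tree rather than by a dense inverse; the dense inverse here only keeps the demo short.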
1990-02-15
electrical activity mapping procedures. It is necessary to employ approximately 20 electrodes to conduct full-scale brain mapping procedures, using a … animal groups, likewise, showed no observable differences in the animals' exploratory behavior, nuzzle response, lid-corneal and ear reflexes, pain … SPECIFICATIONS FOR THE ENVIRONICS SERIES 100 GAS STANDARDS GENERATOR: Accuracy of Flow 0.15% of Full Scale; Linearity 0.15% of Full Scale; Repeatability 0.10
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine-tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor both during the manufacturing process and during the assembly phase, and which must be controlled to build machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters of the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical modeling approach captures the alignment and angular errors of the components supporting the machine motion, namely the linear guideways and linear motion elements. The purpose of using this approach is to identify geometric errors that can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between the alignment, position and angular errors on a linear guideway of a three-axis vertical milling machine.
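Error models of this kind are typically written as small-angle homogeneous transformation matrices, one per axis, whose product propagates the per-axis linear and angular errors to the tool tip. A toy sketch with invented error magnitudes (not measured values from any machine):

```python
import numpy as np

def err_htm(dx, dy, dz, ax, ay, az):
    """Homogeneous transform for the small geometric errors of one axis:
    three linear errors (dx, dy, dz, in mm) and three angular errors
    (ax, ay, az, in rad), using the small-angle approximation common
    in machine-tool error modeling."""
    return np.array([
        [1.0,  -az,   ay,  dx],
        [az,   1.0,  -ax,  dy],
        [-ay,   ax,  1.0,  dz],
        [0.0,  0.0,  0.0, 1.0],
    ])

# Illustrative (made-up) per-axis errors for a 3-axis machine.
Ex = err_htm(0.010, 0.002, 0.003, 5e-6, 8e-6, 4e-6)
Ey = err_htm(0.001, 0.012, 0.002, 6e-6, 3e-6, 7e-6)
Ez = err_htm(0.002, 0.001, 0.015, 4e-6, 5e-6, 6e-6)

nominal_tool = np.array([0.0, 0.0, -100.0, 1.0])  # tool offset in mm
actual = Ex @ Ey @ Ez @ nominal_tool              # errors composed along the axes
error_vec = actual[:3] - nominal_tool[:3]
print(np.round(error_vec, 4))  # volumetric error at the tool tip (mm)
```

The same matrix product, with the three perpendicularity errors added as extra angular terms, yields the full twenty-one-parameter volumetric error map described in the abstract.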
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy; rather, its main effect is the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared with 67 percent of the database in the linear approach.
NASA Astrophysics Data System (ADS)
Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu F.; Pashkovskaya, N. V.; Andreichuk, D. R.
2015-03-01
We report a model of Mueller-matrix description of optical anisotropy of protein networks in biological tissues with allowance for the linear birefringence and dichroism. The model is used to construct the reconstruction algorithms of coordinate distributions of phase shifts and the linear dichroism coefficient. In the statistical analysis of such distributions, we have found the objective criteria of differentiation between benign and malignant tissues of the female reproductive system. From the standpoint of evidence-based medicine, we have determined the operating characteristics (sensitivity, specificity and accuracy) of the Mueller-matrix reconstruction method of optical anisotropy parameters and demonstrated its effectiveness in the differentiation of benign and malignant tumours.
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José
2018-03-28
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these six models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E, MDs and MDe, including the random intercepts of the lines with the GK method, had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
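The two kernels compared throughout can be sketched directly from a marker matrix. The bandwidth rule for the Gaussian kernel below (median squared distance between lines) is an assumption for illustration, and the marker data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(8, 50)).astype(float)  # toy marker matrix (0/1/2)
X -= X.mean(axis=0)                                  # center marker codes

# Linear (GBLUP-style) kernel: K = XX' / p, with p the number of markers.
GB = X @ X.T / X.shape[1]

# Gaussian kernel: K_ij = exp(-d_ij^2 / q), with q a bandwidth set here to
# the median squared Euclidean distance between pairs of lines.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
q = np.median(D2[np.triu_indices(8, k=1)])
GK = np.exp(-D2 / q)

print(GB.shape, GK.shape)  # (8, 8) (8, 8)
```

Either kernel then enters the mixed model as the covariance of the random genetic effects of the lines; the Gaussian kernel lets the model capture non-additive similarity that the linear kernel misses.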
NASA Astrophysics Data System (ADS)
Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy
2011-08-01
Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential of improving the efficiency of deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models have been developed using image data of 14 lung cancer patients. Each model consists of two lungs, tumor and external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effect of the element type, finite deformation and elasticity on the accuracy and computing time is investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between linear and nonlinear analysis methods are calculated for all lung nodes, and a maximum value of 3.6 mm is found in one of the nodes near the entrance of the bronchial tree into the lungs. The 95th percentile of displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min for the quadratic-element, nonlinear-geometry model to 3.4 min for the linear-element, linear-geometry model. Therefore, using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of the lungs for image-guided radiotherapy applications.
Assessment of pedophilia using hemodynamic brain response to sexual stimuli.
Ponseti, Jorge; Granert, Oliver; Jansen, Olav; Wolff, Stephan; Beier, Klaus; Neutze, Janina; Deuschl, Günther; Mehdorn, Hubertus; Siebner, Hartwig; Bosinski, Hartmut
2012-02-01
Accurately assessing sexual preference is important in the treatment of child sex offenders. Phallometry is the standard method to identify sexual preference; however, this measure has been criticized for its intrusiveness and limited reliability. To evaluate whether spatial response pattern to sexual stimuli as revealed by a change in the blood oxygen level-dependent signal facilitates the identification of pedophiles. During functional magnetic resonance imaging, pedophilic and nonpedophilic participants were briefly exposed to same- and opposite-sex images of nude children and adults. We calculated differences in blood oxygen level-dependent signals to child and adult sexual stimuli for each participant. The corresponding contrast images were entered into a group analysis to calculate whole-brain difference maps between groups. We calculated an expression value that corresponded to the group result for each participant. These expression values were submitted to 2 different classification algorithms: Fisher linear discriminant analysis and k-nearest neighbor analysis. This classification procedure was cross-validated using the leave-one-out method. Section of Sexual Medicine, Medical School, Christian Albrechts University of Kiel, Kiel, Germany. We recruited 24 participants with pedophilia who were sexually attracted to either prepubescent girls (n = 11) or prepubescent boys (n = 13) and 32 healthy male controls who were sexually attracted to either adult women (n = 18) or adult men (n = 14). Sensitivity and specificity scores of the 2 classification algorithms. The highest classification accuracy was achieved by Fisher linear discriminant analysis, which showed a mean accuracy of 95% (100% specificity, 88% sensitivity). Functional brain response patterns to sexual stimuli contain sufficient information to identify pedophiles with high accuracy. The automatic classification of these patterns is a promising objective tool to clinically diagnose pedophilia.
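Fisher linear discriminant analysis with leave-one-out cross-validation, the winning combination above, can be sketched in a few lines of numpy. The data here are synthetic two-class Gaussians standing in for the per-participant expression values:

```python
import numpy as np

def fisher_lda_loo(X, y):
    """Leave-one-out accuracy of a two-class Fisher linear discriminant."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i        # hold out sample i
        Xtr, ytr = X[mask], y[mask]
        m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
        Sw = np.cov(Xtr[ytr == 0].T) + np.cov(Xtr[ytr == 1].T)  # pooled scatter
        w = np.linalg.solve(Sw, m1 - m0)     # Fisher discriminant direction
        thr = w @ (m0 + m1) / 2              # midpoint decision threshold
        pred = int(w @ X[i] > thr)
        correct += pred == y[i]
    return correct / len(y)

# Synthetic "expression values": two well-separated Gaussian groups.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(2.5, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
acc = fisher_lda_loo(X, y)
print(acc)  # LOO accuracy on the synthetic groups
```

Leave-one-out refits the discriminant once per participant, so the reported accuracy never uses the held-out case for training, which is what makes the 95% figure in the abstract an out-of-sample estimate.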
Temporal lobe epilepsy: quantitative MR volumetry in detection of hippocampal atrophy.
Farid, Nikdokht; Girard, Holly M; Kemmotsu, Nobuko; Smith, Michael E; Magda, Sebastian W; Lim, Wei Y; Lee, Roland R; McDonald, Carrie R
2012-08-01
To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration-cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Quantitative MR imaging-derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%-89.5%) and specificity (92.2%-94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. 
Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice.
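The asymmetry-based discrimination described above can be illustrated with a normalized asymmetry index and a threshold. Both the index definition and the cut-off below are common conventions assumed for illustration, not NeuroQuant's, and the volumes are invented:

```python
import numpy as np

def asymmetry_index(left, right):
    """Normalized volumetric asymmetry: (L - R) / mean(L, R)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return (left - right) / ((left + right) / 2)

# Hypothetical hippocampal volumes in mm^3 (not the study's data).
rng = np.random.default_rng(3)
controls = rng.normal(3500, 150, (50, 2))                 # [left, right]
patients = np.column_stack([rng.normal(2900, 200, 30),    # atrophic left side
                            rng.normal(3500, 150, 30)])
ai_c = np.abs(asymmetry_index(controls[:, 0], controls[:, 1]))
ai_p = np.abs(asymmetry_index(patients[:, 0], patients[:, 1]))

thr = 0.10                                  # assumed asymmetry cut-off
sens = np.mean(ai_p > thr)                  # patients correctly flagged
spec = np.mean(ai_c <= thr)                 # controls correctly cleared
print(round(sens, 2), round(spec, 2))
```

Sweeping `thr` over a range of values and recording (sensitivity, 1 - specificity) pairs is exactly the receiver operating characteristic analysis the study used to pick its operating point.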
Detection of Epileptic Seizure Event and Onset Using EEG
Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul
2014-01-01
This study proposes a method for automatic detection of epileptic seizure events and onset using wavelet-based features and certain statistical features computed without wavelet decomposition. Normal and epileptic EEG signals were classified using a linear classifier. For seizure event detection, the Bonn University EEG database was used. Three types of EEG signals (recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. Features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed, and classification was done using a linear classifier. The performance of the classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. For seizure onset detection, the CHB-MIT scalp EEG database was used. Along with wavelet-based features, the interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. The classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
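The statistical features named above (energy, entropy, standard deviation, maximum, minimum, mean, IQR, MAD) are straightforward to compute per EEG window. A sketch that omits the wavelet decomposition step and uses synthetic windows in place of real EEG:

```python
import numpy as np

def window_features(x):
    """Per-window statistical features for seizure detection: energy,
    Shannon entropy of the amplitude histogram, std, max, min, mean,
    interquartile range (IQR) and mean absolute deviation (MAD)."""
    x = np.asarray(x, float)
    p, _ = np.histogram(x, bins=16)
    p = p[p > 0] / len(x)                   # amplitude distribution
    q75, q25 = np.percentile(x, [75, 25])
    return np.array([
        np.sum(x ** 2),                     # energy
        -np.sum(p * np.log2(p)),            # entropy
        x.std(), x.max(), x.min(), x.mean(),
        q75 - q25,                          # IQR
        np.mean(np.abs(x - x.mean())),      # MAD
    ])

rng = np.random.default_rng(4)
normal = rng.normal(0, 1, 256)              # stand-in for a normal EEG window
ictal = 5 * np.sin(np.linspace(0, 20 * np.pi, 256)) + rng.normal(0, 1, 256)
f_n, f_i = window_features(normal), window_features(ictal)
print(f_i[0] > f_n[0])  # True: the seizure-like window has higher energy
```

In the paper these feature vectors (computed per wavelet subband for event detection) are what the linear classifier separates into normal and epileptic classes.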
Wang, Shui-Hua; Phillips, Preetha; Sui, Yuxiu; Liu, Bin; Yang, Ming; Cheng, Hong
2018-03-26
Alzheimer's disease (AD) is a progressive brain disease. The goal of this study is to provide a new computer-vision-based technique to detect it in an efficient way. The brain-imaging data of 98 AD patients and 98 healthy controls was collected using a data augmentation method. Then, a convolutional neural network (CNN), the most successful tool in deep learning, was used. An 8-layer CNN was created with an optimal structure obtained empirically. Three activation functions (AFs) were tested: sigmoid, rectified linear unit (ReLU), and leaky ReLU. Three pooling functions were also tested: average pooling, max pooling, and stochastic pooling. The numerical experiments demonstrated that leaky ReLU and max pooling gave the best performance. This combination achieved a sensitivity of 97.96%, a specificity of 97.35%, and an accuracy of 97.65%. In addition, the proposed approach was compared with eight state-of-the-art approaches. The method increased the classification accuracy by approximately 5% compared to state-of-the-art methods.
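The winning activation and pooling choices, leaky ReLU and max pooling, can be shown in a few lines of numpy. These are the generic definitions of the two operations, not the paper's 8-layer network:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: passes positives unchanged, scales negatives by a small slope."""
    return np.where(x > 0, x, alpha * x)

def max_pool2d(x, k=2):
    """Non-overlapping k x k max pooling on a 2-D feature map."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]           # crop to a multiple of k
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

fm = np.array([[1., -2., 3., 0.],
               [-1., 4., -3., 2.],
               [0., 1., -1., -2.],
               [2., -4., 3., 1.]])
print(max_pool2d(leaky_relu(fm)))  # 2x2 pooled map: [[4, 3], [2, 3]]
```

Unlike plain ReLU, the leaky variant keeps a small gradient for negative inputs, which is one common explanation for its edge in training deep classifiers.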
Does body mass index misclassify physically active young men.
Grier, Tyson; Canham-Chervak, Michelle; Sharp, Marilyn; Jones, Bruce H
2015-01-01
The purpose of this analysis was to determine the accuracy of age- and gender-adjusted BMI as a measure of body fat (BF) in U.S. Army Soldiers. BMI was calculated from measured height and weight (kg/m²), and body composition was determined by dual-energy X-ray absorptiometry (DEXA). Linear regression was used to determine a BF prediction equation and examine the correlation between %BF and BMI. The sensitivity and specificity of BMI compared to %BF as measured by DEXA were calculated. Soldiers (n = 110) were on average 23 years old, with a BMI of 26.4 and approximately 18% BF. The correlation between BMI and %BF (R = 0.86) was strong (p < 0.01). A sensitivity of 77% and a specificity of 100% were calculated when using Army age-adjusted BMI thresholds. The overall accuracy in determining whether a Soldier met Army BMI standards and was within the maximum allowable BF, or exceeded BMI standards and was over the maximum allowable BF, was 83%. Using adjusted BMI thresholds in populations where physical fitness and training are requirements of the job provides better accuracy in identifying those who are overweight or obese due to high BF.
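Sensitivity and specificity of a BMI screen against a DEXA body-fat criterion reduce to confusion-matrix counts. A sketch with invented screening outcomes (not the study's data):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def sens_spec(pred_pos, true_pos):
    """Sensitivity and specificity from parallel boolean lists:
    pred_pos = flagged by the BMI threshold, true_pos = over the BF limit."""
    tp = sum(p and t for p, t in zip(pred_pos, true_pos))
    tn = sum(not p and not t for p, t in zip(pred_pos, true_pos))
    fn = sum(not p and t for p, t in zip(pred_pos, true_pos))
    fp = sum(p and not t for p, t in zip(pred_pos, true_pos))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcome for 10 soldiers.
over_bmi = [True, True, False, False, True, False, False, True, False, False]
over_bf  = [True, True, False, False, False, False, False, True, True, False]
sens, spec = sens_spec(over_bmi, over_bf)
print(round(bmi(85, 1.80), 1), sens, round(spec, 2))  # 26.2 0.75 0.83
```

The study's 77%/100% figures are these same two ratios computed over all 110 soldiers with the age-adjusted BMI thresholds as the screen.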
NASA Astrophysics Data System (ADS)
Westphal, T.; Nijssen, R. P. L.
2014-12-01
The effect of Constant Life Diagram (CLD) formulation on fatigue life prediction under variable amplitude (VA) loading was investigated based on VA tests using three different load spectra representative of wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these VA fatigue results, the prediction accuracy of four CLD formulations was investigated. A piecewise linear CLD based on the S-N curves for nine load ratios compared favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study, Boerstra's Multislope model provides a good alternative at reduced test effort.
Luenser, Arne; Kussmann, Jörg; Ochsenfeld, Christian
2016-09-28
We present a (sub)linear-scaling algorithm to determine indirect nuclear spin-spin coupling constants at the Hartree-Fock and Kohn-Sham density functional levels of theory. Employing efficient integral algorithms and sparse algebra routines, an overall (sub)linear scaling behavior can be obtained for systems with a non-vanishing HOMO-LUMO gap. Calculations on systems with over 1000 atoms and 20 000 basis functions illustrate the performance and accuracy of our reference implementation. Specifically, we demonstrate that linear algebra dominates the runtime of conventional algorithms for 10 000 basis functions and above. Attainable speedups of our method exceed 6× in total runtime and 10× in the linear algebra steps for the tested systems. Furthermore, a convergence study of spin-spin couplings of an aminopyrazole peptide upon inclusion of the water environment is presented: using the new method it is shown that large solvent spheres are necessary to converge spin-spin coupling values.
Wang, Hsin-Wei; Lin, Ya-Chi; Pai, Tun-Wen; Chang, Hao-Teng
2011-01-01
Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS, Linear Epitope Prediction by Propensities and Support Vector Machine, that combined physico-chemical propensity identification and support vector machine (SVM) classification. We tested the LEPS on four datasets: AntiJen, HIV, a newly generated PC, and AHP, a combination of these three datasets. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. Then, candidates were classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive prediction value (PPV). Compared to four other well-known LE prediction systems, the LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews' correlation coefficient (10.36%).
An extended sequence specificity for UV-induced DNA damage.
Chung, Long H; Murray, Vincent
2018-01-01
The sequence specificity of UV-induced DNA damage was determined with a higher precision and accuracy than previously reported. UV light induces two major damage adducts: cyclobutane pyrimidine dimers (CPDs) and pyrimidine(6-4)pyrimidone photoproducts (6-4PPs). Employing capillary electrophoresis with laser-induced fluorescence and taking advantage of the distinct properties of the CPDs and 6-4PPs, we studied the sequence specificity of UV-induced DNA damage in a purified DNA sequence using two approaches: end-labelling and a polymerase stop/linear amplification assay. A mitochondrial DNA sequence with a random nucleotide composition was employed as the target DNA sequence. With previous methodology, the UV sequence specificity was determined at the dinucleotide or trinucleotide level; in this paper, we have extended it to the hexanucleotide level. With the end-labelling technique (for 6-4PPs), the consensus sequence was found to be 5'-GCTC*AC (where C* is the breakage site), while with the linear amplification procedure it was 5'-TCTT*AC. With end-labelling, the dinucleotide frequency of occurrence was highest for 5'-TC*, 5'-TT* and 5'-CC*, whereas it was 5'-TT* for linear amplification. The influence of neighbouring nucleotides on the degree of UV-induced DNA damage was also examined. The core sequences consisted of the pyrimidine nucleotides 5'-CTC* and 5'-CTT*, while an A at position "1" and a C at position "2" enhanced UV-induced DNA damage. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
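Extracting the dinucleotide frequency of occurrence at breakage sites is a simple counting exercise. The sequence and damage positions below are made up for illustration, not the paper's mitochondrial target sequence:

```python
from collections import Counter

def dinucleotide_at_breaks(seq, break_sites):
    """Count the dinucleotide 5'-XY* ending at each breakage site Y*.

    break_sites are 0-based indices of the damaged pyrimidine (Y*).
    """
    counts = Counter()
    for i in break_sites:
        if i >= 1:                      # need one upstream neighbour
            counts[seq[i - 1:i + 1]] += 1
    return counts

seq = "AGCTCACTTACCTTGCTCAC"
breaks = [3, 8, 13, 17]                 # hypothetical damage positions
print(dinucleotide_at_breaks(seq, breaks))  # Counter({'TT': 2, 'CT': 1, 'TC': 1})
```

Widening the slice from two to six bases around each site gives the hexanucleotide-level tallies from which a consensus such as 5'-GCTC*AC is read off.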
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
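Two of the model families can be imitated with ordinary least squares: a constant-rate (linear) fit, and a log-linear fit standing in here as a simplified stand-in for the power-fit relationship in the general linear model. The volume series below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(15, dtype=float)
true_vol = 20.0 * 0.95 ** days              # invented decaying volumes (cm^3)
obs = true_vol + rng.normal(0, 0.3, days.size)

# Model 1: constant-rate shrinkage, V = a + b*t (ordinary least squares).
lin = np.polyval(np.polyfit(days, obs, 1), days)

# Model 2: log-linear shrinkage, log V = log V0 + k*t, a stand-in for the
# power-fit relationship between daily and initial tumor volumes.
k, logv0 = np.polyfit(days, np.log(obs), 1)
exp_fit = np.exp(logv0 + k * days)

rmse = lambda f: float(np.sqrt(np.mean((f - obs) ** 2)))
print(rmse(lin), rmse(exp_fit))             # fit error of each model
```

The study's evaluation wraps fits like these in leave-one-out cross-validation over patients, so each tumor's trajectory is predicted by models trained on the other tumors.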
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.
1994-01-01
It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDEs) may, because of boundary conditions, lead to deterioration of the accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic PDEs (linear and non-linear) whose boundary treatment is done via the SAT procedure. A methodology is presented for recovery of the full order of accuracy, and it has been applied to the case of a 4th-order explicit finite difference scheme.
NASA Astrophysics Data System (ADS)
Sánchez, Daniel; Nieh, James C.; Hénaut, Yann; Cruz, Leopoldo; Vandame, Rémy
Several studies have examined the existence of recruitment communication mechanisms in stingless bees. However, the spatial accuracy of location-specific recruitment has not been examined. Moreover, the location-specific recruitment of reactivated foragers, i.e., foragers that have previously experienced the same food source at a different location and time, has not been explicitly examined. However, such foragers may also play a significant role in colony foraging, particularly in small colonies. Here we report that reactivated Scaptotrigona mexicana foragers can recruit with high precision to a specific food location. The recruitment precision of reactivated foragers was evaluated by placing control feeders to the left and the right of the training feeder (direction-precision tests) and between the nest and the training feeder and beyond it (distance-precision tests). Reactivated foragers arrived at the correct location with high precision: 98.44% arrived at the training feeder in the direction trials (five-feeder fan-shaped array, accuracy of at least ±6° of azimuth at 50 m from the nest), and 88.62% arrived at the training feeder in the distance trials (five-feeder linear array, accuracy of at least ±5 m or ±10% at 50 m from the nest). Thus, S. mexicana reactivated foragers can find the indicated food source at a specific distance and direction with high precision, higher than that shown by honeybees, Apis mellifera, which do not communicate food location at such close distances to the nest.
Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller
2015-01-01
A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters were previously studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness and accuracy of the method were evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg⁻¹ (quadratic regression) and 250.0 and 2000.0 μg kg⁻¹ (linear regression) of mercury. The residues for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. The limit of quantification (LOQ) and the limit of detection of the method (LDM), based on the signal standard deviation (SD) for a low-in-mercury sample, were 3.0 and 1.0 μg kg⁻¹, respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the current method was evaluated and identified sample mass as a significant factor. Accuracy (assessed as the analyte recovery) was calculated on the basis of the repeatability, and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.
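Detection and quantification limits based on signal standard deviation are conventionally taken as 3×SD and 10×SD of replicate readings of a low-analyte sample. Those multipliers are an assumption here (the abstract does not state its exact criteria), and the replicate readings are invented:

```python
import statistics

# Ten replicate readings (µg/kg) of a low-in-mercury sample -- made-up values.
replicates = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9, 1.1, 1.0, 1.0]
sd = statistics.stdev(replicates)     # sample standard deviation of the signal

# Common signal-SD criteria (assumed): detection limit = 3*SD, LOQ = 10*SD.
ldm = 3 * sd
loq = 10 * sd
print(round(ldm, 2), round(loq, 2))   # 0.35 1.15
```

The same ratio between the two limits (LOQ roughly three times the detection limit) is visible in the abstract's reported 3.0 and 1.0 μg kg⁻¹ values.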
Using bivariate signal analysis to characterize the epileptic focus: the benefit of surrogates.
Andrzejak, R G; Chicharro, D; Lehnertz, K; Mormann, F
2011-04-01
The disease epilepsy is related to hypersynchronous activity of networks of neurons. While acute epileptic seizures are the most extreme manifestation of this hypersynchronous activity, an elevated level of interdependence of neuronal dynamics is thought to persist also during the seizure-free interval. In multichannel recordings from brain areas involved in the epileptic process, this interdependence can be reflected in an increased linear cross correlation but also in signal properties of higher order. Bivariate time series analysis comprises a variety of approaches, each with different degrees of sensitivity and specificity for interdependencies reflected in lower- or higher-order properties of pairs of simultaneously recorded signals. Here we investigate which approach is best suited to detect putatively elevated interdependence levels in signals recorded from brain areas involved in the epileptic process. For this purpose, we use the linear cross correlation that is sensitive to lower-order signatures of interdependence, a nonlinear interdependence measure that integrates both lower- and higher-order properties, and a surrogate-corrected nonlinear interdependence measure that aims to specifically characterize higher-order properties. We analyze intracranial electroencephalographic recordings of the seizure-free interval from 29 patients with an epileptic focus located in the medial temporal lobe. Our results show that all three approaches detect higher levels of interdependence for signals recorded from the brain hemisphere containing the epileptic focus as compared to signals recorded from the opposite hemisphere. For the linear cross correlation, however, these differences are not significant. For the nonlinear interdependence measure, results are significant but only of moderate accuracy with regard to the discriminative power for the focal and nonfocal hemispheres. 
The highest significance and accuracy is obtained for the surrogate-corrected nonlinear interdependence measure.
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.
1987-01-01
A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is found that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.
Elsayed, Naglaa Mostafa; Elkhatib, Yasser Atta
2016-03-01
Thyroid nodules are a common medical and surgical concern. Thyroid ultrasound (US) is the primary imaging modality used for initial evaluation and assortment of nodules for fine needle aspiration (FNA) cytology/biopsy. Ultrasound elastography (USE) is believed to improve the diagnostic accuracy of US in distinguishing benign from malignant nodules. The aim of the work described here is to evaluate the diagnostic criteria and accuracy of US and USE in the diagnosis of malignant thyroid nodules. A prospective study of 88 patients who had thyroid nodules was performed. US, color Doppler, and USE were evaluated using a Philips iU22 equipped with a 5-12 MHz linear transducer, followed by FNA of each scanned nodule. The most sensitive US criteria for malignant nodules were a height-to-width ratio greater than one and the absence of a halo sign (sensitivity 0.875 and 1.000, respectively). The most specific criteria for malignancy were a spiculated/blurred margin and the presence of microcalcifications (specificity 0.968 and 0.888, respectively). The receiver operating characteristic curve showed that the cutoff diagnostic criteria of malignancy are two US characteristics and an elastography score of 4. The diagnostic accuracy of US for malignant thyroid nodules increases by combining US and USE. © The Author(s) 2015.
Real-data comparison of data mining methods in prediction of diabetes in Iran.
Tapak, Lily; Mahjub, Hossein; Hamidi, Omid; Poorolajal, Jalal
2013-09-01
Diabetes is one of the most common non-communicable diseases in developing countries. Early screening and diagnosis play an important role in effective prevention strategies. This study compared two traditional classification methods (logistic regression and Fisher linear discriminant analysis) and four machine-learning classifiers (neural networks, support vector machines, fuzzy c-mean, and random forests) to classify persons with and without diabetes. The data set used in this study included 6,500 subjects from the Iranian national non-communicable diseases risk factors surveillance obtained through a cross-sectional survey. The obtained sample was based on cluster sampling of the Iran population which was conducted in 2005-2009 to assess the prevalence of major non-communicable disease risk factors. Ten risk factors that are commonly associated with diabetes were selected to compare the performance of six classifiers in terms of sensitivity, specificity, total accuracy, and area under the receiver operating characteristic (ROC) curve criteria. Support vector machines showed the highest total accuracy (0.986) as well as area under the ROC (0.979). Also, this method showed high specificity (1.000) and sensitivity (0.820). All other methods produced total accuracy of more than 85%, but for all methods, the sensitivity values were very low (less than 0.350). The results of this study indicate that, in terms of sensitivity, specificity, and overall classification accuracy, the support vector machine model ranks first among all the classifiers tested in the prediction of diabetes. Therefore, this approach is a promising classifier for predicting diabetes, and it should be further investigated for the prediction of other diseases.
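As an illustrative aside, the comparison metrics reported in this abstract (sensitivity, specificity, total accuracy) can be computed directly from predicted vs. true labels. A minimal sketch follows; the label vectors are invented toy data, not the study's surveillance dataset:

```python
# Sketch of the binary-classification metrics used to compare the classifiers.
# The label vectors below are invented examples.

def confusion_counts(y_true, y_pred, positive=1):
    """Return (TP, TN, FP, FN) for a binary label sequence."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

def classifier_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "sensitivity": tp / (tp + fn),      # true positive rate
        "specificity": tn / (tn + fp),      # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }

y_true = [1, 1, 1, 0, 0, 0, 0, 1]   # invented ground-truth labels
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]   # invented classifier output
m = classifier_metrics(y_true, y_pred)
```

The same three quantities, plus the area under the ROC curve, are the criteria used across the classifier comparisons collected here.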
Time vs. Money: A Quantitative Evaluation of Monitoring Frequency vs. Monitoring Duration.
McHugh, Thomas E; Kulkarni, Poonam R; Newell, Charles J
2016-09-01
The National Research Council has estimated that over 126,000 contaminated groundwater sites are unlikely to achieve low µg/L clean-up goals in the foreseeable future. At these sites, cost-effective, long-term monitoring schemes are needed in order to understand the long-term changes in contaminant concentrations. Current monitoring optimization schemes rely on site-specific evaluations to optimize groundwater monitoring frequency. However, when using linear regression to estimate the long-term zero-order or first-order contaminant attenuation rate, the effect of monitoring frequency and monitoring duration on the accuracy and confidence of the estimated attenuation rate is not site-specific. For a fixed number of monitoring events, doubling the time between monitoring events (e.g., changing from quarterly monitoring to semi-annual monitoring) will double the accuracy of the estimated attenuation rate. For a fixed monitoring frequency (e.g., semi-annual monitoring), increasing the number of monitoring events by 60% will double the accuracy of the estimated attenuation rate. Combining these two factors, doubling the time between monitoring events (e.g., quarterly monitoring to semi-annual monitoring) while decreasing the total number of monitoring events by 38% will result in no change in the accuracy of the estimated attenuation rate. However, the time required to collect this dataset will increase by 25%. Understanding that the trade-off between monitoring frequency and monitoring duration is not site-specific should simplify the process of optimizing groundwater monitoring frequency at contaminated groundwater sites. © 2016 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
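The non-site-specific trade-off described above follows from the standard error of an ordinary least squares slope, SE(slope) = sigma / sqrt(Sxx). A minimal sketch, assuming evenly spaced sampling and constant measurement noise (the event counts are invented examples):

```python
import math

def slope_se_factor(n, dt):
    """Relative standard error of a fitted slope for n evenly spaced
    samples dt apart, assuming constant measurement noise sigma:
    SE(slope) = sigma / sqrt(sum((t_i - t_mean)^2)). Returned in units of sigma."""
    times = [i * dt for i in range(n)]
    t_mean = sum(times) / n
    sxx = sum((t - t_mean) ** 2 for t in times)
    return 1.0 / math.sqrt(sxx)

quarterly = slope_se_factor(n=20, dt=0.25)   # 20 events, every 3 months (years)
semiannual = slope_se_factor(n=20, dt=0.5)   # 20 events, every 6 months

# With the number of events fixed, doubling the spacing halves the standard
# error of the estimated attenuation rate, i.e., doubles its accuracy.
```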
Accuracy assessment of linear spectral mixture model due to terrain undulation
NASA Astrophysics Data System (ADS)
Wang, Tianxing; Chen, Songlin; Ma, Ya
2008-12-01
Mixture spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportion of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of LSMM is restricted by a variety of factors, but research on LSMM has mostly focused on appraisal of nonlinear effects relating to the model itself and on techniques used to select endmembers; unfortunately, the environmental conditions of the study area that can sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper probes emphatically into the accuracy uncertainty of LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. Based on this, fractional abundances for different cover types were extracted from both pre- and post-C terrain illumination corrected ASTER data using LSMM. Simultaneously, regression analyses and an IKONOS image were introduced to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of LSMM in mountain areas. Specifically, for vegetation abundances, an improved unmixing accuracy in R2 of 17.6% (regression against NDVI) and 18.6% (regression against MVI) was achieved by removing terrain undulation. Overall, this study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying LSMM. This paper could also provide a new instance for LSMM applications in mountainous areas. 
In addition, the methods employed in this study could be used to evaluate different terrain undulation correction algorithms in further studies.
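For illustration, the linear spectral mixture model reduces, in the two-endmember case, to a closed-form least squares fraction estimate. A minimal sketch with invented endmember spectra (not ASTER data):

```python
# Sketch of the linear spectral mixture model for two endmembers:
# each pixel spectrum is modeled as f*E1 + (1-f)*E2 and the fraction f
# is recovered by least squares (closed form in the two-endmember case).

def unmix_two_endmembers(pixel, e1, e2):
    """Return the fraction f of endmember e1 that minimizes
    ||pixel - (f*e1 + (1-f)*e2)||^2, clamped to the physical range [0, 1]."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    f = num / den
    return max(0.0, min(1.0, f))

veg = [0.05, 0.45, 0.50]       # invented vegetation endmember spectrum
soil = [0.25, 0.30, 0.35]      # invented soil endmember spectrum
mixed = [0.15, 0.375, 0.425]   # exact 50/50 mixture of the two
f = unmix_two_endmembers(mixed, veg, soil)
```

Real applications solve the same least squares problem for many endmembers per pixel, usually with sum-to-one and non-negativity constraints.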
Diagnosis of Tempromandibular Disorders Using Local Binary Patterns.
Haghnegahdar, A A; Kolahi, S; Khojastepour, L; Tajeripour, F
2018-03-01
Temporomandibular joint disorder (TMD) may be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and two coronal cuts were prepared from each condyle, with images limited to the head of the mandibular condyle. To extract features from the images, we first used LBP and then the histogram of oriented gradients. To reduce dimensionality, the linear algebra Singular Value Decomposition (SVD) was applied to the feature-vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, with Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. The K nearest neighbor classifier achieved very good accuracy (0.9242), with desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers showed lower accuracy, sensitivity and specificity. We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages.
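As a sketch of the texture descriptor named here, the basic 3x3 LBP operator thresholds each neighbor against the center pixel and packs the resulting bits into an 8-bit code; histograms of these codes form the feature vector. The patch values and bit ordering below are invented for illustration, not the authors' implementation:

```python
# Sketch of the basic 3x3 Local Binary Pattern operator: each neighbor is
# compared with the center pixel and the comparison bits form an 8-bit code.

def lbp_code(patch):
    """patch: 3x3 list of lists of gray values. Returns the 8-bit LBP code,
    reading neighbors clockwise from the top-left corner (assumed ordering)."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbors):
        if v >= c:            # neighbor at least as bright as center -> bit 1
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]           # invented gray values
code = lbp_code(patch)
```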
Saravanan, Vijayakumar; Gautham, Namasivayam
2015-10-01
Proteins embody epitopes that serve as their antigenic determinants. Epitopes occupy a central place in integrative biology, not to mention as targets for novel vaccine, pharmaceutical, and systems diagnostics development. The presence of T-cell and B-cell epitopes has been extensively studied due to their potential in synthetic vaccine design. However, reliable prediction of linear B-cell epitopes remains a formidable challenge. Earlier studies have reported a discrepancy in amino acid composition between epitopes and non-epitopes. Hence, this study proposed and developed a novel amino acid composition-based feature descriptor, Dipeptide Deviation from Expected Mean (DDE), to distinguish linear B-cell epitopes from non-epitopes effectively. In this study, for the first time, only exact linear B-cell epitopes and non-epitopes have been utilized for developing the prediction method, unlike the use of epitope-containing regions in earlier reports. To evaluate the performance of the DDE feature vector, models have been developed with two widely used machine-learning techniques, Support Vector Machine and AdaBoost-Random Forest. Five-fold cross-validation performance of the proposed method with the error-free dataset and datasets from other studies achieved an overall accuracy between nearly 61% and 73%, with balance between the sensitivity and specificity metrics. Performance of the DDE feature vector was better (with an accuracy difference of about 2% to 12%) in comparison to other amino acid-derived features on different datasets. This study reflects the efficiency of the DDE feature vector in enhancing linear B-cell epitope prediction performance compared to other feature representations. The proposed method is made available as a stand-alone tool, freely for researchers, particularly those interested in vaccine design and novel molecular target development for systems therapeutics and diagnostics: https://github.com/brsaran/LBEEP.
NASA Astrophysics Data System (ADS)
Weisz, Elisabeth; Smith, William L.; Smith, Nadia
2013-06-01
The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.
Weaver, Tyler B; Ma, Christine; Laing, Andrew C
2017-02-01
The Nintendo Wii Balance Board (WBB) has become popular as a low-cost alternative to research-grade force plates. The purposes of this study were to characterize a series of technical specifications for the WBB, to compare balance control metrics derived from time-varying center of pressure (COP) signals collected simultaneously from a WBB and a research-grade force plate, and to investigate the effects of battery life. Drift, linearity, hysteresis, mass accuracy, uniformity of response, and COP accuracy were assessed from a WBB. In addition, 6 participants completed an eyes-closed quiet standing task on the WBB (at 3 battery life levels) mounted on a force plate while sway was simultaneously measured by both systems. Characterization results were all associated with less than 1% error. R² values reflecting WBB sensor linearity were > .99. Known and measured COP differences were lowest at the center of the WBB and greatest at the corners. Between-device differences in quiet stance COP summary metrics were of limited clinical significance. Lastly, battery life did not affect WBB COP accuracy, but did influence 2 of 8 quiet stance WBB parameters. This study provides general support for the WBB as a low-cost alternative to research-grade force plates for quantifying COP movement during standing.
Morphological Awareness and Children's Writing: Accuracy, Error, and Invention
McCutchen, Deborah; Stull, Sara
2014-01-01
This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade U.S. students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in spelling and errors of the sort we termed morphological inventions, which entailed inappropriate, novel pairings of stems and suffixes. Regressions were used to determine the relationship between morphological awareness, morphological accuracy, and spelling accuracy, as well as between morphological awareness and morphological inventions. Linear regressions revealed that morphological awareness uniquely predicted children's generation of accurate morphological derivations, regardless of whether or not accurate spelling was required. A logistic regression indicated that morphological awareness was also uniquely predictive of morphological invention, with higher morphological awareness increasing the probability of morphological invention. These findings suggest that morphological knowledge may not only assist children with spelling during writing, but may also assist with word production via generative experimentation with morphological rules during sentence generation. Implications are discussed for the development of children's morphological knowledge and relationships with writing. PMID:25663748
Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.
Erdem, Hamit
2010-10-01
Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
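One common member of the algorithm family studied here is lookup-table linearization with piecewise-linear interpolation between calibration points. The sketch below assumes an invented calibration table for a nonlinear distance sensor; it is illustrative, not one of the paper's six algorithms verbatim:

```python
# Sketch of lookup-table sensor linearization with piecewise-linear
# interpolation, a typical software approach on low-cost microcontrollers.

def linearize(raw, table):
    """table: sorted list of (raw_reading, physical_value) calibration points.
    Returns the physical value by linear interpolation between entries;
    readings outside the table are clamped to the end points."""
    if raw <= table[0][0]:
        return table[0][1]
    if raw >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= raw <= x1:
            return y0 + (raw - x0) * (y1 - y0) / (x1 - x0)

# Invented calibration table: (ADC counts, distance in cm) for a sensor
# whose output falls nonlinearly with distance.
cal = [(100, 80.0), (200, 40.0), (400, 20.0), (800, 10.0)]
d = linearize(300, cal)
```

On an integer microcontroller the same idea is usually implemented with fixed-point arithmetic, trading linearization accuracy against memory for the table and execution time for the search.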
Gordien, Jean-Baptiste; Pigneux, Arnaud; Vigouroux, Stephane; Tabrizi, Reza; Accoceberry, Isabelle; Bernadou, Jean-Marc; Rouault, Audrey; Saux, Marie-Claude; Breilh, Dominique
2009-12-05
A simple, specific and automatable HPLC assay was developed for the simultaneous determination of systemic azoles (fluconazole, posaconazole, voriconazole, itraconazole and its metabolite hydroxy-itraconazole, and ketoconazole) in plasma. The major advantage of this assay was sample preparation by a fully automatable solid phase extraction with Varian Plexa cartridges. A C6-phenyl column was used for chromatographic separation, and UV detection was set at a wavelength of 260 nm. Linezolid was used as an internal standard. The assay was specific and linear over the concentration range of 0.05 to 40 microg/ml, except for fluconazole, which was linear between 0.05 and 100 microg/ml, and itraconazole, between 0.1 and 40 microg/ml. Validation data for intra- and inter-day accuracy and precision were good and satisfied the FDA's guidance: CV between 0.24% and 11.66% and accuracy between 93.8% and 108.7% for all molecules. This assay was applied to therapeutic drug monitoring of patients hospitalized in intensive care and onco-hematologic units.
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, A. V.; Ushenko, V. A.; Ushenko, Yu. A.; Sakhnovskiy, M. Y.; Pavlyukovich, O.; Pavlyukovich, N.; Novakovskaya, O.; Gorsky, M. P.
2016-09-01
A model for the Mueller-matrix description of the mechanisms of optical anisotropy typical of polycrystalline layers of histological sections of biological tissues and fluids (optical activity, birefringence, and linear and circular dichroism) is suggested. Within the statistical analysis of the distributions of linear and circular birefringence and dichroism, objective criteria were determined for the differentiation of myocardium histological sections (determining the cause of death), films of blood plasma (liver pathology), peritoneal fluid (endometriosis of tissues of the female reproductive sphere), and urine (kidney disease). From the point of view of evidence-based medicine, the operational characteristics (sensitivity, specificity and accuracy) of the method of Mueller-matrix reconstruction of optical anisotropy parameters were found.
Ultrasonographic Fetal Weight Estimation: Should Macrosomia-Specific Formulas Be Utilized?
Porter, Blake; Neely, Cherry; Szychowski, Jeff; Owen, John
2015-08-01
This study aims to derive an estimated fetal weight (EFW) formula in macrosomic fetuses, compare its accuracy to the 1986 Hadlock IV formula, and assess whether including maternal diabetes (MDM) improves estimation. Retrospective review of nonanomalous live-born singletons with birth weight (BWT) ≥ 4 kg and biometry within 14 days of birth. Formula accuracy included: (1) mean error (ME = EFW - BWT), (2) absolute mean error (AME = absolute value of [1]), and (3) mean percent error (MPE, [1]/BWT × 100%). Using loge BWT as the dependent variable, multivariable linear regression produced a macrosomic-specific formula in a "training" dataset which was verified by "validation" data. Formulas specific for MDM were also developed. Out of the 403 pregnancies, birth gestational age was 39.5 ± 1.4 weeks, and median BWT was 4,240 g. The macrosomic formula from the training data (n = 201) had associated ME = 54 ± 284 g, AME = 234 ± 167 g, and MPE = 1.6 ± 6.2%; evaluation in the validation dataset (n = 202) showed similar errors. The Hadlock formula had associated ME = -369 ± 422 g, AME = 451 ± 332 g, MPE = -8.3 ± 9.3% (all p < 0.0001). Diabetes-specific formula errors were similar to the macrosomic formula errors (all p = NS). With BWT ≥ 4 kg, the macrosomic formula was significantly more accurate than Hadlock IV, which systematically underestimates fetal/BWT. Diabetes-specific formulas did not improve accuracy. A specific formula should be considered when macrosomia is suspected. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
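For reference, the three formula-accuracy metrics defined in this abstract (ME, AME, MPE) can be sketched as follows; the EFW/BWT values are invented examples, not study data:

```python
# Sketch of the formula-accuracy metrics used to compare EFW formulas:
# mean error (ME), absolute mean error (AME), and mean percent error (MPE).

def weight_errors(efw, bwt):
    """efw: estimated fetal weights (g); bwt: birth weights (g)."""
    n = len(efw)
    errors = [e - b for e, b in zip(efw, bwt)]            # ME components
    me = sum(errors) / n
    ame = sum(abs(err) for err in errors) / n
    mpe = sum(100.0 * err / b for err, b in zip(errors, bwt)) / n
    return me, ame, mpe

efw = [4100.0, 4300.0, 4500.0]   # invented formula estimates (g)
bwt = [4200.0, 4250.0, 4400.0]   # invented birth weights (g)
me, ame, mpe = weight_errors(efw, bwt)
```

A systematic underestimate like the one reported for Hadlock IV on macrosomic fetuses shows up as a strongly negative ME and MPE.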
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with a linear or nonlinear kernel, serving as a learning algorithm for both tasks. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator on the input feature space according to the training data. To classify data which are non-linearly separable, SVM uses kernel tricks to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear kernel, polynomial, radial basis function (RBF) and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved over the baseline kernel results (linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%). However, for bigger data sizes, this method is not practical because it takes a lot of time.
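To illustrate the search strategy, a toy genetic algorithm over a single parameter is sketched below. The fitness function is a stand-in with a known peak, substituting for cross-validated SVM accuracy (training an actual SVM is out of scope here), and all constants (population size, generations, mutation scale) are assumptions:

```python
import random

def fitness(c_param):
    # Stand-in for "classification accuracy as a function of the SVM's C
    # parameter"; constructed to peak at c_param = 10.
    return 1.0 / (1.0 + (c_param - 10.0) ** 2)

def genetic_search(fitness, lo, hi, pop_size=20, generations=60, seed=1):
    """Maximize fitness over [lo, hi] with selection, blend crossover,
    and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                         # blend crossover
            child += rng.gauss(0.0, (hi - lo) * 0.01)     # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

best_c = genetic_search(fitness, lo=0.01, hi=100.0)
```

In the study's setting, `fitness` would be replaced by cross-validated SVM accuracy and the chromosome would encode all kernel parameters (e.g., C and the RBF gamma) jointly.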
Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben
2013-11-01
Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
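A minimal sketch of the two-step weighting strategy (b) above for a mono-exponential diffusion signal S = S0·exp(-b·D): fit the log-signal unweighted, then re-fit with weights equal to the squares of the predicted signals. The b-values and diffusivity are invented, and the data are noiseless for clarity:

```python
import math

def wls_line(x, y, w):
    """Weighted least squares fit of y ~ a + b*x; returns (a, b)."""
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    return ym - b * xm, b

bvals = [0.0, 0.5, 1.0, 1.5, 2.0]             # invented b-values
signal = [math.exp(-b * 0.8) for b in bvals]  # noiseless data, true D = 0.8
logs = [math.log(s) for s in signal]          # log-linearized model

# Step 1: unweighted fit of the log-signal.
a0, d0 = wls_line(bvals, logs, [1.0] * len(bvals))
# Step 2: re-fit with weights = squares of the *predicted* signals
# (strategy (b)), reconstructed from the first-pass estimates.
weights = [math.exp(a0 + d0 * b) ** 2 for b in bvals]
a1, d1 = wls_line(bvals, logs, weights)
diffusivity = -d1
```

With noisy data, the two passes differ: strategy (a) would use the squared noisy signals themselves as weights, which the study found surprisingly harmful to accuracy.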
Algorithm Estimates Microwave Water-Vapor Delay
NASA Technical Reports Server (NTRS)
Robinson, Steven E.
1989-01-01
Accuracy equals or exceeds conventional linear algorithms. "Profile" algorithm improved algorithm using water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in troposphere. Does not require site-specific and weather-dependent empirical parameters other than standard meteorological data, latitude, and altitude for use in conjunction with published standard atmospheric data. Basic premise of profile algorithm, wet-path delay approximated closely by solution to simplified version of nonlinear delay problem and generated numerically from each radiometer observation and simultaneous meteorological data.
NASA Astrophysics Data System (ADS)
Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu
2018-02-01
The molecular level changes associated with oncogenesis precede the morphological changes in cells and tissues; hence, molecular level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of the various biomolecules present in cells and tissues under various pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by Artificial Neural Network (PC-ANN). The overall accuracy achieved was 99%. Further, to get an insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the major biochemicals was fit to the measured Raman spectra of the tissues by a non-negative least squares technique. This technique confirms the changes in major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we utilized Principal Component Analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrated that Raman spectroscopy has the potential to complement the established technique of histopathology.
Luo, Ying-zhen; Tu, Meng; Fan, Fei; Zheng, Jie-qian; Yang, Ming; Li, Tao; Zhang, Kui; Deng, Zhen-hua
2015-06-01
To establish the linear regression equation between body height and the combined length of the manubrium and mesosternum of the sternum measured by the CT volume rendering technique (CT-VRT) in a southwest Han population. One hundred and sixty subjects, including 80 males and 80 females, were selected from the southwest Han population for routine CT-VRT (reconstruction thickness 1 mm) examination. The lengths of both the manubrium and mesosternum were recorded, and the combined length of the manubrium and mesosternum was equal to the algebraic sum of them. The sex-specific linear regression equations between the combined length of manubrium and mesosternum and the real body height of each subject were deduced. The sex-specific simple linear regression equations between the combined length of manubrium and mesosternum (x3) and body height (y) were established (male: y = 135.000+2.118 x3; female: y = 120.790+2.808 x3). Both equations showed statistical significance (P < 0.05) with a 100% predictive accuracy. CT-VRT is an effective method for measurement of the index of the sternum. The combined length of manubrium and mesosternum from CT-VRT can be used for body height estimation in the southwest Han population.
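Applying the published sex-specific equations is straightforward; the sketch below assumes lengths and heights are in centimeters (the abstract does not state units) and uses an invented input length:

```python
# Sketch applying the published sex-specific regression equations:
# estimated body height y from combined manubrium + mesosternum length x3.
# Units assumed to be centimeters; not stated in the abstract.

def estimated_height(x3, sex):
    if sex == "male":
        return 135.000 + 2.118 * x3
    if sex == "female":
        return 120.790 + 2.808 * x3
    raise ValueError("sex must be 'male' or 'female'")

h = estimated_height(16.0, "male")   # invented sternum measurement
```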
NASA Technical Reports Server (NTRS)
Jau, Bruno M.; McKinney, Colin; Smythe, Robert F.; Palmer, Dean L.
2011-01-01
An optical alignment mirror mechanism (AMM) has been developed with angular positioning accuracy of +/-0.2 arcsec. This requires the mirror's linear positioning actuators to have positioning resolutions of +/-112 nm to enable the mirror to meet the angular tip/tilt accuracy requirement. Demonstrated capabilities are 0.1 arcsec angular mirror positioning accuracy, which translates into linear positioning resolutions at the actuator of 50 nm. The mechanism consists of a structure with sets of cross-directional flexures that enable the mirror's tip and tilt motion, a mirror with its kinematic mount, and two linear actuators. An actuator comprises a brushless DC motor, a linear ball screw, and a piezoelectric brake that holds the mirror's position while the unit is unpowered. An interferometric linear position sensor senses the actuator's position. The AMMs were developed for an Astrometric Beam Combiner (ABC) optical bench, which is part of an interferometer development. Custom electronics were also developed to accommodate the presence of multiple AMMs within the ABC and provide a compact, all-in-one solution to power and control the AMMs.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
Wu, Lingtao; Lord, Dominique
2017-05-01
This study further examined the use of regression models for developing crash modification factors (CMFs), specifically focusing on misspecification in the link function. The primary objectives were to validate the accuracy of CMFs derived from the commonly used regression models (i.e., generalized linear models or GLMs with additive linear link functions) when some of the variables have nonlinear relationships, and to quantify the amount of bias as a function of the nonlinearity. Using the concept of artificial realistic data, various linear and nonlinear crash modification functions (CM-Functions) were assumed for three variables. Crash counts were randomly generated based on these CM-Functions. CMFs were then derived from regression models for three different scenarios. The results were compared with the assumed true values. The main findings are summarized as follows: (1) when some variables have nonlinear relationships with crash risk, the CMFs for these variables derived from the commonly used GLMs are all biased, especially around areas away from the baseline conditions (e.g., boundary areas); (2) with the increase in nonlinearity (i.e., as the nonlinear relationship becomes stronger), the bias becomes more significant; (3) the quality of CMFs for other variables having linear relationships can be influenced when mixed with those having nonlinear relationships, but the accuracy may still be acceptable; and (4) the misuse of the link function for one or more variables can also lead to biased estimates for other parameters. This study highlights the importance of the link function when using regression models for developing CMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.
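As a concrete illustration of how a CMF is read off a fitted model: crash-frequency GLMs are commonly specified with a log link, under which a variable's CMF relative to its baseline value is the exponential of its coefficient times the change from baseline. A minimal sketch under that assumed specification (the coefficient and values below are hypothetical, not from this study):

```python
import math

def cmf_loglinear(beta, x, x_base):
    """CMF implied by a log-linear GLM term: exp(beta * (x - x_base)),
    i.e. the multiplicative change in expected crash frequency relative
    to the baseline condition x_base."""
    return math.exp(beta * (x - x_base))

# Hypothetical coefficient beta = 0.05 per unit of x, baseline x = 0
print(round(cmf_loglinear(0.05, 10, 0), 3))  # 1.649
```

The study's point is that when the true CM-Function is nonlinear in x, this exponential form is misspecified and the resulting CMF is biased, increasingly so far from x_base.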
NASA Astrophysics Data System (ADS)
Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.
2018-02-01
Light detection and ranging (lidar) data have been increasingly used for forest classification due to their ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both the random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that had not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers, which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than that of linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
Accuracy and reliability of stitched cone-beam computed tomography images.
Egbert, Nicholas; Cagna, David R; Ahuja, Swati; Wicks, Russell A
2015-03-01
This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm, with a 95% confidence interval of 0.24 to 0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.
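The comparison reported above (mean difference between paired measurements, with a 95% CI) can be reproduced from paired caliper and CBCT readings. A minimal sketch using the normal approximation; the sample data are hypothetical:

```python
import math

def paired_difference(control, test, z=1.96):
    """Mean of the pairwise differences (test - control), their sample SD,
    and an approximate 95% confidence interval for the mean difference."""
    diffs = [t - c for c, t in zip(control, test)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half = z * sd / math.sqrt(n)
    return mean, sd, (mean - half, mean + half)

# Hypothetical paired measurements (mm): caliper vs. stitched CBCT
mean, sd, ci = paired_difference([10.0, 20.0, 30.0], [10.3, 20.4, 30.2])
```

With ten fiduciary markers, as in the study, a t-based interval (t ≈ 2.26 for 9 degrees of freedom) would be slightly wider than the z-based one sketched here.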
Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V
2000-08-01
We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Predicting data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic; however, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
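The simple-regression baseline in this comparison is a one-predictor least-squares fit against time; the multiple-regression variant adds the other correlated sensor readings (e.g., temperature alongside humidity) as extra predictors. A minimal sketch of the one-predictor case (the readings are hypothetical):

```python
def simple_linear_fit(x, y):
    """Least-squares fit of y ≈ a + b*x (simple linear regression).
    Returns the intercept a and slope b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical sensor readings: y roughly tracks x with slope 2
a, b = simple_linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.0])
```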
Phi, Xuan-Anh; Houssami, Nehmat; Hooning, Maartje J; Riedl, Christopher C; Leach, Martin O; Sardanelli, Francesco; Warner, Ellen; Trop, Isabelle; Saadatmand, Sepideh; Tilanus-Linthorst, Madeleine M A; Helbich, Thomas H; van den Heuvel, Edwin R; de Koning, Harry J; Obdeijn, Inge-Marie; de Bock, Geertruida H
2017-11-01
Women with a strong family history of breast cancer (BC) and without a known gene mutation have an increased risk of developing BC. We aimed to investigate the accuracy of screening using annual mammography with or without magnetic resonance imaging (MRI) for these women outside the general population screening program. An individual patient data (IPD) meta-analysis was conducted using IPD from six prospective screening trials that had included women at increased risk for BC: only women with a strong familial risk for BC and without a known gene mutation were included in this analysis. A generalised linear mixed model was applied to estimate and compare screening accuracy (sensitivity, specificity and predictive values) for annual mammography with or without MRI. There were 2226 women (median age: 41 years, interquartile range 35-47) with 7478 woman-years of follow-up, with a BC rate of 12 (95% confidence interval 9.3-14) per 1000 woman-years. Mammography screening had a sensitivity of 55% (standard error of the mean [SE] 7.0) and a specificity of 94% (SE 1.3). Screening with MRI alone had a sensitivity of 89% (SE 4.6) and a specificity of 83% (SE 2.8). Adding MRI to mammography increased sensitivity to 98% (SE 1.8, P < 0.01 compared to mammography alone) but lowered specificity to 79% (SE 2.7, P < 0.01 compared with mammography alone). In this population of women with strong familial BC risk but without a known gene mutation, in whom BC incidence was high both before and after age 50, adding MRI to mammography substantially increased screening sensitivity but also decreased its specificity. Copyright © 2017 Elsevier Ltd. All rights reserved.
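The accuracy measures compared above all derive from the standard 2x2 screening table. A minimal sketch (the counts in the example are hypothetical, chosen only to echo the reported sensitivities):

```python
def screening_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table:
    tp/fn = cancers detected/missed; tn/fp = unaffected women correctly
    cleared / incorrectly recalled."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical screening round: 89 of 100 cancers detected,
# 830 of 1000 unaffected women correctly cleared
m = screening_measures(tp=89, fp=170, fn=11, tn=830)
print(m["sensitivity"], m["specificity"])  # 0.89 0.83
```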
Diagnostic potential of Raman spectroscopy in Barrett's esophagus
NASA Astrophysics Data System (ADS)
Wong Kee Song, Louis-Michel; Molckovsky, Andrea; Wang, Kenneth K.; Burgart, Lawrence J.; Dolenko, Brion; Somorjai, Rajmund L.; Wilson, Brian C.
2005-04-01
Patients with Barrett's esophagus (BE) undergo periodic endoscopic surveillance with random biopsies in an effort to detect dysplastic or early cancerous lesions. Surveillance may be enhanced by near-infrared Raman spectroscopy (NIRS), which has the potential to identify endoscopically-occult dysplastic lesions within the Barrett's segment and allow for targeted biopsies. The aim of this study was to assess the diagnostic performance of NIRS for identifying dysplastic lesions in BE in vivo. Raman spectra (Pexc=70 mW; t=5 s) were collected from Barrett's mucosa at endoscopy using a custom-built NIRS system (λexc=785 nm) equipped with a filtered fiber-optic probe. Each probed site was biopsied for matching histological diagnosis as assessed by an expert pathologist. Diagnostic algorithms were developed using genetic algorithm-based feature selection and linear discriminant analysis, and classification was performed on all spectra with a bootstrap-based cross-validation scheme. The analysis comprised 192 samples (112 non-dysplastic, 54 low-grade dysplasia and 26 high-grade dysplasia/early adenocarcinoma) from 65 patients. Compared with histology, NIRS differentiated dysplastic from non-dysplastic Barrett's samples with 86% sensitivity, 88% specificity and 87% accuracy. NIRS identified 'high-risk' lesions (high-grade dysplasia/early adenocarcinoma) with 88% sensitivity, 89% specificity and 89% accuracy. In the present study, NIRS classified Barrett's epithelia with high and clinically-useful diagnostic accuracy.
Ananthula, Suryatheja; Janagam, Dileep R; Jamalapuram, Seshulatha; Johnson, James R; Mandrell, Timothy D; Lowe, Tao L
2015-10-15
A rapid, sensitive, selective and accurate LC/MS/MS method was developed for the quantitative determination of levonorgestrel (LNG) in rat plasma and further validated for specificity, linearity, accuracy, precision, sensitivity, matrix effect, recovery efficiency and stability. A liquid-liquid extraction procedure using a hexane:ethyl acetate mixture at an 80:20 v:v ratio was employed to efficiently extract LNG from rat plasma. A reversed-phase Luna C18(2) column (50×2.0 mm i.d., 3 μm) installed on an AB SCIEX Triple Quad™ 4500 LC/MS/MS system was used to perform the chromatographic separation. LNG was identified within 2 min with high specificity. A linear calibration curve was obtained within the 0.5-50 ng·mL(-1) concentration range. The developed method was validated for intra-day and inter-day accuracy and precision, whose values fell within the acceptable limits. The matrix effect was found to be minimal. Recovery efficiency at three quality control (QC) concentrations, 0.5 (low), 5 (medium) and 50 (high) ng·mL(-1), was found to be >90%. The stability of LNG at various stages of the experiment, including storage, extraction and analysis, was evaluated using QC samples, and the results showed that LNG was stable under all conditions. This validated method was successfully used to study the pharmacokinetics of LNG in rats after SubQ injection, demonstrating its applicability in relevant preclinical studies. Copyright © 2015 Elsevier B.V. All rights reserved.
Blum, Emily S; Porras, Antonio R; Biggs, Elijah; Tabrizi, Pooneh R; Sussman, Rachael D; Sprague, Bruce M; Shalaby-Rana, Eglal; Majd, Massoud; Pohl, Hans G; Linguraru, Marius George
2017-10-21
We sought to define features that describe the dynamic information in diuresis renograms for the early detection of clinically significant hydronephrosis caused by ureteropelvic junction obstruction. We studied the diuresis renograms of 55 patients with a mean ± SD age of 75 ± 66 days who had congenital hydronephrosis at initial presentation. Five patients had bilaterally affected kidneys, for a total of 60 diuresis renograms. Surgery was performed on 35 kidneys. We extracted 45 features based on curve shape and wavelet analysis from the drainage curves recorded after furosemide administration. The optimal features were selected as the combination that maximized the ROC AUC obtained from a linear support vector machine classifier trained to classify patients as with or without obstruction. Using these optimal features, we performed leave-one-out cross-validation to estimate the accuracy, sensitivity and specificity of our framework. Results were compared to those obtained using post-diuresis drainage half-time and the percent of clearance after 30 minutes. Our framework had 93% accuracy, including 91% sensitivity and 96% specificity, in predicting surgical cases. This was a significant improvement over the 82% accuracy, including 71% sensitivity and 96% specificity, obtained from half-time and 30-minute clearance using the optimal thresholds of 24.57 minutes and 55.77%, respectively. Our machine learning framework significantly improved the diagnostic accuracy of clinically significant hydronephrosis compared to half-time and 30-minute clearance. This aids the clinical decision-making process by offering a tool for earlier detection of severe cases, and it has the potential to reduce the number of diuresis renograms required for diagnosis. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
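The leave-one-out evaluation above can be sketched generically: each sample is held out in turn, the classifier is refit on the remaining samples, and the held-out prediction is scored. The paper uses a linear SVM over the selected features; the nearest-class-mean stand-in below (on a single scalar feature) just keeps the sketch self-contained, and the data are hypothetical:

```python
def nearest_mean_fit(X, y):
    """Per-class mean of a single scalar feature (stand-in classifier)."""
    means = {}
    for label in set(y):
        vals = [x for x, lab in zip(X, y) if lab == label]
        means[label] = sum(vals) / len(vals)
    return means

def nearest_mean_predict(means, x):
    """Assign x to the class whose mean is closest."""
    return min(means, key=lambda label: abs(means[label] - x))

def loo_accuracy(X, y, fit=nearest_mean_fit, predict=nearest_mean_predict):
    """Leave-one-out cross-validation accuracy."""
    hits = 0
    for i in range(len(X)):
        model = fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += predict(model, X[i]) == y[i]
    return hits / len(X)

# Hypothetical 1-D feature values and surgery labels
X = [1.0, 1.2, 0.9, 5.0, 5.2, 4.9]
y = ["no", "no", "no", "yes", "yes", "yes"]
print(loo_accuracy(X, y))  # 1.0
```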
Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations
Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra
2014-01-01
An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
NASA Astrophysics Data System (ADS)
Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.
2017-11-01
We consider the methodology of numerical scheme development for the two-dimensional vortex method. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. We model the velocity on the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise-constant or piecewise-linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have different computational costs. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of vortex sheet intensity computation and improve the quality of the velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written down in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
Optimization and qualification of an Fc Array assay for assessments of antibodies against HIV-1/SIV.
Brown, Eric P; Weiner, Joshua A; Lin, Shu; Natarajan, Harini; Normandin, Erica; Barouch, Dan H; Alter, Galit; Sarzotti-Kelsoe, Marcella; Ackerman, Margaret E
2018-04-01
The Fc Array is a multiplexed assay that assesses the Fc domain characteristics of antigen-specific antibodies with the potential to evaluate up to 500 antigen specificities simultaneously. Antigen-specific antibodies are captured on antigen-conjugated beads and their functional capacity is probed via an array of Fc-binding proteins including antibody subclassing reagents, Fcγ receptors, complement proteins, and lectins. Here we present the results of the optimization and formal qualification of the Fc Array, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. Assay conditions were optimized for performance and reproducibility, and the final version of the assay was then evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Assessment of sexual orientation using the hemodynamic brain response to visual sexual stimuli.
Ponseti, Jorge; Granert, Oliver; Jansen, Olav; Wolff, Stephan; Mehdorn, Hubertus; Bosinski, Hartmut; Siebner, Hartwig
2009-06-01
The assessment of sexual orientation is of importance to the diagnosis and treatment of sex offenders and paraphilic disorders. Phallometry is considered the gold standard in objectifying sexual orientation, yet this measurement has been criticized because of its intrusiveness and limited reliability. To evaluate whether the spatial response pattern to sexual stimuli, as revealed by a change in blood oxygen level-dependent (BOLD) signal, can be used for individual classification of sexual orientation. We used a preexisting functional MRI (fMRI) data set that had been acquired in a nonclinical sample of 12 heterosexual men and 14 homosexual men. During fMRI, participants were briefly exposed to pictures of same-sex and opposite-sex genitals. Data analysis involved four steps: (i) differences in the BOLD response to female and male sexual stimuli were calculated for each subject; (ii) these contrast images were entered into a group analysis to calculate whole-brain difference maps between homosexual and heterosexual participants; (iii) a single expression value was computed for each subject expressing its correspondence to the group result; and (iv) based on these expression values, Fisher's linear discriminant analysis and the k-nearest neighbor classification method were used to predict the sexual orientation of each subject. The main outcome measures were the sensitivity and specificity of the two classification methods in predicting individual sexual orientation. Both classification methods performed well in predicting individual sexual orientation with a mean accuracy of >85% (Fisher's linear discriminant analysis: 92% sensitivity, 85% specificity; k-nearest neighbor classification: 88% sensitivity, 92% specificity). Despite the small sample size, the functional response patterns of the brain to sexual stimuli contained sufficient information to predict individual sexual orientation with high accuracy.
These results suggest that fMRI-based classification methods hold promise for the diagnosis of paraphilic disorders (e.g., pedophilia).
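Step (iv), k-nearest-neighbor classification on a single expression value per subject, can be sketched in a few lines; the value of k and the expression values below are illustrative, not the study's data:

```python
def knn_predict(train_values, train_labels, x, k=3):
    """Majority vote among the k training subjects whose expression
    values lie closest to x."""
    nearest = sorted(zip(train_values, train_labels),
                     key=lambda pair: abs(pair[0] - x))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Illustrative expression values for the two groups
values = [-1.2, -0.8, -1.0, 0.9, 1.1, 1.3]
labels = ["heterosexual"] * 3 + ["homosexual"] * 3
print(knn_predict(values, labels, -0.9))  # heterosexual
```

Fisher's linear discriminant step reduces, in this one-dimensional setting, to thresholding the expression value; k-NN instead votes among the nearest training subjects.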
Amen, Daniel G; Willeumier, Kristen; Omalu, Bennet; Newberg, Andrew; Raghavendra, Cauligi; Raji, Cyrus A
2016-04-25
National Football League (NFL) players are exposed to multiple head collisions during their careers. Increasing awareness of the adverse long-term effects of repetitive head trauma has raised substantial concern among players, medical professionals, and the general public. To determine whether low perfusion in specific brain regions on neuroimaging can accurately separate professional football players from healthy controls. A cohort of retired and current NFL players (n = 161) were recruited in a longitudinal study starting in 2009 with ongoing interval follow up. A healthy control group (n = 124) was separately recruited for comparison. Assessments included medical examinations, neuropsychological tests, and perfusion neuroimaging with single photon emission computed tomography (SPECT). Perfusion estimates of each scan were quantified using a standard atlas. We hypothesized that hypoperfusion particularly in the orbital frontal, anterior cingulate, anterior temporal, hippocampal, amygdala, insular, caudate, superior/mid occipital, and cerebellar sub-regions alone would reliably separate controls from NFL players. Cerebral perfusion differences were calculated using a one-way ANOVA and diagnostic separation was determined with discriminant and automatic linear regression predictive models. NFL players showed lower cerebral perfusion on average (p < 0.01) in 36 brain regions. The discriminant analysis subsequently distinguished NFL players from controls with 90% sensitivity, 86% specificity, and 94% accuracy (95% CI 95-99). Automatic linear modeling achieved similar results. Inclusion of age and clinical co-morbidities did not improve diagnostic classification. Specific brain regions commonly damaged in traumatic brain injury show abnormally low perfusion on SPECT in professional NFL players. These same regions alone can distinguish this group from healthy subjects with high diagnostic accuracy. 
This study carries implications for the neurological safety of NFL players.
Phytonadione Content in Branded Intravenous Fat Emulsions.
Forchielli, Maria Luisa; Conti, Matteo; Motta, Roberto; Puggioli, Cristina; Bersani, Germana
2017-03-01
Intravenous fat emulsions (IVFE) with different fatty acid compositions contain vitamin E as a by-product of vegetable and animal oil during the refining processes. Likewise, other lipid-soluble vitamins may be present in IVFE. No data, however, exist about phytonadione (vitamin K1) concentration in IVFE information leaflets. Therefore, our aim was to evaluate the phytonadione content in different IVFE. Analyses were carried out in triplicate on 6 branded IVFE as follows: 30% soybean oil (100%), 20% olive-soybean oil (80%-20%), 20% soybean-medium-chain triglyceride (MCT) coconut oil (50%-50%), 20% soybean-olive-MCT-fish oil (30%-25%-30%-15%), 20% soybean-MCT-fish oil (40%-50%-10%), and 10% pure fish oil (100%). Phytonadione was analyzed and quantified by a quali-quantitative liquid chromatography-mass spectrometry (LC-MS) method after its extraction from the IVFE by an isopropyl alcohol-hexane mixture, reverse-phase liquid chromatography, and specific multiple-reaction monitoring for phytonadione and vitamin D3 (as internal standard). This method was validated for specificity, linearity, and accuracy. Average vitamin K1 content was 500, 100, 90, 100, 95, and 70 µg/L in the soybean oil, olive-soybean oil, soybean-MCT coconut oil, soybean-olive-MCT-fish oil, soybean-MCT-fish oil, and pure fish oil intravenous lipid emulsions (ILEs), respectively. The analytical LC-MS method was extremely effective in terms of specificity, linearity (r = 0.99), and accuracy (coefficient of variation <5%). Phytonadione is present in IVFE, and its intake varies according to IVFE type and the volume administered. It can contribute to daily requirements and become clinically relevant when simultaneously infused with multivitamins during long-term parenteral nutrition. LC-MS seems adequate for assessing vitamin K1 intake in IVFE.
Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias
2014-11-01
Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.
Cheng, Y; Cai, Y; Wang, Y
2014-01-01
The aim of this study was to assess the accuracy of ultrasonography in the diagnosis of chronic lateral ankle ligament injury. A total of 120 ankles in 120 patients with a clinical suspicion of chronic ankle ligament injury were examined by ultrasonography using a 5- to 17-MHz linear array transducer before surgery. The results of ultrasonography were compared with the operative findings. There were 18 sprains, 24 partial tears and 52 complete tears of the anterior talofibular ligament (ATFL); 26 sprains, 27 partial tears and 12 complete tears of the calcaneofibular ligament (CFL); and 1 complete tear of the posterior talofibular ligament (PTFL) at arthroscopy and operation. Compared with operative findings, the sensitivity, specificity and accuracy of ultrasonography were 98.9%, 96.2% and 84.2%, respectively, for injury of the ATFL and 93.8%, 90.9% and 83.3%, respectively, for injury of the CFL. The PTFL tear was identified by ultrasonography. The accuracy of identification did not differ between acute-on-chronic and subacute-chronic patients. The accuracies of diagnosing the three grades of ATFL injuries were almost the same as those of diagnosing CFL injuries. Ultrasonography provides useful information for the evaluation of patients presenting with chronic pain after ankle sprain. Intraoperative findings are the reference standard. We demonstrated that ultrasonography was highly sensitive and specific in detecting chronic lateral ligament injury of the ankle joint.
Scoring and staging systems using Cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
Cross-beam energy transfer: On the accuracy of linear stationary models in the linear kinetic regime
NASA Astrophysics Data System (ADS)
Debayle, A.; Masson-Laborde, P.-E.; Ruyer, C.; Casanova, M.; Loiseau, P.
2018-05-01
We present an extensive numerical study by means of particle-in-cell simulations of the energy transfer that occurs during the crossing of two laser beams. In the linear regime, when ions are not trapped in the potential well induced by the laser interference pattern, a very good agreement is obtained with a simple linear stationary model, provided the laser intensity is sufficiently smooth. These comparisons include different plasma compositions to cover the strong and weak Landau damping regimes as well as the multispecies case. The correct evaluation of the linear Landau damping at the phase velocity imposed by the laser interference pattern is essential to estimate the energy transfer rate between the laser beams, once the stationary regime is reached. The transient evolution obtained in kinetic simulations is also analysed by means of a full analytical formula that includes 3D beam energy exchange coupled with the ion acoustic wave response. Specific attention is paid to the energy transfer when the laser presents small-scale inhomogeneities. In particular, the energy transfer is reduced when the laser inhomogeneities are comparable with the Landau damping characteristic length of the ion acoustic wave.
NASA Technical Reports Server (NTRS)
Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)
1992-01-01
Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics and mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g., force plate data) and digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to quantify the accuracy impact due to a single-axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.
All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi
2016-01-30
This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced for one-point calibration support, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration over a range of -20 °C to 100 °C. The sensor consumed 95 μW at a sampling rate of 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while remaining fully synthesizable for future Very Large Scale Integration (VLSI) systems.
Relationship Between Motor Variability, Accuracy, and Ball Speed in the Tennis Serve
Antúnez, Ruperto Menayo; Hernández, Francisco Javier Moreno; García, Juan Pedro Fuentes; Vaíllo, Raúl Reina; Arroyo, Jesús Sebastián Damas
2012-01-01
The main objective of this study was to analyze motor variability in the performance of the tennis serve and its relationship to performance outcome. Seventeen male tennis players took part in the research, and each performed 20 serves. Linear and non-linear variability during the hand movement was measured by 3D Motion Tracking. Ball speed was recorded with a sports radar gun, and the ball bounces were video recorded to calculate accuracy. The results showed a relationship between both the amount and the non-linear structure of movement variability and the outcome of the serve. The study also found that movement predictability correlates with performance. An increase in the amount of movement variability could negatively affect tennis serve performance by reducing the speed and accuracy of the ball. PMID:23486998
Diagnosis of Temporomandibular Disorders Using Local Binary Patterns
Haghnegahdar, A.A.; Kolahi, S.; Khojastepour, L.; Tajeripour, F.
2018-01-01
Background: Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. Material and Methods: CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and two coronal cuts were prepared from each condyle, with images limited to the head of the mandibular condyle. To extract image features, we first applied LBP and then histograms of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) was applied to the feature-vector matrix of all images. For evaluation, we used K-nearest-neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, and Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. Results: The K-nearest-neighbor classifier achieved very good accuracy (0.9242) together with desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers showed lower accuracy, sensitivity and specificity. Conclusion: We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages. PMID:29732343
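The LBP feature extraction described above can be sketched in a few lines. This is a minimal illustration of the basic 3x3 operator and its 256-bin histogram only; the toy image, function names, and the omission of the histogram-of-oriented-gradients and SVD steps are assumptions for illustration, not the authors' CBCT pipeline.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value, one bit each."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbour offsets
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h - 1, 1:w - 1]
    for k, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (nb >= centre).astype(np.uint8) << k
    return out

def lbp_histogram(img):
    """Normalized 256-bin LBP code histogram used as a texture feature vector."""
    codes = lbp_image(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In a full system, per-image histograms like these would be stacked into the feature matrix to which SVD and the classifiers are then applied.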
Lee, Ji Hyun; Kang, Gihaeng; Park, Han Na; Kim, Jihee; Kim, Nam Sook; Park, Seongsoo; Park, Sung-Kwan; Baek, Sun Young; Kang, Hoil
2018-02-01
In this study, we developed a UPLC-PDA and LC-Q-TOF/MS method to identify and measure the following prohibited substances that may be found in dietary supplements: triaminodil, minoxidil, bimatoprost, alimemazine, diphenylcyclopropenone, α-tradiol, finasteride, methyltestosterone, spironolactone, flutamide, cyproterone, dutasteride, and testosterone 17-propionate. The method was validated according to International Conference on Harmonization guidelines in terms of specificity, linearity, accuracy, precision, LOD, LOQ, recovery, and stability, showing satisfactory data for all validation parameters. The linearity was good (R² > 0.999), with intra- and inter-day precision values of 0.2-3.4% and 0.3-2.9%, respectively. Moreover, the intra- and inter-day accuracies were 87-102% and 86-103%, respectively, and the precision was better than 9.4% (relative standard deviation). Hence, the proposed method is precise and of high quality, and can be utilised to comprehensively and continually monitor illegal drug adulteration in various forms of dietary supplements. Furthermore, to evaluate the applicability of the proposed method, we analysed 13 hair-growth compounds in 78 samples including food and dietary supplements. Minoxidil and triaminodil were detected in capsules at concentrations of 4.69 mg/g and 6.54 mg/g, and finasteride was detected in a tablet at 13.45 mg/g. In addition, the major characteristic fragment ions were confirmed using LC-Q-TOF/MS for higher accuracy.
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction.
For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
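For intuition on why an SVD of the genotype matrix makes marker-effect estimation cheap, here is a minimal ridge-regression (SNP-BLUP-style) sketch. The function name, toy dimensions, and single shrinkage parameter `lam` are assumptions for illustration; the sketch does not implement the paper's BayesC posterior-probability machinery.

```python
import numpy as np

def snp_blup_svd(Z, y, lam):
    """Ridge (SNP-BLUP-style) marker effects via SVD of the genotype
    matrix Z (n_individuals x n_markers):
        beta = V diag(s / (s^2 + lam)) U^T y
    Once U, s, V are computed, re-solving for a new lam or new
    phenotype vector y needs only cheap matrix-vector products."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))
```

The SVD-based solution matches the direct normal-equations solve, which is the point: the expensive factorization is done once and reused.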
Accuracy and reliability of stitched cone-beam computed tomography images
Egbert, Nicholas; Cagna, David R.; Wicks, Russell A.
2015-01-01
Purpose: This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Materials and Methods: Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. Results: The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm, with a 95% confidence interval of 0.24-0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. Conclusion: The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets. PMID:25793182
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...) Data recording, calculations, and reporting; (v) Accuracy audit procedures, including sampling and...
Solving Nonlinear Euler Equations with Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2005-01-01
A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm by use of one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10⁻⁶ in order to faithfully simulate entropy, vortical, and acoustical waves.
A Technique of Treating Negative Weights in WENO Schemes
NASA Technical Reports Server (NTRS)
Shi, Jing; Hu, Changqing; Shu, Chi-Wang
2000-01-01
High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. The previous strategy for handling this difficulty was to either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without the need to get rid of them.
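The splitting idea behind such a technique can be sketched as follows: separate the linear weights into a non-negative "positive part" and a non-negative "remainder," normalize each group, and recover the original combination as a scaled difference of two convex combinations. The toy weights, candidate values, and the scaling parameter `theta` are illustrative assumptions, not values taken from the paper.

```python
def split_weights(gamma, theta=3.0):
    """Split linear weights (possibly negative) into two non-negative,
    normalized groups so that the original linear combination equals
    sp * (positive combo) - sm * (negative combo)."""
    gp = [0.5 * (g + theta * abs(g)) for g in gamma]  # non-negative positive part
    gm = [p - g for p, g in zip(gp, gamma)]           # non-negative remainder
    sp, sm = sum(gp), sum(gm)
    return [p / sp for p in gp], [m / sm for m in gm], sp, sm

gamma = [0.7, 0.5, -0.2]          # hypothetical linear weights, one negative
wp, wm, sp, sm = split_weights(gamma)
```

Each normalized group can then go through the standard WENO nonlinear-weighting procedure separately, since neither group contains a negative weight.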
A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.
Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua
2017-01-01
Alzheimer's disease patients are increasing rapidly every year, and researchers increasingly use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease.
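The linear regression classification rule can be sketched compactly: regress the test feature vector onto each class's training samples by least squares and assign the class with the smallest reconstruction residual. This is a generic sketch of the classifier family on toy data; the function name and the tiny orthogonal class subspaces are assumptions, not the paper's 256-dimensional Zernike features.

```python
import numpy as np

def lrc_predict(x, class_mats):
    """Linear Regression Classification: project the test vector onto each
    class's training-sample subspace by least squares and pick the class
    with the smallest reconstruction residual."""
    best_label, best_err = None, np.inf
    for label, X in class_mats.items():      # X: (n_features, n_train)
        beta, *_ = np.linalg.lstsq(X, x, rcond=None)
        err = np.linalg.norm(x - X @ beta)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

The appeal of the rule is that it needs no training beyond storing class-wise sample matrices, which suits small medical datasets.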
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
Evaluation of the Linear Aerospike SR-71 Experiment (LASRE) Oxygen Sensor
NASA Technical Reports Server (NTRS)
Ennix, Kimberly A.; Corpening, Griffin P.; Jarvis, Michele; Chiles, Harry R.
1999-01-01
The Linear Aerospike SR-71 Experiment (LASRE) was a propulsion flight experiment for advanced space vehicles such as the X-33 and reusable launch vehicle. A linear aerospike rocket engine was integrated into a semi-span of an X-33-like lifting body shape (model), and carried on top of an SR-71 aircraft at NASA Dryden Flight Research Center. Because no flight data existed for aerospike nozzles, the primary objective of the LASRE flight experiment was to evaluate flight effects on the engine performance over a range of altitudes and Mach numbers. Because it contained a large quantity of energy in the form of fuel, oxidizer, hypergolics, and gases at very high pressures, the LASRE propulsion system posed a major hazard for fire or explosion. Therefore, a propulsion-hazard mitigation system was created for LASRE that included a nitrogen purge system. Oxygen sensors were a critical part of the nitrogen purge system because they measured purge operation and effectiveness. Because the available oxygen sensors were not designed for flight testing, a laboratory study investigated oxygen-sensor characteristics and accuracy over a range of altitudes and oxygen concentrations. Laboratory test data made it possible to properly calibrate the sensors for flight. Such data also provided a more accurate error prediction than the manufacturer's specification. This predictive accuracy increased confidence in the sensor output during critical phases of the flight. This paper presents the findings of this laboratory test.
Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K
2015-01-01
In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows an absorption maximum (λmax) at 289 nm in distilled water, simulated tears and phosphate buffer saline. The linearity range of the developed method was 3-30 μg/ml of drug, with correlation coefficients (r²) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate buffer saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limit of detection in the different media was found to be 0.62, 0.72 and 0.88 μg/ml, respectively, and the limit of quantification 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonization guidelines with respect to specificity, linearity, range, accuracy, precision and robustness, and was found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun
1993-01-01
The conventional method of imposing time dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
Temporal Lobe Epilepsy: Quantitative MR Volumetry in Detection of Hippocampal Atrophy
Farid, Nikdokht; Girard, Holly M.; Kemmotsu, Nobuko; Smith, Michael E.; Magda, Sebastian W.; Lim, Wei Y.; Lee, Roland R.
2012-01-01
Purpose: To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). Materials and Methods: This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration–cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Results: Quantitative MR imaging–derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%–89.5%) and specificity (92.2%–94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Conclusion: Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. 
Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112638/-/DC1 PMID:22723496
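The volumetric asymmetry measure underlying such lateralization can be illustrated with a toy rule: compute a normalized left-right difference and flag the side with the smaller (atrophic) hippocampus. The normalization formula, the 0.05 threshold, and the function names are assumptions for illustration only, not NeuroQuant's actual criteria.

```python
def asymmetry_index(left_vol, right_vol):
    """Normalized volumetric asymmetry: 0 for symmetric structures,
    positive when the right side is larger, negative when the left is."""
    return (right_vol - left_vol) / ((right_vol + left_vol) / 2.0)

def lateralize(left_vol, right_vol, threshold=0.05):
    """Toy lateralization rule: the smaller (atrophic) hippocampus marks
    the suspected seizure side once asymmetry exceeds a threshold."""
    ai = asymmetry_index(left_vol, right_vol)
    if ai > threshold:
        return "left"      # left hippocampus smaller -> left TLE suspected
    if ai < -threshold:
        return "right"
    return "indeterminate"
```

In practice the threshold would be chosen from a control distribution, which is what the ROC analysis in the study estimates.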
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model (GLMM) analysis indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable for exploring age and other parameters of development; in this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Ex vivo characterization of normal and adenocarcinoma colon samples by Mueller matrix polarimetry.
Ahmad, Iftikhar; Ahmad, Manzoor; Khan, Karim; Ashraf, Sumara; Ahmad, Shakil; Ikram, Masroor
2015-05-01
Mueller matrix polarimetry along with a polar decomposition algorithm was employed for the characterization of ex vivo normal and adenocarcinoma human colon tissues by polarized light in the visible spectral range (425-725 nm). Six derived polarization metrics [total diattenuation (DT), total retardance (RT), total depolarization (ΔT), linear diattenuation (DL), linear retardance (δ), and linear depolarization (ΔL)] were compared for normal and adenocarcinoma colon tissue samples. The results show that all six polarimetric properties for adenocarcinoma samples were significantly higher than for the normal samples at all wavelengths. The Wilcoxon rank sum test illustrated that total retardance is a good candidate for the discrimination of normal and adenocarcinoma colon samples. Support vector machine classification of normal and adenocarcinoma samples based on four polarization property spectra (ΔT, ΔL, RT, and δ) yielded 100% accuracy, sensitivity, and specificity, while DT and DL showed 66.6%, 33.3%, and 83.3% accuracy, sensitivity, and specificity, respectively. The combination of polarization analysis and the given classification methods provides a framework to distinguish normal and cancerous tissues.
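Of the six metrics, total diattenuation is the simplest to derive from a measured Mueller matrix, since it depends only on the first row. A minimal sketch follows; the ideal-linear-polarizer test matrix in the usage note is a textbook example, not data from the study.

```python
import numpy as np

def total_diattenuation(M):
    """Total diattenuation of a 4x4 Mueller matrix: the norm of the
    polarizing components of the first row, normalized by the
    unpolarized transmittance m00."""
    M = np.asarray(M, dtype=float)
    return np.sqrt(M[0, 1]**2 + M[0, 2]**2 + M[0, 3]**2) / M[0, 0]
```

For an ideal horizontal linear polarizer, 0.5 * [[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], this returns 1; for the identity (non-polarizing) matrix it returns 0.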
Prostate lesion detection and localization based on locality alignment discriminant analysis
NASA Astrophysics Data System (ADS)
Lin, Mingquan; Chen, Weifu; Zhao, Mingbo; Gibson, Eli; Bastian-Jordan, Matthew; Cool, Derek W.; Kassam, Zahra; Chow, Tommy W. S.; Ward, Aaron; Chiu, Bernard
2017-03-01
Prostatic adenocarcinoma is one of the most commonly occurring cancers among men in the world, and it is also among the most curable cancers when detected early. Multiparametric MRI (mpMRI) combines anatomic and functional prostate imaging techniques, which have been shown to produce high sensitivity and specificity in cancer localization; accurate localization is important in planning biopsies and focal therapies. However, in previous investigations, lesion localization was achieved mainly by manual segmentation, which is time-consuming and prone to observer variability. Here, we developed an algorithm based on the locality alignment discriminant analysis (LADA) technique, which can be considered a version of linear discriminant analysis (LDA) localized to patches in the feature space. The sensitivity, specificity and accuracy generated by the proposed LADA algorithm in five prostates were 52.2%, 89.1% and 85.1%, respectively, compared to 31.3%, 85.3% and 80.9% generated by LDA. The delineation accuracy attainable by this tool has the potential to increase the cancer detection rate in biopsies and to minimize collateral damage to surrounding tissues in focal therapies.
Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2014-05-21
Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), the constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to their pharmaceutical formulation and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed no significant difference between the proposed methods and the reported one regarding both accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
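As a concrete illustration, the two variance-stabilizing transformations proposed above can be written down directly. This is a minimal stdlib sketch applied to a single study's sensitivity count; the bivariate LMM fitting itself is beyond a few lines and is omitted:

```python
import math

def arcsine_sqrt(x, n):
    """Arcsine square root transform of a proportion x/n."""
    return math.asin(math.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform of x events out of n
    (classic sum form; a half-sum variant also appears in the literature)."""
    return (math.asin(math.sqrt(x / (n + 1)))
            + math.asin(math.sqrt((x + 1) / (n + 1))))

# Example: 45 true positives among 50 diseased subjects (sensitivity 0.90)
t1 = arcsine_sqrt(45, 50)
t2 = freeman_tukey(45, 50)
print(round(t1, 4), round(t2, 4))
```

Both transforms pull proportions away from the 0/1 boundary, which is what stabilizes the variance before the linear mixed model is applied on the transformed scale.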
Liquid electrolyte informatics using an exhaustive search with linear regression.
Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato
2018-06-14
Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, the properties of disordered liquid solutions have been less studied by data-driven informatics techniques. Here, we examined the estimation accuracy and efficiency of three such techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), using coordination energy and melting point as test liquid properties. We confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can provide the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to choose the balance of "accuracy" and "cost" when a search over a huge number of new materials is carried out.
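The ES-LiR idea, fitting an ordinary least-squares model on every subset of candidate descriptors and ranking the subsets by an error criterion, can be sketched in pure Python. The descriptor names and toy data below are hypothetical, not from the paper:

```python
import itertools

def fit_ols(X, y):
    """Ordinary least squares via the normal equations, solved by
    Gaussian elimination with partial pivoting."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    w = [0.0] * k
    for c in reversed(range(k)):
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, k))) / A[c][c]
    return w

def es_lir(rows, y, names):
    """Fit OLS (with intercept) on every non-empty descriptor subset and
    return the subsets sorted by residual sum of squares."""
    out = []
    for r in range(1, len(names) + 1):
        for sub in itertools.combinations(range(len(names)), r):
            X = [[1.0] + [row[i] for i in sub] for row in rows]
            w = fit_ols(X, y)
            rss = sum((yi - sum(a * b for a, b in zip(w, x))) ** 2
                      for x, yi in zip(X, y))
            out.append((rss, [names[i] for i in sub], w))
    return sorted(out, key=lambda t: t[0])

# Hypothetical descriptors: y depends on d1 and d3 only
rows = [[1, 5, 2], [2, 3, 1], [3, 8, 4], [4, 1, 3], [5, 6, 5], [6, 2, 0]]
y = [2 * a + 0.5 * c + 1 for a, _, c in rows]
best = es_lir(rows, y, ["d1", "d2", "d3"])[0]
print(best[1])
```

Because every subset is fitted, the sorted output is exactly the kind of exhaustive accuracy-versus-descriptor-set ranking from which a weight diagram can be drawn; the cost grows as 2^k, which is why the method suits modest descriptor counts.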
Prediction of siRNA potency using sparse logistic regression.
Hu, Wei; Hu, John
2014-06-01
RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as a trigger for the RNAi gene inhibition mechanism, and therefore is a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position specific nucleotide compositions that has the same prediction accuracy as the LASSO-based linear model. The weights in our new model share the same general trend as those in the previous model, but include only 25 nonzero weights out of a total of 84, a 54% reduction compared to the previous model. In contrast to the LASSO-based linear model, our model suggests that only a few positions are influential on the efficacy of the siRNA: the 5' and 3' ends and the seed region of siRNA sequences. We also employed sparse logistic regression to build a linear model using dual-position specific nucleotide compositions, a task LASSO is not able to accomplish well due to its high-dimensional nature. Our results demonstrate the superiority of sparse logistic regression over LASSO as a technique for both feature selection and regression in the context of siRNA design.
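A minimal sketch of sparse (L1-regularised) logistic regression fitted by proximal gradient descent (ISTA), with a hypothetical two-feature toy set rather than siRNA data; the soft-thresholding step is what drives uninformative weights to exactly zero:

```python
import math

def soft_threshold(v, t):
    """Soft-thresholding operator: the proximal map of the L1 penalty."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def sparse_logistic(X, y, lam=0.1, lr=0.1, iters=5000):
    """L1-regularised logistic regression fitted by proximal gradient
    (ISTA); the intercept w[0] is left unpenalised."""
    n, k = len(X), len(X[0])
    w = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j in range(k):
                grad[j] += (p - yi) * xi[j] / n
        w = [wj - lr * g for wj, g in zip(w, grad)]
        w = [w[0]] + [soft_threshold(wj, lr * lam) for wj in w[1:]]
    return w

# Hypothetical toy data: column 1 predicts the label, column 2 is noise
X = [[1.0, 2.0, 0.1], [1.0, 1.5, -0.2], [1.0, 2.5, -0.1],
     [1.0, -1.8, 0.0], [1.0, -2.2, 0.3], [1.0, -1.5, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w = sparse_logistic(X, y)
print([round(v, 3) for v in w])
```

The informative weight survives the penalty while the noise weight is shrunk to (near) zero, which is the feature-selection behaviour the abstract exploits.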
Lemieux, Sébastien
2006-08-25
The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes at a 10% false-positive rate. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
Linear signal noise summer accurately determines and controls S/N ratio
NASA Technical Reports Server (NTRS)
Sundry, J. L.
1966-01-01
Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking
ERIC Educational Resources Information Center
Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas
2012-01-01
This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses an urgent topic: ensuring the quality of technical products during the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear-size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the maximum deviation corresponding to the limit of the element material, EI being the lower tolerance deviation for the sizes of internal elements (holes) and es the upper tolerance deviation for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of shafts and holes and that determine the type of fit.
2013-01-01
Background This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat-free mass (FFM) of the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model, and to compare its predictive accuracy with that of a linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to FFMDXA. Measurements from an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The significant predictors were impedance, gender, age, height and weight in the developed FFMLR linear model (LR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The same predictors were set as the variables of the input layer, using five neurons, in the BP-ANN model (r2 = 0.987 with SD = 1.192 kg and a relatively lower RMSE = 1.183 kg), which had greater (improved) accuracy for estimating FFM when compared with the linear model. Better agreement existed between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When comparing the performance of the developed prediction equations for estimating the reference FFMDXA, the linear model had a lower r2 with a larger SD in predictive results than the BP-ANN model, which indicates that the ANN model is more suitable for estimating FFM. PMID:23388042
Detection of mental stress due to oral academic examination via ultra-short-term HRV analysis.
Castaldo, R; Xu, W; Melillo, P; Pecchia, L; Santamaria, L; James, C
2016-08-01
Mental stress may cause cognitive dysfunctions, cardiovascular disorders and depression. Mental stress detection via short-term Heart Rate Variability (HRV) analysis has been widely explored in recent years, while ultra-short-term (less than 5 minutes) HRV analysis has not. This study aims to detect mental stress using linear and non-linear HRV features extracted from 3-minute ECG excerpts recorded from 42 university students, during an oral examination (stress) and at rest after a vacation. HRV features were extracted and analyzed according to the literature using validated software tools. Statistical and data mining analyses were then performed on the extracted HRV features. The best performing machine learning method was the C4.5 tree algorithm, which discriminated between stress and rest with sensitivity, specificity and accuracy rates of 78%, 80% and 79%, respectively.
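Standard linear time-domain HRV features of the kind used in such studies can be computed directly from an RR-interval series; a stdlib sketch with a simulated excerpt (not the study's data):

```python
import math

def hrv_time_domain(rr_ms):
    """Common linear time-domain HRV features from RR intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of all RR intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    # RMSSD: root mean square of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: percentage of successive differences exceeding 50 ms
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"meanRR": mean_rr, "SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Simulated RR intervals (ms), standing in for a 3-minute ECG excerpt
rr = [800, 810, 790, 820, 760, 830, 800, 785, 815, 795]
feats = hrv_time_domain(rr)
print({k: round(v, 2) for k, v in feats.items()})
```

Features like these would form the input vectors on which a classifier such as the C4.5 decision tree is trained to separate stress from rest recordings.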
A precision isotonic measuring system for isolated tissues.
Mellor, P M
1984-12-01
An isotonic measuring system is described which utilizes an angular position transducer of the linear differential voltage transformer type. Resistance to corrosion, protection against the ingress of solutions, and ease of mounting and setting up were the mechanical objectives. Accuracy, linearity, and freedom from drift were essential requirements of the electrical specification. A special housing was designed to accommodate the transducer to overcome these problems. A control unit incorporating a power supply and electronic filtering components was made to serve up to four such transducers. The transducer output voltage is sufficiently high to drive directly even low sensitivity chart recorders. Constructional details and a circuit diagram are included. Fifty such transducers have been in use for up to four years in these laboratories. Examples of some of the published work done using this transducer system are referenced.
Verification of spectrophotometric method for nitrate analysis in water samples
NASA Astrophysics Data System (ADS)
Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu
2017-12-01
The aim of this research was to verify the spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression equation was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a confidence level of 99%. Accuracy was determined through the recovery value, which was 109.1907%. Precision was assessed as the percent relative standard deviation (%RSD) of repeatability, which was 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
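Two of the verification parameters, accuracy as percent recovery and precision as %RSD, reduce to one-line formulas; a sketch with hypothetical repeatability data (not the study's measurements):

```python
import math

def recovery_percent(measured_mean, spiked):
    """Accuracy expressed as percent recovery of a known concentration."""
    return 100.0 * measured_mean / spiked

def rsd_percent(values):
    """Precision expressed as percent relative standard deviation."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return 100.0 * sd / m

# Hypothetical repeatability data for a 10 mg/L nitrate standard
reps = [10.9, 11.0, 10.8, 11.1, 10.85]
rec = recovery_percent(sum(reps) / len(reps), 10.0)
print(round(rec, 2), round(rsd_percent(reps), 2))
```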
NASA Astrophysics Data System (ADS)
Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman
2017-07-01
This paper presents a numerical approximation to a non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist for three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) are used here to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
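As a much-reduced illustration of the finite-difference approach, here is an explicit (FTCS) scheme for the one-dimensional analogue, Fisher's equation u_t = D·u_xx + r·u·(1 − u); the paper's 3-D ADI and fourth-order Douglas schemes are implicit and considerably more elaborate, so this is only a sketch of the idea:

```python
def fisher_ftcs(nx=41, nt=200, dx=0.5, dt=0.05, D=1.0, r=1.0):
    """Explicit FTCS scheme for u_t = D*u_xx + r*u*(1 - u) in 1-D.
    Stability needs D*dt/dx**2 <= 0.5; here it is 0.2."""
    u = [1.0 if i < nx // 4 else 0.0 for i in range(nx)]  # step initial data
    for _ in range(nt):
        un = u[:]
        for i in range(1, nx - 1):
            lap = (un[i + 1] - 2.0 * un[i] + un[i - 1]) / dx ** 2
            u[i] = un[i] + dt * (D * lap + r * un[i] * (1.0 - un[i]))
        u[0], u[-1] = 1.0, 0.0  # Dirichlet boundary conditions
    return u

u = fisher_ftcs()
print(round(u[len(u) // 2], 4))
```

With these parameters the scheme keeps the solution within the physically meaningful range [0, 1], which is one of the properties one checks when comparing schemes for reaction-diffusion problems.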
Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme
2010-01-01
The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force - velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force - velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p<0.001) but largely different (Bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow estimation of 1 RM from the force-velocity relationship. These estimations are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in the monitoring of training-induced adaptations. PMID:24149641
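The load-velocity extrapolation behind such 1RM estimates, and the Bland-Altman limits of agreement used to judge them, can be sketched as follows; the trial data and the minimal-velocity threshold (0.17 m/s) are assumptions for illustration, not values from the study:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope  # (intercept, slope)

def estimate_1rm(loads, velocities, v_min=0.17):
    """Extrapolate the fitted load-velocity line to a minimal velocity
    v_min; 0.17 m/s is an assumed bench-press threshold, not a value
    taken from the study."""
    a, b = linear_fit(loads, velocities)
    return (v_min - a) / b

def limits_of_agreement(actual, estimated):
    """Bland-Altman bias and 95% limits of agreement."""
    d = [e - a for a, e in zip(actual, estimated)]
    bias = sum(d) / len(d)
    sd = math.sqrt(sum((x - bias) ** 2 for x in d) / (len(d) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical submaximal trials: load (kg) vs mean lifting velocity (m/s)
loads = [20, 30, 40, 50]
vels = [1.10, 0.85, 0.60, 0.35]
print(round(estimate_1rm(loads, vels), 1))
```

The ±11.2 kg limits reported in the abstract are exactly the `bias ± 1.96·SD` quantity computed by `limits_of_agreement`, applied to actual versus estimated 1RM across subjects.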
Semenova, Vera A.; Steward-Clark, Evelene; Maniatis, Panagiotis; Epperson, Monica; Sabnis, Amit; Schiffer, Jarad
2017-01-01
To improve surge testing capability for a response to a release of Bacillus anthracis, the CDC anti-Protective Antigen (PA) IgG Enzyme-Linked Immunosorbent Assay (ELISA) was re-designed into a high throughput screening format. The following assay performance parameters were evaluated: goodness of fit (measured as the mean reference standard r2), accuracy (measured as percent error), precision (measured as coefficient of variance (CV)), lower limit of detection (LLOD), lower limit of quantification (LLOQ), dilutional linearity, diagnostic sensitivity (DSN) and diagnostic specificity (DSP). The paired sets of data for each sample were evaluated by Concordance Correlation Coefficient (CCC) analysis. The goodness of fit was 0.999; percent error between the expected and observed concentration for each sample ranged from −4.6% to 14.4%. The coefficient of variance ranged from 9.0% to 21.2%. The assay LLOQ was 2.6 μg/mL. The regression analysis results for dilutional linearity data were r2 = 0.952, slope = 1.02 and intercept = −0.03. CCC between assays was 0.974 for the median concentration of serum samples. The accuracy and precision components of CCC were 0.997 and 0.977, respectively. This high throughput screening assay is precise, accurate, sensitive and specific. Anti-PA IgG concentrations determined using two different assays showed high levels of agreement. The method will improve surge testing capability 18-fold from 4 to 72 sera per assay plate. PMID:27814939
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO(2) ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning is required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju
2015-09-01
Impression materials often become contaminated with infectious agents during impression procedures. Hence, disinfection of impression materials with various disinfectants is advised to protect the dental team. Disinfection, however, can alter the dimensional accuracy of impression materials. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfection procedures: autoclave, chemical, and microwave methods. The impression materials used for the study were Dentsply Aquasil (addition silicone polyvinylsiloxane, syringe and putty), Zetaplus (condensation silicone, putty and light body), and Impregum Penta Soft (polyether). All impressions were made according to the manufacturers' instructions. Dimensional changes were measured before and after the different disinfection procedures. Dentsply Aquasil showed the smallest dimensional change (-0.0046%) and Impregum Penta Soft the highest linear dimensional change (-0.026%). All the tested elastomeric impression materials showed some degree of dimensional change. The present study showed that all the disinfection procedures produce minor dimensional changes in impression materials, which were, however, within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization.
Validation assessment of shoreline extraction on medium resolution satellite image
NASA Astrophysics Data System (ADS)
Manaf, Syaifulnizam Abd; Mustapha, Norwati; Sulaiman, Md Nasir; Husin, Nor Azura; Shafri, Helmi Zulhaidi Mohd
2017-10-01
Monitoring coastal zones provides information about their condition, such as erosion or accretion, and monitoring shorelines helps measure the severity of such conditions. This measurement can be performed accurately using Earth observation satellite images rather than traditional ground surveys. To date, shorelines can be extracted from satellite images with a high degree of accuracy by using satellite image classification techniques based on machine learning to identify the land and water classes of the shorelines. In this study, the researchers validated the extracted shorelines of 11 classifiers against a reference shoreline provided by the local authority. Specifically, the validation assessment examined the difference between the extracted shorelines and the reference shoreline. The findings showed that the linear SVM was the most effective image classification technique, as evidenced by the lowest mean distance between the extracted shoreline and the reference shoreline. Furthermore, the findings showed that the accuracy of the extracted shoreline was not directly proportional to the accuracy of the image classification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xin, E-mail: xinshih86029@gmail.com; Zhao, Xiangmo; Hui, Fei
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from statistical data can be improved mainly by exchanging a sufficient number of packets, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of the sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
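A generic form of linear weighted fusion, weighting each clock-deviation estimate by the inverse of its error variance, can be sketched as follows (an illustration of the general idea, not the paper's exact scheme):

```python
def fuse(estimates, variances):
    """Linear weighted fusion: each estimate is weighted by its inverse
    error variance; the fused variance is 1 / sum(1/var_i)."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [w / total for w in inv]
    fused = sum(w * e for w, e in zip(weights, estimates))
    return fused, 1.0 / total, weights

# Three pairwise clock-offset estimates (in µs) with differing error variances
est = [12.0, 10.0, 11.0]
var = [4.0, 1.0, 2.0]
offset, fused_var, w = fuse(est, var)
print(round(offset, 3), round(fused_var, 3))
```

The fused variance is never larger than the smallest input variance, which is why fusing several pairwise-broadcast estimates can improve sync accuracy without exchanging extra packets.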
Microbiological assay for the determination of meropenem in pharmaceutical dosage form.
Mendez, Andreas S L; Weisheimer, Vanessa; Oppe, Tércio P; Steppe, Martin; Schapoval, Elfrides E S
2005-04-01
Meropenem is a highly active carbapenem antibiotic used in the treatment of a wide range of serious infections. The present work reports a microbiological assay, applying the cylinder-plate method, for the determination of meropenem in powder for injection. Method validation yielded good results and included linearity, precision, accuracy and specificity. The assay is based on the inhibitory effect of meropenem upon the strain of Micrococcus luteus ATCC 9341 used as the test microorganism. The results of the assay were treated statistically by analysis of variance (ANOVA) and were found to be linear (r=0.9999) in the range of 1.5-6.0 microg ml(-1), precise (intra-assay: R.S.D.=0.29; inter-assay: R.S.D.=0.94) and accurate. A preliminary stability study of meropenem was performed to show that the microbiological assay is specific for the determination of meropenem in the presence of its degradation products. The degraded samples were also analysed by an HPLC method. The proposed method allows the quantitation of meropenem in pharmaceutical dosage forms and can be used for drug analysis in routine quality control.
Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M
2014-09-15
Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results manifest the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the above multivariate calibration models to resolve the UV spectra of the four-component mixtures using an easy and widely available UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.
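Multivariate spectral calibration in its simplest form, classical least squares via Beer's law, can be sketched for a two-component mixture at two wavelengths; the PLS and ANN models above generalize this idea to overlapping spectra and nonlinearity. All absorptivities and concentrations here are hypothetical:

```python
# Classical least-squares calibration sketch: with pure-component
# absorptivities known at two wavelengths, the concentrations in a binary
# mixture follow from Beer's law by solving a 2x2 linear system.

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ [x, y] = [b1, b2] by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Rows: wavelengths; columns: components (hypothetical "MNZ", "SPY")
E = [[0.80, 0.10],
     [0.20, 0.60]]
true_c = (2.0, 1.0)                       # true concentrations
A = [E[0][0] * true_c[0] + E[0][1] * true_c[1],   # simulated absorbances
     E[1][0] * true_c[0] + E[1][1] * true_c[1]]
c_mnz, c_spy = solve_2x2(E[0][0], E[0][1], E[1][0], E[1][1], A[0], A[1])
```

Recovering the known concentrations exactly only works in this noise-free, linear toy setting, which is precisely where PLS and ANN-based models earn their keep.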
NASA Astrophysics Data System (ADS)
Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.
2014-09-01
Multi-site evaluation of IKONOS data for classification of tropical coral reef environments
Andrefouet, S.; Kramer, Philip; Torres-Pulliza, D.; Joyce, K.E.; Hochberg, E.J.; Garza-Perez, R.; Mumby, P.J.; Riegl, Bernhard; Yamano, H.; White, W.H.; Zubia, M.; Brock, J.C.; Phinn, S.R.; Naseer, A.; Hatcher, B.G.; Muller-Karger, F. E.
2003-01-01
Ten IKONOS images of different coral reef sites distributed around the world were processed to assess the potential of 4-m resolution multispectral data for coral reef habitat mapping. Complexity of reef environments, established by field observation, ranged from 3 to 15 classes of benthic habitats containing various combinations of sediments, carbonate pavement, seagrass, algae, and corals in different geomorphologic zones (forereef, lagoon, patch reef, reef flats). Processing included corrections for sea surface roughness and bathymetry, unsupervised or supervised classification, and accuracy assessment based on ground-truth data. IKONOS classification results were compared with classified Landsat 7 imagery for simple to moderate complexity of reef habitats (5-11 classes). For both sensors, overall accuracies of the classifications show a general linear trend of decreasing accuracy with increasing habitat complexity. The IKONOS sensor performed better, with a 15-20% improvement in accuracy compared to Landsat. For IKONOS, overall accuracy was 77% for 4-5 classes, 71% for 7-8 classes, 65% in 9-11 classes, and 53% for more than 13 classes. The Landsat classification accuracy was systematically lower, with an average of 56% for 5-10 classes. Within this general trend, inter-site comparisons and specificities demonstrate the benefits of different approaches. Pre-segmentation of the different geomorphologic zones and depth correction provided different advantages in different environments. Our results help guide scientists and managers in applying IKONOS-class data for coral reef mapping applications. © 2003 Elsevier Inc. All rights reserved.
Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George
2018-01-01
Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The evolution of the level set method is done by applying a mesh-free radial basis function (RBF) method, which removes the limitations of the mesh-based approach. For comparison, the evolution of the variational level set function is also done by a mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominant rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) and 500 mammograms from the Digital Database for Screening Mammography (DDSM) datasets. Proficiency of the algorithm is quantified by sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, while the highest sensitivity of 92.31%, specificity of 98.45%, and accuracy of 96.21% are achieved on the DDSM dataset using the same feature and kernel function. Copyright © 2017 John Wiley & Sons, Ltd.
Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M
2015-05-01
The objectives were to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparametric MRI was acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, the T2-defined peripheral zone (PZ), and the central gland (CG) were superimposed onto slice-matched parametric maps. T2, Apparent Diffusion Coefficient, initial area under the gadolinium curve, vascular parameters (K(trans), Kep, Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). The area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90% specificity, sensitivity was 41% (MRSI voxel resolution) and 59% per lesion. At this specificity, an expert observer achieved 28% and 49% sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
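Reporting sensitivity at a fixed specificity, as done above, can be computed directly from classifier score distributions. A minimal sketch with fabricated scores (not the study's data):

```python
# Sensitivity at a fixed specificity: choose the score threshold as the
# target-specificity quantile of the negative (non-tumour) scores, then
# report the fraction of positive (tumour) scores exceeding it.

def sens_at_spec(scores_pos, scores_neg, target_spec=0.90):
    neg = sorted(scores_neg)
    k = max(int(round(target_spec * len(neg))) - 1, 0)
    threshold = neg[k]                       # ~target_spec of negatives fall at/below this
    hits = sum(1 for s in scores_pos if s > threshold)
    return hits / len(scores_pos)

tumour_scores = [0.95, 0.85, 1.20, 1.10]         # fabricated
normal_scores = [0.1 * i for i in range(1, 11)]  # fabricated: 0.1 ... 1.0
sensitivity = sens_at_spec(tumour_scores, normal_scores)
```

Sweeping the threshold over all quantiles instead of fixing one traces out the full ROC curve.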
Real-time In vivo Diagnosis of Nasopharyngeal Carcinoma Using Rapid Fiber-Optic Raman Spectroscopy.
Lin, Kan; Zheng, Wei; Lim, Chwee Ming; Huang, Zhiwei
2017-01-01
We report the utility of simultaneous fingerprint (FP) (i.e., 800-1800 cm(-1)) and high-wavenumber (HW) (i.e., 2800-3600 cm(-1)) fiber-optic Raman spectroscopy developed for real-time in vivo diagnosis of nasopharyngeal carcinoma (NPC) at endoscopy. A total of 3731 high-quality in vivo FP/HW Raman spectra (normal=1765; cancer=1966) were acquired in real-time from 204 tissue sites (normal=95; cancer=109) of 95 subjects (normal=57; cancer=38) undergoing endoscopic examination. FP/HW Raman spectra differ significantly between normal and cancerous nasopharyngeal tissues, which could be attributed to changes of proteins, lipids, nucleic acids, and the bound water content in NPC. Principal components analysis (PCA) and linear discriminant analysis (LDA) together with leave-one-subject-out cross-validation (LOO-CV) were implemented to develop robust Raman diagnostic models. The simultaneous FP/HW Raman spectroscopy technique together with PCA-LDA and LOO-CV modeling provides a diagnostic accuracy of 93.1% (sensitivity of 93.6%; specificity of 92.6%) for nasopharyngeal cancer identification, which is superior to using either the FP (accuracy of 89.2%; sensitivity of 89.9%; specificity of 88.4%) or HW (accuracy of 89.7%; sensitivity of 89.0%; specificity of 90.5%) Raman technique alone. Further receiver operating characteristic (ROC) analysis reconfirms the best performance of the simultaneous FP/HW Raman technique for in vivo diagnosis of NPC. This work demonstrates for the first time that the simultaneous FP/HW fiber-optic Raman spectroscopy technique has great promise for enhancing real-time in vivo cancer diagnosis in the nasopharynx during endoscopic examination.
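The leave-one-out protocol used above can be sketched generically. Here a nearest-class-mean rule stands in for the full PCA-LDA model, and the one-dimensional "spectral scores" are fabricated for illustration:

```python
# Leave-one-out cross-validation sketch: each sample is held out in turn,
# the classifier is fit on the remainder, and the held-out sample is scored.

def nearest_mean_predict(x, train):
    """Assign x to the class whose training-set mean is closest."""
    labels = set(lab for _, lab in train)
    means = {lab: sum(v for v, l in train if l == lab) /
                  sum(1 for _, l in train if l == lab) for lab in labels}
    return min(means, key=lambda lab: abs(x - means[lab]))

def loo_accuracy(data):
    correct = 0
    for i, (x, label) in enumerate(data):
        train = data[:i] + data[i + 1:]          # hold one sample out
        if nearest_mean_predict(x, train) == label:
            correct += 1
    return correct / len(data)

samples = [(0.20, "normal"), (0.30, "normal"), (0.25, "normal"),
           (0.80, "cancer"), (0.90, "cancer"), (0.85, "cancer")]
accuracy = loo_accuracy(samples)
```

Holding out per subject rather than per spectrum, as in the study, avoids optimistic bias from correlated spectra of the same patient.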
Direct Linear Transformation Method for Three-Dimensional Cinematography
ERIC Educational Resources Information Center
Shapiro, Robert
1978-01-01
The ability of Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)
Accuracy of visual inspection performed by community health workers in cervical cancer screening.
Driscoll, Susan D; Tappen, Ruth M; Newman, David; Voege-Harvey, Kathi
2018-05-22
Cervical cancer remains a leading cause of cancer and mortality in low-resource areas with healthcare personnel shortages. Visual inspection is a low-resource alternative method of cervical cancer screening in areas with limited access to healthcare. To assess the accuracy of visual inspection performed by community health workers (CHWs) and licensed providers, and the effect of provider training on visual inspection accuracy, five databases and four websites were queried for studies published in English up to December 31, 2015. Derivations of "cervical cancer screening" and "visual inspection" were used as search terms. Visual inspection screening studies with provider definitions, colposcopy reference standards, and accuracy data were included. A priori variables were extracted by two independent reviewers. Bivariate linear mixed-effects models were used to compare visual inspection accuracy. Provider type was a significant predictor of visual inspection sensitivity (P=0.048); sensitivity was 15 percentage points higher among CHWs than physicians (P=0.014). Components of provider training were significant predictors of sensitivity and specificity. Community-based visual inspection programs using adequately trained CHWs could reduce barriers and expand access to screening, thereby decreasing cervical cancer incidence and mortality for women at highest risk and those living in remote areas with limited access to healthcare personnel. This article is protected by copyright. All rights reserved.
Mandibular canine: A tool for sex identification in forensic odontology.
Kumawat, Ramniwas M; Dindgire, Sarika L; Gadhari, Mangesh; Khobragade, Pratima G; Kadoo, Priyanka S; Yadav, Pradeep
2017-01-01
The aim of this study was to investigate the accuracy of the mandibular canine index (MCI) and mandibular mesiodistal odontometrics for sex identification in the 17-25-year age group in a central Indian population. The study sample comprised 300 individuals (150 males and 150 females) aged 17 to 25 years from the central Indian population. The maximum mesiodistal diameter of the mandibular canines and the linear distance between the tips of the mandibular canines were measured on the study models using a digital vernier caliper. Overall, sex could be predicted accurately in 79.66% of the population (81.33% of males and 78% of females) by the MCI. When mandibular canine width alone was considered for sex identification, the observed overall accuracy was 75% for the right mandibular canine and 73% for the left. Sexual dimorphism of the canine is population specific, and in the Indian population the MCI and the mesiodistal dimension of the mandibular canine can aid in sex determination.
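The MCI itself is a simple ratio, the mesiodistal canine width divided by the inter-canine distance, compared against a population-specific cutoff. A minimal sketch; the cutoff and measurements below are hypothetical, not the study's values:

```python
# Mandibular canine index (MCI) sketch: width / inter-canine distance,
# with a population-specific cutoff separating predicted male from female.

def mandibular_canine_index(canine_width_mm, intercanine_distance_mm):
    return canine_width_mm / intercanine_distance_mm

def predict_sex(mci, cutoff=0.257):      # hypothetical cutoff value
    return "male" if mci > cutoff else "female"

mci_a = mandibular_canine_index(7.2, 26.0)   # wider canine (hypothetical)
mci_b = mandibular_canine_index(6.4, 26.5)   # narrower canine (hypothetical)
```

Because the cutoff is derived per population, applying a cutoff from one population to another is exactly the kind of error the abstract's closing sentence warns against.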
Patil, Suyog S; Srivastava, Ashwini K
2013-01-01
A simple, precise, and rapid RPLC method has been developed without incorporation of any ion-pair reagent for the simultaneous determination of vitamin C (C) and seven B-complex vitamins, viz, thiamine hydrochloride (B1), pyridoxine hydrochloride (B6), nicotinamide (B3), cyanocobalamine (B12), folic acid, riboflavin (B2), and 4-aminobenzoic acid (Bx). Separations were achieved within 12.0 min at 30 degrees C by gradient elution on an RP C18 column using a mobile phase consisting of a mixture of 15 mM ammonium formate buffer and 0.1% triethylamine adjusted to pH 4.0 with formic acid and acetonitrile. Simultaneous UV detection was performed at 275 and 360 nm. The method was validated for system suitability, LOD, LOQ, linearity, precision, accuracy, specificity, and robustness in accordance with International Conference on Harmonization guidelines. The developed method was implemented successfully for determination of the aforementioned vitamins in pharmaceutical formulations containing an individual vitamin, in their multivitamin combinations, and in human urine samples. The calibration curves for all analytes showed good linearity, with coefficients of correlation higher than 0.9998. Accuracy, intraday repeatability (n = 6), and interday repeatability (n = 7) were found to be satisfactory.
Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo
2013-01-01
A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was based on a C18 analytical column using a mobile phase consisting of ethanol : phosphoric acid solution (pH = 2.5) (35 : 65, v/v). The column temperature was set at 50°C and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear in the diltiazem concentration range of 0.5–50 μg/mL (r2 = 0.9996). Precision was evaluated by replicate analysis, in which % relative standard deviation (RSD) values for areas were found to be below 2.0. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty (5.63%) of the method was also estimated from method validation data. Accordingly, the proposed validated and sustainable procedure was proved suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778
Geometric accuracy of Landsat-4 and Landsat-5 Thematic Mapper images.
Borgeson, W.T.; Batson, R.M.; Kieffer, H.H.
1985-01-01
The geometric accuracy of the Landsat Thematic Mappers was assessed by a linear least-square comparison of the positions of conspicuous ground features in digital images with their geographic locations as determined from 1:24 000-scale maps. For a Landsat-5 image, the single-dimension standard deviations of the standard digital product, and of this image with additional linear corrections, are 11.2 and 10.3 m, respectively (0.4 pixel). An F-test showed that skew and affine distortion corrections are not significant. At this level of accuracy, the granularity of the digital image and the probable inaccuracy of the 1:24 000 maps began to affect the precision of the comparison. The tested image, even with a moderate accuracy loss in the digital-to-graphic conversion, meets National Horizontal Map Accuracy standards for scales of 1:100 000 and smaller. Two Landsat-4 images, obtained with the Multispectral Scanner on and off, and processed by an interim software system, contain significant skew and affine distortions.
Johansson, Magnus; Zhang, Jingji; Ehrenberg, Måns
2012-01-03
Rapid and accurate translation of the genetic code into protein is fundamental to life. Yet due to lack of a suitable assay, little is known about the accuracy-determining parameters and their correlation with translational speed. Here, we develop such an assay, based on Mg(2+) concentration changes, to determine maximal accuracy limits for a complete set of single-mismatch codon-anticodon interactions. We found a simple, linear trade-off between efficiency of cognate codon reading and accuracy of tRNA selection. The maximal accuracy was highest for the second codon position and lowest for the third. The results rationalize the existence of proofreading in code reading and have implications for the understanding of tRNA modifications, as well as of translation error-modulating ribosomal mutations and antibiotics. Finally, the results bridge the gap between in vivo and in vitro translation and allow us to calibrate our test tube conditions to represent the environment inside the living cell.
Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.
2011-01-01
The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
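The two surrogate formulas compared above are standard and can be computed directly. Example values are illustrative only, with fasting glucose in mg/dL and fasting insulin in μU/mL (conventional units):

```python
import math

# Standard surrogate indexes of insulin sensitivity/resistance.

def homa_ir(glucose, insulin):
    """HOMA-IR = (G0 x I0) / 405 for glucose in mg/dL, insulin in microU/mL."""
    return glucose * insulin / 405.0

def quicki(glucose, insulin):
    """QUICKI = 1 / (log10(I0) + log10(G0)), same conventional units."""
    return 1.0 / (math.log10(insulin) + math.log10(glucose))

h = homa_ir(90, 9)      # hypothetical fasting values
q = quicki(90, 9)
```

Log HOMA, used in the study, is simply the logarithm of the HOMA-IR value; like QUICKI's built-in log transform, it tames the skewed distribution of raw HOMA.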
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
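The Gaussian kernel at the heart of the RKHS models can be sketched directly over marker genotype vectors; the genotype codes and bandwidth below are hypothetical stand-ins, not the study's data:

```python
import math

# Gaussian (RBF) kernel over marker vectors -- the nonlinear alternative
# to the linear GBLUP kernel. The bandwidth h controls how fast genetic
# similarity decays with marker distance.

def gaussian_kernel(xi, xj, h=1.0):
    """K(xi, xj) = exp(-||xi - xj||^2 / h)."""
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-d2 / h)

# Toy genotype codes (0/1/2 allele counts) for three lines
markers = [[0, 1, 2, 0], [0, 1, 1, 0], [2, 2, 0, 1]]
K = [[gaussian_kernel(x, y, h=4.0) for y in markers] for x in markers]
```

Kernel averaging (KA) evaluates this over several bandwidths and weights the resulting kernels, while the EB variant estimates a single bandwidth from the data.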
NASA Technical Reports Server (NTRS)
Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc
2014-01-01
In the frame of ESA's Solar Orbiter scientific mission, Almatech was selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. In order to guarantee the optical cleanliness level while fulfilling stringent positioning accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement was selected. The four different slits to be used for the SPICE instrument result in a total stroke of 16.5 mm in this linear slit changer arrangement. The combination of long stroke and high-precision positioning requirements was identified as the main design challenge, to be validated through breadboard model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results have demonstrated the full adequacy of the flexible blade guiding system implemented in SPICE's Slit Change Mechanism in a stand-alone configuration. Further breadboard test results, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, have shown significant enhancements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including detailed tolerance analyses, has shown the suitability of this satellite roller screw based mechanism for the actuation of the tested flexible guiding system and compliant connection.
The presented development and preliminary testing of the high-precision long-stroke Slit Change Mechanism for the SPICE Instrument are considered fully successful such that future tests considering the full Slit Change Mechanism can be performed, with the gained confidence, directly on a Qualification Model. The selected linear Slit Change Mechanism design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other special applications where contamination and high precision positioning are dominant design drivers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron
Moment-based acceleration via the development of "high-order, low-order" (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton-Krylov procedure. This also prevents the appearance of non-physical maximum principle violations ("temperature spikes") associated with linearization. Efficiency enhancements are obtained in part by removing "effective scattering" from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.
Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen-vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2 and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. The key analog device properties that degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. As a result, this suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.
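The update nonlinearity discussed above can be pictured with a toy saturating-update model (not the paper's measured device behavior): each potentiation pulse moves the conductance by a fixed fraction of the remaining range, so steps shrink as the device saturates. All parameters are illustrative:

```python
# Toy nonlinear conductance-update model: the per-pulse conductance change
# is proportional to the remaining headroom (g_max - g), so identical
# programming pulses produce progressively smaller weight updates.

def potentiate(g, g_max=1.0, step_fraction=0.1):
    return g + step_fraction * (g_max - g)

g = 0.0
deltas = []                     # per-pulse conductance changes
for _ in range(5):
    g_next = potentiate(g)
    deltas.append(g_next - g)
    g = g_next
```

A backpropagation trainer that assumes a constant step per pulse therefore systematically mis-sizes weight updates near the ends of the conductance range, which is one way this nonlinearity degrades training accuracy.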
Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.; ...
2017-12-01
Accuracy of analytic energy level formulas applied to hadronic spectroscopy of heavy mesons
NASA Technical Reports Server (NTRS)
Badavi, Forooz F.; Norbury, John W.; Wilson, John W.; Townsend, Lawrence W.
1988-01-01
Linear and harmonic potential models are used in the nonrelativistic Schroedinger equation to obtain particle mass spectra for mesons as bound states of quarks. The main emphasis is on the linear potential, for which exact solutions of the S-state eigenvalues and eigenfunctions and the asymptotic solution for the higher order partial waves are obtained. A study of the accuracy of two analytical energy level formulas as applied to heavy mesons is also included. Cornwall's formula is found to be particularly accurate and useful as a predictor of heavy quarkonium states. Exact solutions for all partial waves of the eigenvalues and eigenfunctions for a harmonic potential are also obtained and compared with the calculated discrete spectra of the linear potential. Detailed derivations of the eigenvalues and eigenfunctions of the linear and harmonic potentials are presented in appendixes.
Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics
Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris
2016-01-01
As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314
Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria
2014-10-09
Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
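The complementary-filtering branch compared above can be sketched in its simplest first-order form: gyro integration is trusted at short time scales and the accelerometer-derived inclination at long time scales. The blending constant and the stationary scenario below are illustrative, not the study's Non-linear observer:

```python
# First-order complementary filter for one inclination angle:
# high-pass the gyro-integrated angle, low-pass the accelerometer angle.

def complementary_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Stationary case: gyro reads zero, accelerometer reads a 10-degree tilt;
# the estimate converges toward the accelerometer value instead of drifting.
angle = 0.0
for _ in range(200):
    angle = complementary_step(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

This convergence during static phases is one intuition for why the study found accuracy improved when movements contained stationary periods: the aiding sensors get clean gravity observations with which to cancel gyro drift.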
Costanzi, Stefano; Skorski, Matthew; Deplano, Alessandro; Habermehl, Brett; Mendoza, Mary; Wang, Keyun; Biederman, Michelle; Dawson, Jessica; Gao, Jia
2016-11-01
With the present work we quantitatively studied the modellability of the inactive state of Class A G protein-coupled receptors (GPCRs). Specifically, we constructed models of one of the Class A GPCRs for which structures solved in the inactive state are available, namely the β2AR, using as templates each of the other class members for which structures solved in the inactive state are also available. Our results showed a detectable linear correlation between model accuracy and model/template sequence identity. This suggests that the likely accuracy of the homology models that can be built for a given receptor can generally be forecast on the basis of the available templates. We also probed whether sequence alignments that allow for the presence of gaps within the transmembrane domains, to account for structural irregularities, afford better models than the classical alignment procedures that do not allow for gaps within such domains. As our results indicated, although the overall differences are very subtle, the inclusion of internal gaps within the transmembrane domains has a noticeable beneficial effect on the local structural accuracy of the domain in question. Copyright © 2016 Elsevier Inc. All rights reserved.
Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru
2018-04-17
We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups for a total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy compared to the AI and SI groups. Intraoral scans may be more accurate than scans of conventional impression/plaster models.
Development of a Bolometer Detector System for the NIST High Accuracy Infrared Spectrophotometer
Zong, Y.; Datla, R. U.
1998-01-01
A bolometer detector system was developed for the high accuracy infrared spectrophotometer at the National Institute of Standards and Technology to provide maximum sensitivity, spatial uniformity, and linearity of response covering the entire infrared spectral range. The spatial response variation was measured to be within 0.1 %. The linearity of the detector output was measured over three decades of input power. After applying a simple correction procedure, the detector output was found to deviate less than 0.2 % from linear behavior over this range. The noise equivalent power (NEP) of the bolometer system was 6 × 10−12 W/√Hz at a frequency of 80 Hz. The detector output 3 dB roll-off frequency was 200 Hz. The detector output was stable to within ± 0.05 % over a 15 min period. These results demonstrate that the bolometer detector system will serve as an excellent detector for the high accuracy infrared spectrophotometer. PMID:28009364
Madarang, Krish J; Kang, Joo-Hyon
2014-06-01
Stormwater runoff has been identified as a source of environmental pollution, especially for receiving waters. To quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has been controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was run for 55 storm events, and the resulting total suspended solids (TSS) discharge loads and event mean concentrations (EMCs) were extracted. From these data, linear regression models were developed. The R(2) and p-values of the regressions of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rao, M.; Vuong, H.
2013-12-01
The overall objective of this study is to develop a method for estimating the total aboveground biomass of redwood stands in Jackson Demonstration State Forest, Mendocino, California, using airborne LiDAR data. Owing to their vertical and horizontal accuracy, LiDAR data are increasingly being used to characterize landscape features, including ground surface elevation and canopy height. These LiDAR-derived metrics, involving structural signatures at higher precision and accuracy, can help better understand ecological processes at various spatial scales. Our study is focused on the two major species of the forest: redwood (Sequoia sempervirens [D.Don] Engl.) and Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco). Specifically, the objectives included fitting, for each species, linear regression models relating tree diameter at breast height (dbh) to LiDAR-derived height. From 23 random points in the study area, field measurements (dbh and tree coordinates) were collected for more than 500 redwood and Douglas-fir trees over 0.2-ha plots. The USFS FUSION software, along with its LiDAR Data Viewer (LDV), was used to extract a Canopy Height Model (CHM) from which tree heights were derived. Based on the LiDAR-derived heights and ground-based dbh, a linear regression model was developed to predict dbh. The predicted dbh was used to estimate biomass at the single-tree level using the Jenkins equation (Jenkins et al. 2003). The linear regression models were able to explain 65% of the variability associated with redwood dbh and 80% of that associated with Douglas-fir dbh.
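The per-species workflow described (regress field dbh on LiDAR-derived height, then feed the predicted dbh into an allometric equation of the Jenkins et al. 2003 form, biomass = exp(β0 + β1 ln dbh)) can be sketched as follows. The β coefficients shown are placeholders of the published form, not the values used in the study, and are purely illustrative:

```python
import numpy as np

def fit_dbh_model(lidar_height, field_dbh):
    """Simple linear regression of field-measured dbh on
    LiDAR-derived tree height; returns slope, intercept and R^2."""
    slope, intercept = np.polyfit(lidar_height, field_dbh, 1)
    pred = slope * lidar_height + intercept
    ss_res = np.sum((field_dbh - pred) ** 2)
    ss_tot = np.sum((field_dbh - field_dbh.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

def jenkins_biomass(dbh_cm, b0=-2.0336, b1=2.2592):
    """Allometry of the Jenkins et al. (2003) form:
    biomass (kg) = exp(b0 + b1 * ln(dbh)); the default coefficients
    are illustrative placeholders for a species group."""
    return np.exp(b0 + b1 * np.log(dbh_cm))
```

Predicted dbh from the regression would be passed straight into `jenkins_biomass` to obtain single-tree biomass, then summed per plot.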
In-vivo detectability index: development and validation of an automated methodology
NASA Astrophysics Data System (ADS)
Smith, Taylor Brunton; Solomon, Justin; Samei, Ehsan
2017-03-01
The purpose of this study was to develop and validate a method to estimate patient-specific detectability indices directly from patients' CT images (i.e., "in vivo"). The method works by automatically extracting noise (NPS) and resolution (MTF) properties from each patient's CT series based on previously validated techniques. Patient images are thresholded at skin-air interfaces to form edge-spread functions, which are further binned, differentiated, and Fourier transformed to form the MTF. The NPS is likewise estimated from uniform areas of the image. These are combined with assumed task functions (reference function: 10 mm disk lesion with contrast of -15 HU) to compute detectability indices for a non-prewhitening matched-filter model observer predicting observer performance. The results were compared to those from a previous human detection study on 105 subtle, hypo-attenuating liver lesions, using a two-alternative forced-choice (2AFC) method, over 6 dose levels using 16 readers. The in vivo detectability indices estimated for all patient images were compared to binary 2AFC outcomes with a generalized linear mixed-effects statistical model (probit link function, linear terms only, no interactions, random term for readers). The model showed that the in vivo detectability indices were strongly predictive of 2AFC outcomes (P < 0.05). A linear comparison between human detection accuracy and model-predicted detection accuracy (for like conditions) resulted in Pearson and Spearman correlation coefficients of 0.86 and 0.87, respectively. These data provide evidence that the in vivo detectability index could potentially be used to automatically estimate and track image quality in a clinical operation.
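The core computation, combining the measured MTF and NPS with a task function into a non-prewhitening (NPW) detectability index, reduces to two radial frequency integrals. A numpy sketch under simplifying assumptions (radially symmetric 1D curves on a uniform grid; the Gaussian task function in the test is a stand-in for the disk task, not the study's exact template):

```python
import numpy as np

def npw_detectability(f, task, mtf, nps):
    """d' for a non-prewhitening matched-filter model observer:
      d'^2 = [2*pi * int |W(f)*MTF(f)|^2 f df]^2
             / (2*pi * int |W(f)*MTF(f)|^2 NPS(f) f df)
    where f is radial spatial frequency on a uniform grid."""
    df = f[1] - f[0]
    s2 = (task * mtf) ** 2                    # expected-signal power spectrum
    num = (2.0 * np.pi * np.sum(s2 * f) * df) ** 2
    den = 2.0 * np.pi * np.sum(s2 * nps * f) * df
    return np.sqrt(num / den)
```

Two sanity properties follow directly from the formula: doubling the noise power divides d' by √2, and doubling the lesion contrast doubles d'.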
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend this rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
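The PCA baseline that the manifold approach generalizes can be sketched in a few lines: build a linear subspace from the state history via SVD, extrapolate each subspace coefficient (here with a simple linear fit, an assumption for illustration; the paper's predictor on the reduced coordinates may differ), and reconstruct the future high-dimensional state.

```python
import numpy as np

def pca_predict(history, lookahead, n_comp=3):
    """Sketch of subspace prediction: PCA of the (samples x dims)
    state history, linear extrapolation of each principal coefficient,
    then reconstruction in the original state space."""
    mean = history.mean(axis=0)
    X = history - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_comp]                    # (n_comp, dims) principal directions
    coeffs = X @ basis.T                   # (samples, n_comp) coordinates
    t = np.arange(len(history), dtype=float)
    t_pred = len(history) - 1 + lookahead  # future time index
    pred_c = np.array([np.polyval(np.polyfit(t, c, 1), t_pred)
                       for c in coeffs.T])
    return mean + pred_c @ basis
```

The kernel-PCA method in the paper replaces the linear basis with a feature-space manifold and adds a fixed-point pre-image step to map the predicted feature back to the state space.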
NASA Astrophysics Data System (ADS)
Daran-Daneau, Cyril
In order to meet the energy needs of the future, insulation, which is the central piece of high-voltage equipment, has to be reinvented. Nanodielectrics seem to promise a major technological breakthrough. Based on nanocomposites with a linear low-density polyethylene matrix reinforced by nano-clays and manufactured from a commercial masterbatch, the present thesis aims to characterise the accuracy of measurement techniques applied to nanodielectrics, as well as the dielectric properties of these materials. Thus, dielectric spectroscopy accuracy in both the frequency and time domains is analysed, with specific emphasis on the impact of gold sputtering of the samples and on the transposition of measurements from the time domain to the frequency domain. Also, when measuring dielectric strength, the significant role of the surrounding medium and of sample thickness in the variation of the alpha scale factor is shown and analysed in relation to the presence of surface partial discharges. Taking these limits into account, and for different nanoparticle compositions, complex permittivity as a function of frequency, and linearity and conductivity as a function of applied electric field, are studied with respect to the role that nanometric interfaces appear to play. Similarly, the variation of dielectric strength as a function of nano-clay content is investigated with respect to the improvement in partial-discharge resistance that appears to be induced by nanoparticle addition. Finally, an opening towards the nanostructuration of underground cable insulation is proposed, considering on the one hand the dielectric characterisation of polyethylene-matrix nanodielectrics reinforced by nano-clays or nano-silica, and on the other hand a succinct cost analysis. Keywords: nanodielectric, linear low density polyethylene, nanoclays, dielectric spectroscopy, dielectric breakdown
Characterization of atherosclerotic plaques by cross-polarization optical coherence tomography
NASA Astrophysics Data System (ADS)
Gubarkova, Ekaterina V.; Dudenkova, Varvara V.; Feldchtein, Felix I.; Timofeeva, Lidia B.; Kiseleva, Elena B.; Kuznetsov, Sergei S.; Moiseev, Alexander A.; Gelikonov, Gregory V.; Vitkin, Alex I.; Gladkova, Natalia D.
2016-02-01
We combined cross-polarization optical coherence tomography (CP OCT) and non-linear microscopy based on second harmonic generation (SHG) and two-photon-excited fluorescence (2PEF) to assess collagen and elastin fibers in the development of the atherosclerotic plaque (AP). The study shows the potential of CP OCT for assessing the condition of collagen and elastin fibers in atherosclerotic arteries. Specifically, the additional information afforded by CP OCT, related to the birefringence and cross-scattering properties of arterial tissues, may improve the robustness and accuracy of assessments of the microstructure and composition of the plaque at different stages of atherosclerosis.
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
Stability and complexity of small random linear systems
NASA Astrophysics Data System (ADS)
Hastings, Harold
2010-03-01
We explore the stability of small random linear systems, typically involving 10-20 variables, motivated by the dynamics of the world trade network and the US and Canadian power grids.
NASA Astrophysics Data System (ADS)
Ivanova, B. B.; Simeonov, V. D.; Arnaudov, M. G.; Tsalev, D. L.
2007-05-01
A validation of a newly developed method for orienting solid samples as suspensions in a nematic liquid crystal (NLC), applied in linear-dichroic infrared (IR-LD) spectroscopy, has been carried out using DL-isoleucine (DL-isoleu) as a model system. The accuracy, precision and influence of the liquid crystal medium on the peak positions and integral absorbances of guest molecules are presented, and the experimental conditions have been optimized. An experimental design was used to quantitatively evaluate the impact of four input factors on the spectroscopic signal at five different frequencies indicating important specificities of the system: the number of scans, the rubbing-out of the KBr pellets, the amount of the studied compounds included in the liquid crystal medium, and the ratio of Lorentzian to Gaussian peak functions in the curve-fitting procedure.
Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.
Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon
2015-05-01
The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to establish them as a contributing factor to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance-ratio method used to detect the apical constriction (APC) in most EALs was evaluated by fitting a linear ramp function. Changes in impedance ratios at various frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order equations relating log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was therefore located within this linear interval, and the ratio between electrical impedance measurements at different frequencies is a robust method for detecting the APC.
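The linear ramp model used to locate the transition interval can be sketched with a simple breakpoint grid search. This is an illustrative sketch, not the paper's fitting procedure: the two plateau levels are taken as the means of their own points rather than fitted jointly, and the breakpoints are restricted to the measured depths.

```python
import numpy as np

def ramp(x, x1, x2, y_left, y_right):
    """Piecewise model: left plateau, linear segment, right plateau."""
    return np.where(x <= x1, y_left,
           np.where(x >= x2, y_right,
                    y_left + (y_right - y_left) * (x - x1) / (x2 - x1)))

def fit_linear_ramp(x, y):
    """Grid-search both breakpoints, estimate each plateau from its
    own points, and keep the combination with the smallest SSE."""
    best_sse, best = np.inf, None
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            yl = y[x <= x[i]].mean()
            yr = y[x >= x[j]].mean()
            sse = np.sum((y - ramp(x, x[i], x[j], yl, yr)) ** 2)
            if sse < best_sse:
                best_sse, best = sse, (x[i], x[j], yl, yr)
    return best
```

The interval (x1, x2) returned by the fit is the linear zone within which the APC would be located under the ramp model.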
NASA Astrophysics Data System (ADS)
Carroll, Lewis
2014-02-01
We are developing a new dose calibrator for nuclear pharmacies that can measure radioactivity in a vial or syringe without handling it directly or removing it from its transport shield “pig”. The calibrator's detector comprises twin opposing scintillating crystals coupled to Si photodiodes and current-amplifying trans-resistance amplifiers. Such a scheme is inherently linear with respect to dose rate over a wide range of radiation intensities, but accuracy at low activity levels may be impaired, beyond the effects of meager photon statistics, by baseline fluctuation and drift inevitably present in high-gain, current-mode photodiode amplifiers. The work described here is motivated by our desire to enhance accuracy at low excitations while maintaining linearity at high excitations. Thus, we are also evaluating a novel “pulse-mode” analog signal processing scheme that employs a linear threshold discriminator to virtually eliminate baseline fluctuation and drift. We will show the results of a side-by-side comparison of current-mode versus pulse-mode signal processing schemes, including perturbing factors affecting linearity and accuracy at very low and very high excitations. Bench testing over a wide range of excitations is done using a Poisson random pulse generator plus an LED light source to simulate excitations up to ~10⁶ detected counts per second without the need to handle and store large amounts of radioactive material.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
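The alternation the abstract describes (weighted-Jacobi sweeps, with an Anderson extrapolation step applied every p-th iteration over a short history) can be sketched as follows. This is a minimal dense-matrix illustration of the idea, not the authors' implementation; the parameter values `omega`, `m`, and `p` are assumed defaults.

```python
import numpy as np

def aaj_solve(A, b, omega=2/3, m=4, p=6, tol=1e-10, max_iter=5000):
    """Minimal sketch of the Alternating Anderson-Jacobi (AAJ) idea:
    weighted-Jacobi sweeps, with an Anderson extrapolation over the
    last m iterate/residual pairs applied every p-th iteration."""
    x = np.zeros(len(b))
    d_inv = 1.0 / np.diag(A)
    xs, fs = [], []                        # iterate / residual history
    for k in range(max_iter):
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
        f = d_inv * (b - A @ x)            # Jacobi-preconditioned residual
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]
        if (k + 1) % p == 0 and len(fs) > 1:
            # Anderson step: least-squares mixing of recent residuals
            dF = np.diff(np.array(fs), axis=0).T
            dX = np.diff(np.array(xs), axis=0).T
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = x + omega * f - (dX + omega * dF) @ gamma
        else:
            x = x + omega * f              # plain weighted Jacobi sweep
    return x, max_iter
```

On a strictly diagonally dominant system the plain sweeps alone converge; the periodic Anderson step is what supplies the acceleration the paper quantifies.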
Kumar, Puspendra; Jha, Shivesh; Naved, Tanveer
2013-01-01
A validated, modified lycopodium spore method has been developed for the simple and rapid quantification of powdered herbal drugs. The lycopodium spore method was applied to the ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac and rejuvenator. The diagnostic characters of each ingredient of Shatavaryadi churna were estimated individually. Microscopic determination, counting of identifying numbers, and measurement of the area, length and breadth of identifying characters were performed using a Leica DMLS-2 microscope. The method was validated for intraday precision, linearity, specificity, repeatability, accuracy and system suitability. The method is simple, precise, sensitive and accurate, and can be used for the routine standardisation of raw materials of herbal drugs. It gives the ratio of individual ingredients in the powdered drug, so that any adulteration of the genuine drug with its adulterant can be detected. The method shows very good linearity, with values between 0.988 and 0.999 for the number and the area of identifying characters. The percentage purity of a sample drug can be determined using the linear equation of the standard genuine drug.
Isobaric Reconstruction of the Baryonic Acoustic Oscillation
NASA Astrophysics Data System (ADS)
Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li
2017-06-01
In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero, without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that must be predicted over a short time span to compensate for communication delays or data gaps. Unlike orbit corrections, clock corrections are difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in a sliding-window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in real-time PPP applications. A positive correlation is also found between the prediction accuracy and the short-term stability of the on-board clocks. Compared with the traditional linear model, the accuracy of static PPP using the new model's 2-h clock prediction improves by about 50% in the N, E, and U directions. Furthermore, the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time stream, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation.
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
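The core of the model described above (a linear fit whose residual periodicities are identified by FFT and carried into the extrapolation) can be sketched as follows. This is illustrative only: the paper additionally applies the scheme in a sliding window over a real-time clock stream, and `n_harmonics` is an assumed tuning parameter.

```python
import numpy as np

def predict_clock(t, clk, t_future, n_harmonics=2):
    """Sketch of linear-plus-periodic clock prediction:
    1) fit a linear trend, 2) find the dominant FFT frequencies of the
    residuals, 3) least-squares fit sin/cos terms at those frequencies,
    4) extrapolate trend + periodic terms to t_future."""
    coef = np.polyfit(t, clk, 1)
    resid = clk - np.polyval(coef, t)
    spec = np.fft.rfft(resid)
    freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
    idx = np.argsort(np.abs(spec)[1:])[::-1][:n_harmonics] + 1  # skip DC

    def design(tt):
        cols = [np.ones_like(tt)]
        for i in idx:
            w = 2.0 * np.pi * freqs[i]
            cols += [np.sin(w * tt), np.cos(w * tt)]
        return np.column_stack(cols)

    amp = np.linalg.lstsq(design(t), resid, rcond=None)[0]
    return np.polyval(coef, t_future) + design(t_future) @ amp
```

On a synthetic clock with a linear trend plus one periodic term, the periodic-aware extrapolation tracks the truth far better than the linear fit alone, which is the improvement the abstract reports.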
Luciferase-Zinc-Finger System for the Rapid Detection of Pathogenic Bacteria.
Shi, Chu; Xu, Qing; Ge, Yue; Jiang, Ling; Huang, He
2017-08-09
Rapid and reliable detection of pathogenic bacteria is crucial for food safety control. Here, we present a novel luciferase-zinc finger system for the detection of pathogens that offers rapid and specific profiling. The system, which uses a zinc-finger protein domain to probe zinc finger recognition sites, was designed to bind the amplified conserved regions of 16S rDNA, and the obtained products were detected using a modified luciferase. The luciferase-zinc finger system not only maintained luciferase activity but also allowed the specific detection of different bacterial species, with a sensitivity as low as 10 copies and a linear range from 10 to 10⁴ copies per microliter of the specific PCR product. Moreover, the system is robust and rapid, enabling the simultaneous detection of 6 species of bacteria in artificially contaminated samples with excellent accuracy. Thus, we envision that our luciferase-zinc finger system will have far-reaching applications.
A Fully Associative, Non-Linear Kinematic, Unified Viscoplastic Model for Titanium Based Matrices
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Castelli, M. G.
1994-01-01
Specific forms for both the Gibbs and complementary dissipation potentials are chosen such that a complete (i.e., fully associative) potential-based multiaxial unified viscoplastic model is obtained. This model possesses one tensorial internal state variable that is associated with dislocation substructure, with an evolutionary law that has nonlinear kinematic hardening and both thermal and strain-induced recovery mechanisms. A unique aspect of the present model is the inclusion of non-linear hardening through the use of a compliance operator, derived from the Gibbs potential, in the evolution law for the back stress. This non-linear tensorial operator is significant in that it allows both the flow and evolutionary laws to be fully associative (and therefore easily integrated) and greatly influences the multiaxial response under non-proportional loading paths. In addition to this nonlinear compliance operator, a new consistent, potential-preserving, internal strain unloading criterion has been introduced to prevent abnormalities in the predicted stress-strain curves, which are present with nonlinear hardening formulations, during unloading and reversed loading of the external variables. An experimental program is specified for the complete determination of the material functions and parameters characterizing a metallic matrix, e.g., TIMETAL 21S; the experiments utilized are tensile, creep, and step creep tests. Finally, a comparison of this model and the commonly used Bodner-Partom model is made on the basis of predictive accuracy and numerical efficiency.
Accuracy of a reformulated fast-set vinyl polysiloxane impression material using dual-arch trays.
Kang, Alex H; Johnson, Glen H; Lepe, Xavier; Wataha, John C
2009-05-01
A common technique used for making crown impressions involves use of a vinyl polysiloxane impression material in combination with a dual-arch tray. A leading dental manufacturer has reformulated its vinyl polysiloxane (VPS) impression line, but the accuracy of the new material has not been verified. The purpose of this study was to assess the accuracy of reformulated VPS impression materials using the single-step dual-arch impression technique. Dual-arch impressions were made on a typodont containing a master stainless steel standard crown preparation die, from which gypsum working dies were formed, recovered, and measured. The impression materials evaluated were Imprint 3 Penta Putty with Quick Step Regular Body (IP-0); Imprint 3 Penta Quick Step Heavy Body with Quick Step Light Body (IP-1); Aquasil Ultra Rigid Fast Set with LV Fast Set (AQ-1); and Aquasil Ultra Heavy Fast Set with XLV Fast Set (AQ-2) (n=10). All impressions were disinfected with CaviCide spray for 10 minutes prior to pouring with type IV gypsum. Buccolingual (BL), mesiodistal (MD), and occlusogingival (OG) dimensions were measured and compared to the master die using an optical measuring microscope. Linear dimensional change was also assessed for IP-0 and AQ-1 at 1 and 24 hours based on ANSI/ADA Specification No. 19. Single-factor ANOVA with Dunnett's T3 multiple comparisons was used to compare BL, MD, and OG changes, with hypothesis testing at alpha=.05. A repeated-measures ANOVA was used to compare linear dimensional changes. There were statistical differences among the 4 impression systems for 3 of 4 dimensions of the master die. IP-0 working dies were significantly larger in MD and OG-L dimensions but significantly smaller in the BL dimension. IP-1 working dies were significantly smaller in the BL dimension compared to the master die. With the exception of IP-0, differences detected were small and clinically insignificant. 
No significant differences were observed for linear dimensional change. The single-step dual-arch impression technique produced working dies that were smaller in 3 of the 4 dimensions measured and may require additional die relief to achieve appropriate fit of cast restorations. Overall accuracy was acceptable for all impression groups with the exception of IP-0.
Land cover mapping in Latvia using hyperspectral airborne and simulated Sentinel-2 data
NASA Astrophysics Data System (ADS)
Jakovels, Dainis; Filipovs, Jevgenijs; Brauns, Agris; Taskovs, Juris; Erins, Gatis
2016-08-01
Land cover mapping in Latvia is performed as part of the Corine Land Cover (CLC) initiative every six years. The advantage of CLC is its standardized nomenclature and mapping protocol, comparable across all European countries, which makes it a valuable information source at the European level. However, low spatial resolution and accuracy, infrequent updates, and expensive manual production have limited its use at the national level. At present, there is no remote-sensing-based high-resolution land cover and land use service designed specifically for Latvia that accounts for the country's natural and land use specifics and end-user interests. The European Space Agency launched the Sentinel-2 satellite in 2015 to provide continuity of free high-resolution multispectral satellite data, presenting an opportunity to develop an adapted land cover and land use algorithm that accounts for national end-user needs. In this study, a land cover mapping scheme reflecting national end-user needs was developed and tested in two pilot territories (Cesis and Burtnieki). Hyperspectral airborne data covering the spectral range 400-2500 nm were acquired in summer 2015 using the Airborne Surveillance and Environmental Monitoring System (ARSENAL). The gathered data were tested for land cover classification into seven general classes (urban/artificial, bare, forest, shrubland, agricultural/grassland, wetlands, water) and sub-classes specific to Latvia, as well as for simulation of Sentinel-2 satellite data. The hyperspectral data sets consist of 122 spectral bands in the visible to near-infrared range (356-950 nm) and 100 bands in the short-wave infrared (950-2500 nm). Land cover classification was tested separately for each sensor's data and for fused cross-sensor data. 
The best overall classification accuracy, 84.2%, with satisfactory accuracy (more than 80%) for 9 of 13 classes, was obtained using a Support Vector Machine (SVM) classifier with 109-band hyperspectral data. Grassland and agricultural land showed the lowest classification accuracy in the pixel-based approach, but the result improved significantly when agricultural polygons registered in Rural Support Service data were treated as objects. The test of simulated Sentinel-2 bands for land cover mapping using the SVM classifier showed 82.8% overall accuracy and satisfactory separation of 7 classes. SVM provided the highest overall accuracy, 84.2%, compared to 75.9% for the k-Nearest Neighbor and 79.2% for the Linear Discriminant Analysis classifiers.
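The classifier comparison described above (SVM vs. k-NN vs. LDA on multiband pixel data) can be sketched with scikit-learn. This is a minimal illustration on simulated data, not the study's ARSENAL imagery: the band count, class count, and sample sizes are assumptions chosen only to mimic a multiband, multi-class setting.

```python
# Hedged sketch: comparing overall classification accuracy of SVM, k-NN,
# and LDA on simulated multiband "pixel" data (not the study's dataset).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

# Simulate 2000 pixels with 20 "spectral bands" and 7 land cover classes
X, y = make_classification(n_samples=2000, n_features=20, n_informative=12,
                           n_classes=7, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: overall accuracy = {acc:.3f}")
```

On real hyperspectral data, band selection and per-class accuracies (as reported in the abstract) would matter at least as much as the overall score printed here.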
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D
2015-06-15
Purpose: AAPM radiation therapy committee task group No. 66 (TG-66) published a report which described a general approach to CT simulator QA. The report outlines the testing procedures and specifications for the evaluation of patient dose, radiation safety, electromechanical components, and image quality for a CT simulator. The purpose of this study is to thoroughly evaluate the performance of a second-generation Toshiba Aquilion Large Bore CT simulator with a 90 cm bore size (Toshiba, Nasu, JP) based on the TG-66 criteria. The testing procedures and results from this study provide baselines for a routine QA program. Methods: Different measurements and analyses were performed, including CTDIvol measurements, alignment and orientation of gantry lasers, orientation of the tabletop with respect to the imaging plane, table movement and indexing accuracy, scanogram location accuracy, high-contrast spatial resolution, low-contrast resolution, field uniformity, CT number accuracy, and mA linearity and reproducibility, using a number of different phantoms and measuring devices, such as a CTDI phantom, an ACR image quality phantom, a TG-66 laser QA phantom, a pencil ion chamber (Fluke Victoreen), and an electrometer (RTI Solidose 400). Results: The CTDI measurements were within 20% of the console-displayed values. The alignment and orientation of both the gantry lasers and the tabletop, as well as the table movement and indexing and scanogram location accuracy, were within 2 mm as specified in TG-66. The spatial resolution, low-contrast resolution, field uniformity, and CT number accuracy were all within the ACR's recommended limits. The mA linearity and reproducibility were both well below the TG-66 threshold. Conclusion: The 90 cm bore size second-generation Toshiba Aquilion Large Bore CT simulator, which comes with a 70 cm true FOV, can consistently meet various clinical needs. 
The results demonstrated that this simulator complies with the TG-66 protocol in all aspects, including the electromechanical, radiation safety, and image quality components. Employee of Toshiba America Medical Systems.
Cutting force measurement of electrical jigsaw by strain gauges
NASA Astrophysics Data System (ADS)
Kazup, L.; Varadine Szarka, A.
2016-11-01
This paper describes a measuring method based on strain gauges for accurate determination of an electric jigsaw's cutting force. The goal of the measurement is to provide an overall picture of the forces generated in the jigsaw's gearbox during a cutting period, as these forces primarily determine the lifetime of the tool. This analysis is part of a research and development project aiming to develop a special linear magnetic brake for automatic lifetime testing of electric jigsaws and similar handheld tools. Accurate determination of the cutting force makes it possible to define realistic test cycles for the automatic lifetime test. The accuracy and precision afforded by a well-characterized cutting-force profile, together with the possibility of automation, add a new dimension to lifetime testing of handheld tools with alternating movement.
Gibelli, Daniele; Poppa, Pasquale; Cummaudo, Marco; Mattia, Mirko; Cappella, Annalisa; Mazzarelli, Debora; Zago, Matteo; Sforza, Chiarella; Cattaneo, Cristina
2017-11-01
Sexual dimorphism is a crucial characteristic of the skeleton. In recent years, volumetric and surface 3D acquisition systems have enabled anthropologists to assess surfaces and volumes, whose potential still needs to be verified. This article aimed at assessing the volume and linear parameters of the first metatarsal bone through 3D acquisition by laser scanning. Sixty-eight skeletons underwent 3D laser scanning; seven linear measurements and the volume of each bone were assessed. A volume cutoff value of 13,370 mm³ was found, with an accuracy of 80.8%. Linear measurements outperformed volume: metatarsal length and mediolateral width of the base showed higher cross-validated accuracies (82.1% and 79.1%, respectively, rising to 83.6% when both were included). Further studies are needed to verify the real advantage for sex assessment provided by volume measurements. © 2017 American Academy of Forensic Sciences.
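The cutoff-based classification described above amounts to thresholding a measured volume and scoring agreement with known sex. A minimal sketch, using simulated volumes (the group means, spreads, and sample split below are assumptions, not the study's measurements; only the 13,370 mm³ cutoff comes from the abstract):

```python
import numpy as np

# Hypothetical illustration of sex estimation by a volume cutoff.
# Simulated first-metatarsal volumes in mm^3 (assumed distributions).
rng = np.random.default_rng(0)
male = rng.normal(15500, 1800, 40)
female = rng.normal(12000, 1500, 28)
volumes = np.concatenate([male, female])
is_male = np.concatenate([np.ones(40, bool), np.zeros(28, bool)])

cutoff = 13370.0  # mm^3, the cutoff reported in the abstract
pred_male = volumes > cutoff
accuracy = np.mean(pred_male == is_male)
print(f"accuracy at cutoff {cutoff:.0f} mm^3: {accuracy:.1%}")
```

In practice the cutoff itself would be chosen from the data (e.g., by discriminant analysis) and the accuracy cross-validated, as the study reports.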
NASA Astrophysics Data System (ADS)
Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin
1997-06-01
A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.
Kumar, Namala Durga Atchuta; Babu, K. Sudhakar; Gosada, Ullas; Sharma, Nitish
2012-01-01
Introduction: A selective, specific, and sensitive ultra high-pressure liquid chromatography (UPLC) method was developed for the determination of candesartan cilexetil impurities, as well as its degradants, in tablet formulation. Materials and Methods: The chromatographic separation was performed on a Waters Acquity UPLC system and a BEH Shield RP18 column using gradient elution with mobile phases A and B. A 0.01 M phosphate buffer adjusted to pH 3.0 with orthophosphoric acid was used as mobile phase A, and 95% acetonitrile with 5% Milli-Q water was used as mobile phase B. Ultraviolet (UV) detection was performed at 254 nm and 210 nm: CDS-6, CDS-5, CDS-7, Ethyl Candesartan, Desethyl CCX, N-Ethyl, CCX-1, 1 N Ethyl Oxo CCX, 2 N Ethyl Oxo CCX, 2 N Ethyl, and any unknown impurity were monitored at 254 nm, and two process-related impurities, trityl alcohol and the MTE impurity, were estimated at 210 nm. Candesartan cilexetil and its impurities were chromatographed with a total run time of 20 min. Results: Calibration showed that the response of each impurity was a linear function of concentration over the range from the limit of quantification to 2 μg/mL (r2 ≥ 0.999), and the method was validated over this range for precision, intermediate precision, accuracy, linearity, and specificity. In the precision study, the percentage relative standard deviation of each impurity was <15% (n=6). Conclusion: The method was found to be precise, accurate, linear, and specific. The proposed method was successfully employed for the estimation of candesartan cilexetil impurities in pharmaceutical preparations. PMID:23781475
Quality control methods for linear accelerator radiation and mechanical axes alignment.
Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A
2018-06-01
The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. 
For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis and the mechanical collimator rotation axis from the impact of field size asymmetry. The test suite can be performed in a reasonable time (30-35 min) due to simple phantom setup, prescription-based beam delivery, and automated image analysis. As well, it provides a clear description of the relationship between axes. After testing the sensitivity of the test suite to beam steering and mechanical errors, the results of the test suite were used to reduce the misalignment errors of the linac to less than 0.7-mm radius for all axes. The proposed test suite offers sub-millimeter assessment of the coincidence of the radiation and mechanical isocenters and the test automation reduces complexity with improved efficiency. The test suite results can be used to optimize the linear accelerator's radiation to mechanical isocenter alignment by beam steering and mechanical adjustment of gantry and couch. © 2018 American Association of Physicists in Medicine.
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity based on different temporal treatments: (I) a linear time trend; (II) a quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction: the mixture accommodates a global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the addition of information borrowed from neighboring years; this addition of parameters yielded a significant advantage in posterior deviance, which in turn benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture with traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. 
For cross-validation comparison of predictive accuracy, the linear time trend model was judged best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimates, normal predictions, and model replications. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These results indicate the mediocre performance of the linear trend when random effects are excluded from evaluation, which may be due to the flexible mixture space-time interaction efficiently absorbing the residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture models generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit carry over to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random-effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application, but the outlined approach can be used to assess the performance of any computational model.
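The regression idea above can be sketched in a few lines: regress measured force on predicted force, so that the fitted slope and intercept expose proportional and constant systematic errors while the residual scatter estimates the random error. The data below are simulated with an assumed bias structure; this is an illustration of the approach, not the paper's mill data.

```python
import numpy as np

# Hedged sketch: separating systematic from random prediction error by
# regressing measured rolling force on model-predicted force.
rng = np.random.default_rng(1)
predicted = rng.uniform(800, 2000, 100)                     # model output, kN
# Simulated "measurements": 5% proportional bias, -30 kN offset, random noise
measured = 1.05 * predicted - 30 + rng.normal(0, 25, 100)

slope, intercept = np.polyfit(predicted, measured, 1)
residuals = measured - (slope * predicted + intercept)
print(f"slope = {slope:.3f}        (1.0 = no proportional bias)")
print(f"intercept = {intercept:.1f} kN (0.0 = no constant bias)")
print(f"random error s.d. = {residuals.std(ddof=2):.1f} kN")
```

A slope near 1 and intercept near 0 would indicate an adequate model; the residual standard deviation then bounds the irreducible (random plus measurement) error.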
Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Takahashi, Kunio; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Kaneko, Shuji; Nakamura, Akira; Itasaka, Satoshi; Matsuo, Yukinori; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro
2016-12-01
The purposes of this study were two-fold: first, to develop a four-axis moving phantom for patient-specific quality assurance (QA) in surrogate signal-based dynamic tumor-tracking intensity-modulated radiotherapy (DTT-IMRT), and second, to evaluate the accuracy of the moving phantom and perform patient-specific dosimetric QA of the surrogate signal-based DTT-IMRT. The four-axis moving phantom comprised three orthogonal linear actuators for target motion and a fourth one for surrogate motion. The positional accuracy was verified using four laser displacement gauges under static conditions (±40 mm displacements along each axis) and moving conditions [eight regular sinusoidal and fourth-power-of-sinusoidal patterns with peak-to-peak motion ranges (H) of 10-80 mm and a breathing period (T) of 4 s, and three irregular respiratory patterns with H of 1.4-2.5 mm in the left-right, 7.7-11.6 mm in the superior-inferior, and 3.1-4.2 mm in the anterior-posterior directions for the target motion, and 4.8-14.5 mm in the anterior-posterior direction for the surrogate motion, and T of 3.9-4.9 s]. Furthermore, perpendicularity, defined as the vector angle between any two axes, was measured using an optical measurement system. The reproducibility of the uncertainties in DTT-IMRT was then evaluated. Respiratory motions from 20 patients acquired in advance were reproduced and compared three-dimensionally with the originals. Furthermore, patient-specific dosimetric QAs of DTT-IMRT were performed for ten pancreatic cancer patients. The doses delivered to Gafchromic films under tracking and moving conditions were compared with those delivered under static conditions without dose normalization. Positional errors of the moving phantom under static and moving conditions were within 0.05 mm. The perpendicularity of the moving phantom was within 0.2° of 90°. 
The differences in prediction errors between the original and reproduced respiratory motions were -0.1 ± 0.1 mm for the lateral direction, -0.1 ± 0.2 mm for the superior-inferior direction, and -0.1 ± 0.1 mm for the anterior-posterior direction. The dosimetric accuracy showed significant improvements, of 92.9% ± 4.0% with tracking versus 69.8% ± 7.4% without tracking, in the passing rates of γ with the criterion of 3%/1 mm (p < 0.001). Although the dosimetric accuracy of IMRT without tracking showed a significant negative correlation with the 3D motion range of the target (r = - 0.59, p < 0.05), there was no significant correlation for DTT-IMRT (r = 0.03, p = 0.464). The developed four-axis moving phantom had sufficient accuracy to reproduce patient respiratory motions, allowing patient-specific QA of the surrogate signal-based DTT-IMRT under realistic conditions. Although IMRT without tracking decreased the dosimetric accuracy as the target motion increased, the DTT-IMRT achieved high dosimetric accuracy.
Matsunami, Risë K; Angelides, Kimon; Engler, David A
2015-05-18
There is currently considerable discussion about the accuracy of blood glucose concentrations determined by personal blood glucose monitoring systems (BGMS). To date, the FDA has allowed new BGMS to demonstrate accuracy in reference to other glucose measurement systems that use the same or similar enzymatic-based methods to determine glucose concentration. These types of reference measurement procedures are only comparative in nature and are subject to the same potential sources of error in measurement and system perturbations as the device under evaluation. It would be ideal to have a completely orthogonal primary method that could serve as a true standard reference measurement procedure for establishing the accuracy of new BGMS. An isotope-dilution liquid chromatography/mass spectrometry (ID-UPLC-MRM) assay was developed using (13)C6-glucose as a stable isotope analogue to specifically measure glucose concentration in human plasma, and validated for use against NIST standard reference materials, and against fresh isolates of whole blood and plasma into which exogenous glucose had been spiked. Assay performance was quantified to NIST-traceable dry weight measures for both glucose and (13)C6-glucose. The newly developed assay method was shown to be rapid, highly specific, sensitive, accurate, and precise for measuring plasma glucose levels. The assay displayed sufficient dynamic range and linearity to measure across the range of both normal and diabetic blood glucose levels. Assay performance was measured to within the same uncertainty levels (<1%) as the NIST definitive method for glucose measurement in human serum. The newly developed ID UPLC-MRM assay can serve as a validated reference measurement procedure to which new BGMS can be assessed for glucose measurement performance. © 2015 Diabetes Technology Society.
Diagnostic Accuracy of a Self-Report Measure of Patellar Tendinopathy in Youth Basketball.
Owoeye, Oluwatoyosi B A; Wiley, J Preston; Walker, Richard E A; Palacios-Derflingher, Luz; Emery, Carolyn A
2018-04-27
Study Design Prospective diagnostic accuracy validation study. Background Engaging clinicians for the diagnosis of patellar tendinopathy in large surveillance studies is often impracticable. A self-report measure, the Oslo Sports Trauma Research Centre patellar tendinopathy (OSTRC-P) Questionnaire, an adaptation of the OSTRC Questionnaire, may provide a viable alternative. Objectives To evaluate the diagnostic accuracy of the OSTRC-P Questionnaire in detecting patellar tendinopathy in youth basketball players compared to clinical evaluation. Methods Following the Standards for Reporting of Diagnostic Accuracy Studies guidelines, 208 youth basketball players (aged 13-18 years) were recruited. Participants completed the OSTRC-P Questionnaire (index test) prior to a clinical evaluation (reference standard) by a physiotherapist blinded to the OSTRC-P Questionnaire results. Sensitivity, specificity, predictive values (PVs), likelihood ratios (LRs), and posttest probabilities were calculated. Linear regression was used to examine the association between the OSTRC-P Questionnaire severity score and the patellar tendinopathy severity rating during a single-leg decline squat (SLDS). Results The final analysis included 169 players. The OSTRC-P Questionnaire had a sensitivity of 79% (95% CI: 65%, 90%), specificity of 98% (95% CI: 94%, 100%), positive PV of 95%, negative PV of 92%, positive LR of 48, and negative LR of 0.21. The posttest probabilities were 95% and 8% given positive and negative results, respectively. A positive association was found between the OSTRC-P Questionnaire score and the SLDS rating (β = .08; 95% CI: .03, .12; p = .001). Conclusions The OSTRC-P Questionnaire is an acceptable alternative to clinical evaluation for self-reporting patellar tendinopathy and grading its severity in settings involving youth basketball players. Level of Evidence Diagnosis, level 1b. J Orthop Sports Phys Ther, Epub 27 Apr 2018. doi:10.2519/jospt.2018.8088.
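The diagnostic-accuracy metrics reported above all follow from a 2x2 table of index-test results against the reference standard. A minimal sketch of that arithmetic, using illustrative counts chosen to be consistent with the reported totals and metrics (the exact cell counts are not stated in the abstract and are an assumption):

```python
# Sketch of standard diagnostic-accuracy arithmetic from a 2x2 table.
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, NPV, LR+, LR- from cell counts."""
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Illustrative counts: 169 players total, consistent with the abstract's figures
sens, spec, ppv, npv, lrp, lrn = diagnostic_metrics(tp=38, fp=2, fn=10, tn=119)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} "
      f"PPV={ppv:.0%} NPV={npv:.0%} LR+={lrp:.0f} LR-={lrn:.2f}")
```

Note that the predictive values (and posttest probabilities) depend on the prevalence in this sample, whereas sensitivity, specificity, and the likelihood ratios do not.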
Poster — Thur Eve — 19: Performance assessment of a 160-leaf beam collimation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, E. S. M.; La Russa, D. J.; Vandervoort, E.
2014-08-15
In this study, the performance of the new beam collimation system with 160 leaves, each with a 5 mm leaf width projected at isocenter, is evaluated in terms of positional accuracy and plan/delivery quality. Positional accuracy was evaluated using a set of static and dynamic MLC/jaw delivery patterns at different gantry angles, dose rates, and MLC/jaw speeds. The impact on IMRT plan quality was assessed by comparing against a previous-generation collimation system using the same optimization parameters, while delivery quality was quantified using a combination of patient-specific QA measurements with ion chambers, film, and a bi-planar diode array. Positional accuracy for four separate units was comparable. The field size accuracy, junction width, and total displacement over 16 cm leaf travel are 0.3 ± 0.2 mm, 0.4 ± 0.3 mm, and 0.5 ± 0.2 mm, respectively. The typical leaf minor offset is 0.05 ± 0.04 mm, and MLC hysteresis effects are 0.2 ± 0.1 mm over 16 cm travel. The dynamic output is linear with MU and MLC/jaw speed, and is within 0.7 ± 0.3% of the planning system value. Plan quality is significantly improved both in terms of target coverage and OAR sparing due, in part, to the larger allowable MLC and jaw speeds. γ-index pass rates for the patient-specific QA measurements exceeded 97% using criteria of 2%/2 mm. In conclusion, the performance of the Agility system is consistent among four separate installations and is superior to that of previous generations of collimation systems.
Settling characteristics of nursery pig manure and nutrient estimation by the hydrometer method.
Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian
2003-05-01
The hydrometer method for measuring manure specific gravity and subsequently relating it to manure nutrient content was examined in this study. It was found that the estimation accuracy of this method might be improved if only manure from a single growth stage of pigs was used (e.g., the nursery pig manure used here). The total solids (TS) content of the test manure was well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.9944 and 0.9873, respectively. Good linear correlations were also observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.9836 and 0.9843, respectively). These correlations were much better than those reported by past researchers, in which lumped data for pigs at different growing stages were used. It may therefore be inferred that developing separate linear equations for pigs at different ages should improve the accuracy of manure nutrient estimation using a hydrometer. The error of using the hydrometer method to estimate manure TN and TP was found to increase, from ±10% to ±50%, as the TN (from 700 ppm to 100 ppm) and TP (from 130 ppm to 30 ppm) concentrations in the manure decreased. The estimation errors for TN and TP may exceed 50% if the total solids content is below 0.5%. In addition, rapid settling of solids has long been considered characteristic of swine manure; however, in this study the solids settling property of nursery pig manure appeared to be quite poor, with no conspicuous settling occurring after the manure was left undisturbed for 5 hours. This information has not been reported elsewhere in the literature and may need further research to verify.
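The calibration idea behind the hydrometer method is a simple linear fit: regress nutrient concentration on specific gravity for manure from one growth stage, then invert the fit to estimate nutrients from a new hydrometer reading. A hedged sketch on simulated data (the linear law, its coefficients, and the noise level are assumptions chosen only to mimic the reported TN range of roughly 100-700 ppm):

```python
import numpy as np

# Hypothetical hydrometer calibration: TN (ppm) vs. specific gravity (SG).
rng = np.random.default_rng(2)
sg = rng.uniform(1.002, 1.030, 30)                      # simulated SG readings
tn = 21000 * (sg - 1.0) + 80 + rng.normal(0, 15, 30)    # assumed linear law

slope, intercept = np.polyfit(sg, tn, 1)                # fit calibration line
r = np.corrcoef(sg, tn)[0, 1]                           # correlation coefficient
tn_est = slope * 1.015 + intercept                      # estimate TN for a new reading
print(f"r = {r:.4f}; TN estimate at SG 1.015 ≈ {tn_est:.0f} ppm")
```

As the abstract notes, the relative error of such an estimate grows at low concentrations, where the SG signal approaches the hydrometer's resolution.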
Design and performance study of an orthopaedic surgery robotized module for automatic bone drilling.
Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Kotev, Vladimir; Delchev, Kamen; Zagurski, Kazimir; Vitkov, Vladimir
2013-12-01
Many orthopaedic operations involve drilling and tapping before the insertion of screws into a bone. This drilling is usually performed manually, thus introducing many problems. These include attaining a specific drilling accuracy, preventing blood vessels from breaking, and minimizing drill oscillations that would widen the hole. Bone overheating is the most important problem. To avoid such problems and reduce the subjective factor, automated drilling is recommended. Because numerous parameters influence the drilling process, this study examined some experimental methods. These concerned the experimental identification of technical drilling parameters, including the bone resistance force and temperature in the drilling process. During the drilling process, the following parameters were monitored: time, linear velocity, angular velocity, resistance force, penetration depth, and temperature. Specific drilling effects were revealed during the experiments. The accuracy was improved at the starting point of the drilling, and the error for the entire process was less than 0.2 mm. The temperature deviations were kept within tolerable limits. The results of various experiments with different drilling velocities, drill bit diameters, and penetration depths are presented in tables, as well as the curves of the resistance force and temperature with respect to time. Real-time digital indications of the progress of the drilling process are shown. Automatic bone drilling could entirely solve the problems that usually arise during manual drilling. An experimental setup was designed to identify bone drilling parameters such as the resistance force arising from variable bone density, appropriate mechanical drilling torque, linear speed of the drill, and electromechanical characteristics of the motors, drives, and corresponding controllers. Automatic drilling guarantees greater safety for the patient. 
Moreover, the robot presented is user-friendly because it is simple to set robot tasks, and process data are collected in real time. Copyright © 2013 John Wiley & Sons, Ltd.
Stretch, Jonathan R; Somorjai, Ray; Bourne, Roger; Hsiao, Edward; Scolyer, Richard A; Dolenko, Brion; Thompson, John F; Mountford, Carolyn E; Lean, Cynthia L
2005-11-01
Nonsurgical assessment of sentinel nodes (SNs) would offer advantages over surgical SN excision by reducing morbidity and costs. Proton magnetic resonance spectroscopy (MRS) of fine-needle aspirate biopsy (FNAB) specimens identifies melanoma lymph node metastases. This study was undertaken to determine the accuracy of the MRS method and thereby establish a basis for the future development of a nonsurgical technique for assessing SNs. FNAB samples were obtained from 118 biopsy specimens from 77 patients during SN biopsy and regional lymphadenectomy. The specimens were histologically evaluated and correlated with MRS data. Histopathologic analysis established that 56 specimens contained metastatic melanoma and that 62 specimens were benign. A linear discriminant analysis-based classifier was developed for benign tissues and metastases. The presence of metastatic melanoma in lymph nodes was predicted with a sensitivity of 92.9%, a specificity of 90.3%, and an accuracy of 91.5% in a primary data set. In a second data set that used FNAB samples separate from the original tissue samples, melanoma metastases were predicted with a sensitivity of 87.5%, a specificity of 90.3%, and an accuracy of 89.1%, thus supporting the reproducibility of the method. Proton MRS of FNAB samples may provide a robust and accurate diagnosis of metastatic disease in the regional lymph nodes of melanoma patients. These data indicate the potential for SN staging of melanoma without surgical biopsy and histopathological evaluation.
Osteoporosis prediction from the mandible using cone-beam computed tomography
Al Haffar, Iyad; Khattab, Razan
2014-01-01
Purpose This study aimed to evaluate the use of dental cone-beam computed tomography (CBCT) in the diagnosis of osteoporosis among menopausal and postmenopausal women by using only a CBCT viewer program. Materials and Methods Thirty-eight menopausal and postmenopausal women who underwent dual-energy X-ray absorptiometry (DXA) examination of the hip and lumbar vertebrae were scanned using CBCT (field of view: 13 cm×15 cm; voxel size: 0.25 mm). Slices from the body of the mandible as well as the ramus were selected, and CBCT-derived variables, such as radiographic density (RD), were calculated as gray values. Pearson's correlation, one-way analysis of variance (ANOVA), and accuracy (sensitivity and specificity) evaluation based on linear and logistic regression were performed to choose the variable that best correlated with the lumbar and femoral neck T-scores. Results The RD of the whole bone area of the mandible was the variable that best correlated with and predicted both the femoral neck and the lumbar vertebrae T-scores; the Pearson's correlation coefficients were 0.5 and 0.6, respectively (p=0.037 and 0.009). The sensitivity, specificity, and accuracy based on the logistic regression were 50%, 88.9%, and 78.4%, respectively, for the femoral neck, and 46.2%, 91.3%, and 75%, respectively, for the lumbar vertebrae. Conclusion Lumbar vertebrae and femoral neck osteoporosis can be predicted with high accuracy from the RD value of the body of the mandible by using a CBCT viewer program. PMID:25473633
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708
We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated in all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics' force fields and nontransferable molecule-specific atomic polarizabilities.
Reverse phase HPLC method for detection and quantification of lupin seed γ-conglutin.
Mane, Sharmilee; Bringans, Scott; Johnson, Stuart; Pareek, Vishnu; Utikar, Ranjeet
2017-09-15
A simple, selective and accurate reverse phase HPLC method was developed for detection and quantitation of γ-conglutin from lupin seed extract. A linear gradient of water and acetonitrile containing trifluoroacetic acid (TFA) on a reverse phase column (Agilent Zorbax 300SB C-18), with a flow rate of 0.8 ml/min, produced a sharp and symmetric peak of γ-conglutin with a retention time of 29.16 min. The identity of γ-conglutin in the peak was confirmed by mass spectrometry (MS/MS identification) and sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) analysis. The data obtained from MS/MS analysis were matched against the specified database to obtain the exact match for the protein of interest. The proposed method was validated in terms of specificity, linearity, sensitivity, precision, recovery and accuracy. The analytical parameters revealed that the validated method was capable of selectively performing a good chromatographic separation of γ-conglutin from the lupin seed extract with no interference from the matrix. The detection and quantitation limits of γ-conglutin were found to be 2.68 μg/ml and 8.12 μg/ml, respectively. The accuracy (precision and recovery) analysis of the method was conducted under repeatable conditions on different days. Intra-day and inter-day precision values less than 0.5% and recovery greater than 97% indicated high precision and accuracy of the method for analysis of γ-conglutin. The method validation findings were reproducible, and the method can be successfully applied for routine analysis of γ-conglutin from lupin seed extract. Copyright © 2017 Elsevier B.V. All rights reserved.
Vavougios, George D; Doskas, Triantafyllos; Konstantopoulos, Kostas
2018-05-01
Dysarthrophonia is a predominant symptom in many neurological diseases, affecting the quality of life of the patients. In this study, we produced a discriminant function equation that can differentiate MS patients from healthy controls, using electroglottographic variables not analyzed in a previous study. We applied stepwise linear discriminant function analysis to produce a function and score derived from electroglottographic variables extracted from a previous study. The derived discriminant function's statistical significance was determined via the Wilks' λ test (and the associated p value). Finally, a 2 × 2 confusion matrix was used to determine the function's predictive accuracy, whereas the cross-validated predictive accuracy was estimated via the "leave-one-out" classification process. Discriminant function analysis (DFA) was used to create a linear function of continuous predictors. DFA produced the following model (Wilks' λ = 0.043, χ2 = 388.588, p < 0.0001, Tables 3 and 4): D (MS vs controls) = 0.728*DQx1 mean monologue + 0.325*CQx monologue + 0.298*DFx1 90% range monologue + 0.443*DQx1 90% range reading - 1.490*DQx1 90% range monologue. The derived discriminant score (S1) was subsequently used to form the coordinates of a ROC curve. A cutoff score of -0.788 for S1 corresponded to a perfect classification (100% sensitivity and 100% specificity, p = 1.67e-22). Consistent with previous findings, electroglottographic evaluation represents an easy-to-implement and potentially important assessment in MS patients, achieving adequate classification accuracy. Further evaluation is needed to determine its use as a biomarker.
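As a minimal sketch (not the authors' code), the reported discriminant function and ROC cutoff can be applied as follows. The coefficients and the -0.788 cutoff come from the abstract; the argument names abbreviate the electroglottographic measures, the input values are hypothetical, and the assumption that scores above the cutoff indicate MS is the sketch's own, since the abstract does not state the direction.

```python
def discriminant_score(dqx1_mean_mono, cqx_mono, dfx1_range_mono,
                       dqx1_range_read, dqx1_range_mono):
    """Linear discriminant score D (MS vs. controls) from the abstract."""
    return (0.728 * dqx1_mean_mono
            + 0.325 * cqx_mono
            + 0.298 * dfx1_range_mono
            + 0.443 * dqx1_range_read
            - 1.490 * dqx1_range_mono)

def classify(score, cutoff=-0.788):
    """Label a score against the ROC cutoff. NOTE: the abstract does not
    state which side of the cutoff is MS; 'above' is assumed here."""
    return "MS" if score > cutoff else "control"
```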
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju
2015-01-01
Background: Impression materials often become contaminated with infectious agents during the impression procedure. Hence, disinfection of impression materials is advised to protect the dental team, but disinfection can alter the dimensional accuracy of impression materials. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfection procedures: autoclave, chemical, and microwave methods. Materials and Methods: The impression materials used for the study were Dentsply Aquasil (addition silicone polyvinylsiloxane syringe and putty), Zetaplus (condensation silicone putty and light body), and Impregum Penta Soft (polyether). All impressions were made according to the manufacturer's instructions. Dimensional changes were measured before and after the different disinfection procedures. Results: Dentsply Aquasil showed the smallest dimensional change (−0.0046%) and Impregum Penta Soft the largest (−0.026%). All the tested elastomeric impression materials showed some degree of dimensional change. Conclusion: The present study showed that all the disinfection procedures produce minor dimensional changes in the impression materials; however, these were within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization. PMID:26435611
Nishino, K; Hayashi, T; Suzuki, Y; Koga, Y; Omori, G
1999-01-01
The function and integrity of the knee joint following total knee arthroplasty (TKA) is determined at first by the design and implantation of the prosthesis, and later by the tension of soft tissues surrounding it. Accurate post-TKA motion data obtained intraoperatively could be used not only to optimize implantation techniques from a kinematic standpoint, but also to improve prosthetic design. We therefore developed a system specifically geared to photostereometric measurement of 6 d.o.f. knee motion. A total of eight LEDs are mounted on the prosthetic components in two sets of four by means of connecting measuring-bows. The positions of the LEDs are detected in three-dimensions by two sets of three linear CCD cameras, located bilaterally relative to the knee. The position and orientation of the femoral component relative to the tibial one are estimated from the positions of all LEDs in the sense of least-squares. Based upon results of various accuracy validation experiments performed after precise camera calibration, static overall accuracy and spatial resolution were considered to lie within 0.52 and 0.11 mm, respectively, at any point on the femoral articular surface.
Prediction of Drug-Plasma Protein Binding Using Artificial Intelligence Based Algorithms.
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
2018-01-01
Plasma protein binding (PPB) has vital importance in the characterization of drug distribution in the systemic circulation. Unfavorable PPB can have a negative effect on the clinical development of promising drug candidates. The drug distribution properties should be considered at the initial phases of drug design and development. Therefore, PPB prediction models are receiving increased attention. In the current study, we present a systematic approach using Support vector machine, Artificial neural network, k-nearest neighbor, Probabilistic neural network, Partial least squares and Linear discriminant analysis to relate various in vitro and in silico molecular descriptors to a diverse dataset of 736 drugs/drug-like compounds. The overall accuracy of the Support vector machine with Radial basis function kernel came out to be comparatively better than the rest of the applied algorithms. The training set accuracy, validation set accuracy, precision, sensitivity, specificity and F1 score for the Support vector machine were found to be 89.73%, 89.97%, 92.56%, 87.26%, 91.97% and 0.898, respectively. This model can potentially be useful in screening relevant drug candidates at the preliminary stages of drug design and development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
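The evaluation metrics quoted above (accuracy, precision, sensitivity, specificity, F1) all follow directly from confusion-matrix counts; a short sketch with illustrative counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f1": f1}

# Illustrative counts, not the study's data:
m = classification_metrics(tp=80, fp=10, tn=90, fn=20)
```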
Hu, Jianfeng
2017-01-01
Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared across ten state-of-the-art classifiers. Results indicate that the optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy reaches 96.6%, which is sufficient for real applications. The best combination of channel, feature, and classifier is subject-specific. In this work, the accuracy obtained with FE as the feature is far greater than that of the other features. The accuracy of the RF classifier is the best, while that of SVM with a linear kernel is the worst. Channel selection has a large impact on accuracy, and the performance of different channels varies considerably. PMID:28255330
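Of the entropy features compared above, sample entropy (SE) is representative; a plain-Python sketch of SampEn(m, r) under the common choice r = 0.2·SD is given below. This is an illustration of the measure, not the study's implementation (which is not described at code level).

```python
import math

def sample_entropy(signal, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D signal (simple O(n^2) sketch).
    r defaults to 0.2 * standard deviation, a common choice."""
    n = len(signal)
    if r is None:
        mean = sum(signal) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in signal) / n)

    def count_matches(length):
        # Count template pairs whose Chebyshev distance is within r.
        templates = [signal[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular signal (e.g. a strict alternation) yields a value near zero, while irregular signals score higher, which is what makes the measure useful as a fatigue feature.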
Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas
2018-05-01
This note reports the development of a new method for linearizing the Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter to determine Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra in a wider frequency range of a drive signal, as generally harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.
Determination of colonoscopy indication from administrative claims data.
Ko, Cynthia W; Dominitz, Jason A; Neradilek, Moni; Polissar, Nayak; Green, Pam; Kreuter, William; Baldwin, Laura-Mae
2014-04-01
Colonoscopy outcomes, such as polyp detection or complication rates, may differ by procedure indication. To develop methods to classify colonoscopy indications from administrative data, facilitating study of colonoscopy quality and outcomes. We linked 14,844 colonoscopy reports from the Clinical Outcomes Research Initiative, a national repository of endoscopic reports, to the corresponding Medicare Carrier and Outpatient File claims. Colonoscopy indication was determined from the procedure reports. We developed algorithms using classification and regression trees and linear discriminant analysis (LDA) to classify colonoscopy indication. Predictor variables included ICD-9-CM and CPT/HCPCS codes present on the colonoscopy claim or in the 12 months prior, patient demographics, and site of colonoscopy service. Algorithms were developed on a training set of 7515 procedures, then validated using a test set of 7329 procedures. Sensitivity was lowest for identifying average-risk screening colonoscopies, varying between 55% and 86% for the different algorithms, but specificity for this indication was consistently over 95%. Sensitivity for diagnostic colonoscopy varied between 77% and 89%, with specificity between 55% and 87%. Algorithms using classification and regression trees with 7 variables or LDA with 10 variables had similar overall accuracy, and generally lower accuracy than the algorithm using LDA with 30 variables. Algorithms using Medicare claims data have moderate sensitivity and specificity for colonoscopy indication, and will be useful for studying colonoscopy quality in this population. Further validation may be needed before use in alternative populations.
Testing the Linearity of the Cosmic Origins Spectrograph FUV Channel Thermal Correction
NASA Astrophysics Data System (ADS)
Fix, Mees B.; De Rosa, Gisella; Sahnow, David
2018-05-01
The Far Ultraviolet Cross Delay Line (FUV XDL) detector on the Cosmic Origins Spectrograph (COS) is subject to temperature-dependent distortions. The correction performed by the COS calibration pipeline (CalCOS) assumes that these changes are linear across the detector. In this report we evaluate the accuracy of the linear approximations using data obtained on orbit. Our results show that the thermal distortions are consistent with our current linear model.
Wang, Ludi; Yang, Wei; Wu, Siyang; Wang, Shuyao; Kang, Chen; Ma, Xiaoli; Li, Yingfei; Li, Chuan
2018-05-01
Isochamaejasmin, neochamaejasmin A and daphnoretin derived from Stellera chamaejasme L. are important because of their reported anticancer properties. In this study, a sensitive UPLC-MS/MS method for the determination of isochamaejasmin, neochamaejasmin A and daphnoretin in rat plasma was developed. The analytes and IS were separated on an Acquity UPLC HSS T3 column (100 × 2.1 mm, 1.8 μm) using gradient elution with a mobile phase of aqueous solution (methanol-water, 1:99, v/v, containing 1 mM formic acid) and organic solution (methanol-water, 99:1, v/v, containing 1 mM formic acid) at a flow rate of 0.3 mL/min. Multiple reaction monitoring mode with a negative electrospray ionization interface was used to detect the components. The method was validated in terms of specificity, linearity, accuracy, precision, stability, etc. Excellent linear behavior was observed over the studied concentration ranges, with correlation coefficient values >0.99. Intra- and inter-day precisions (RSD) were <6.7% and accuracy (RE) ranged from -7.0 to 12.0%. The validated method was successfully applied to investigate the pharmacokinetics of the three chemical ingredients after oral administration of S. chamaejasme L. extract to rats. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ding, Liang; Wang, Shui; Cai, Bingjie; Zhang, Mancheng; Qu, Changsheng
2018-02-01
In this study, portable X-ray fluorescence spectrometry (pXRF) was used to measure the heavy metal contents of As, Cu, Cr, Ni, Pb and Zn in the soils of heavy metal-contaminated sites. The precision, accuracy and system errors of pXRF were evaluated and compared with traditional laboratory methods to examine the suitability of in situ pXRF. The results show that the pXRF analysis achieved satisfactory accuracy and precision in measuring As, Cr, Cu, Ni, Pb, and Zn in soils, and meets the requirements of the relevant detection technology specifications. For the certified reference soil samples, the pXRF results for As, Cr, Cu, Ni, Pb, and Zn show good linear relationships and coefficients of determination with the values measured using the reference analysis methods; with the exception of Ni, all the measured values were within the 95% confidence level. In the soil samples, the coefficients of determination between the Cu, Zn, Pb, and Ni concentrations measured by pXRF and the values measured with laboratory analysis all reach 0.9, showing a good linear relationship; however, there were large deviations between the methods for Cr and As. This study provides reference data and scientific support for the rapid detection of heavy metals in soils using pXRF in site investigation, which can better guide the practical application of pXRF.
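The linear agreement between pXRF readings and laboratory reference values is summarized by the coefficient of determination; a sketch with illustrative paired measurements (not the study's data):

```python
def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Illustrative paired readings (mg/kg), not the study's data:
pxrf = [12.0, 25.0, 40.0, 55.0, 71.0]
lab = [11.5, 26.2, 39.1, 56.3, 70.0]
r2 = r_squared(pxrf, lab)  # close to 1 for near-linear agreement
```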
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.
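The segment-wise stitching idea described above reduces, in one dimension, to applying a separate linear map per segment between control points; a sketch with hypothetical control points (the paper's actual method works on 2D image segments defined by feature points):

```python
def piecewise_linear_map(x, src, dst):
    """Map x from source coordinates to target coordinates, linearly
    within each segment [src[i], src[i+1]] defined by control points."""
    for i in range(len(src) - 1):
        if src[i] <= x <= src[i + 1]:
            t = (x - src[i]) / (src[i + 1] - src[i])
            return dst[i] + t * (dst[i + 1] - dst[i])
    raise ValueError("x outside control-point range")

# Hypothetical control-point correspondences:
src = [0.0, 10.0, 20.0]
dst = [0.0, 12.0, 18.0]
```

Adding control points tightens the registration at the cost of more segments to evaluate, which is the accuracy/computation trade-off the paper exploits.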
Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long
2015-05-01
This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R2) of 0.63 at the 0.01 level against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could bring about better results in crown closure estimation, closer to the actual condition in moso bamboo forest.
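For the simplest two-endmember case, the contrast between unconstrained and fully constrained linear SMA can be sketched in closed form; the endmember spectra below are illustrative, and clipping to [0, 1] stands in for the non-negativity constraint on top of sum-to-one:

```python
def canopy_fraction(pixel, canopy, background, constrained=True):
    """Fraction of the 'canopy' endmember in a mixed pixel (least squares
    under sum-to-one; clipping emulates the non-negativity constraint)."""
    num = sum((p - b) * (c - b) for p, c, b in zip(pixel, canopy, background))
    den = sum((c - b) ** 2 for c, b in zip(canopy, background))
    f = num / den
    return min(1.0, max(0.0, f)) if constrained else f

# Illustrative 4-band reflectance spectra:
canopy = [0.05, 0.08, 0.45, 0.50]
background = [0.20, 0.25, 0.30, 0.35]
pixel = [0.125, 0.165, 0.375, 0.425]  # an exact 50/50 mixture
```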
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
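The gain-scheduling idea behind the piecewise linear Kalman filter can be sketched with scalar stand-ins: steady-state gains precomputed at a few operating points are linearly interpolated at run time. The operating points, gains, and state values below are illustrative, not engine data.

```python
def interpolate_gain(op_points, gains, op):
    """Linearly interpolate a precomputed steady-state Kalman gain
    schedule at the current operating point."""
    for i in range(len(op_points) - 1):
        lo, hi = op_points[i], op_points[i + 1]
        if lo <= op <= hi:
            t = (op - lo) / (hi - lo)
            return gains[i] + t * (gains[i + 1] - gains[i])
    raise ValueError("operating point outside schedule")

def kalman_step(x_est, measurement, gain):
    """One scalar measurement update: blend the model prediction with
    the sensed value using the scheduled gain."""
    return x_est + gain * (measurement - x_est)

ops = [0.0, 0.5, 1.0]     # normalized operating points (illustrative)
gains = [0.2, 0.4, 0.8]   # steady-state gains at those points (illustrative)
k = interpolate_gain(ops, gains, 0.75)
x_next = kalman_step(10.0, 12.0, k)
```

Precomputing the gains and interpolating them is what buys the faster-than-real-time execution: no Riccati equation is solved online.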
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
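Richardson extrapolation, used above to generate a higher-order reference solution, combines two approximations of known order so that the leading error term cancels; a sketch for a second-order central difference (the test function and step size are illustrative, not from the paper):

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h, p=2):
    """Combine step-h and step-h/2 approximations of order p to cancel
    the leading error term (valid for even error expansions)."""
    coarse = central_diff(f, x, h)
    fine = central_diff(f, x, h / 2)
    return fine + (fine - coarse) / (2 ** p - 1)

h = 0.1
plain = central_diff(math.sin, 0.0, h)   # approximates cos(0) = 1
extrap = richardson(math.sin, 0.0, h)    # markedly smaller error
```

Because the extrapolated value is accurate to a higher order than either input, its difference from a given numerical solution isolates that solution's truncation error, which is how the paper uses it.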
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Khamanga, Sandile M; Walker, Roderick B
2011-01-15
An accurate, sensitive and specific high performance liquid chromatography-electrochemical detection (HPLC-ECD) method that was developed and validated for captopril (CPT) is presented. Separation was achieved using a Phenomenex(®) Luna 5 μm C(18) column and a mobile phase composed of phosphate buffer (adjusted to pH 3.0) and acetonitrile in a ratio of 70:30 (v/v). Detection was accomplished using a full-scan multi-channel ESA Coulometric detector in the "oxidative-screen" mode with the upstream electrode (E(1)) set at +600 mV and the downstream (analytical) electrode (E(2)) set at +950 mV, while the potential of the guard cell was maintained at +1050 mV. The detector gain was set at 300. Experimental design using central composite design (CCD) was used to facilitate method development. Mobile phase pH, molarity and concentration of acetonitrile (ACN) were considered the critical factors to be studied to establish the retention times of CPT and cyclizine (CYC), which was used as the internal standard. Twenty experiments including centre points were undertaken and a quadratic model was derived for the retention time of CPT using the experimental data. The method was validated for linearity, accuracy, precision, and limits of quantitation and detection, as per the ICH guidelines. The system was found to produce sharp and well-resolved peaks for CPT and CYC with retention times of 3.08 and 7.56 min, respectively. Linear regression analysis for the calibration curve showed a good linear relationship with a regression coefficient of 0.978 in the concentration range of 2-70 μg/mL. The linear regression equation was y=0.0131x+0.0275. The limits of quantitation (LOQ) and detection (LOD) were found to be 2.27 and 0.6 μg/mL, respectively. The method was used to analyze CPT in tablets.
The wide range for linearity, accuracy, sensitivity, short retention time and composition of the mobile phase indicated that this method is better for the quantification of CPT than the pharmacopoeial methods. Copyright © 2010 Elsevier B.V. All rights reserved.
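The calibration workflow reported above (linear fit, then detection limits) can be sketched with the common ICH Q2-style estimates LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the slope and sigma the residual standard deviation. The concentration-response pairs below are invented for illustration; they mimic, but are not, the paper's data.

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs. detector response);
# the numbers are illustrative, not the paper's measurements.
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 70.0])
resp = np.array([0.055, 0.094, 0.160, 0.292, 0.556, 0.944])

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual SD, 2 fitted parameters

# ICH Q2(R1)-style estimates from the calibration line
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

By construction LOQ is about three times LOD, which matches the ordering (though not necessarily the exact values) reported in the abstract.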
Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe
2013-01-01
Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by DW-MR datasets themselves. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depend on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381
Collier, J W; Shah, R B; Bryant, A R; Habib, M J; Khan, M A; Faustino, P J
2011-02-20
A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (L-T(4)) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250 mm × 3.9 mm) using a 0.01 M phosphate buffer (pH 3.0)-methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 μL and the column temperature was maintained at 28°C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r(2)>0.99) over the analytical range of 0.08-0.8 μg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for L-T(4) over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. Published by Elsevier B.V.
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
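One concrete "geometry-inspired mean" is the Riemannian geometric mean of two symmetric positive-definite kernel (Gram) matrices, a standard alternative to convex linear combinations. A minimal numpy sketch follows, with toy matrices invented for illustration; this is not the authors' full fusion framework.

```python
import numpy as np

def spd_sqrt(M):
    """Symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Riemannian (geometric) mean of two SPD kernel matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2},
    one geometry-inspired alternative to a convex linear combination."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

# Toy kernels (Gram matrices of two feature views, made up for illustration)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
K = geometric_mean(A, B)
```

Unlike an average (A + B)/2, the geometric mean satisfies A # A = A and reduces to the entrywise square root of AB when the kernels commute, which is the sense in which it respects the curved geometry of the SPD cone.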
NASA Astrophysics Data System (ADS)
Kabiri, K.
2017-09-01
The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map of the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) for the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R2) and root mean square errors (RMSE) at the validation points were calculated for all models and for both satellite images. Compared with the linear transform method, the ratio transformation with a combination of all three bands yielded more accurate results (R2Mar = 0.795, R2Feb = 0.777, RMSEMar = 1.889 m, and RMSEFeb = 2.039 m). Although some of the integrated transform methods (specifically the one including all bands and band ratios) yielded slightly higher accuracy, the increments were not significant; hence the ratio transformation was selected as the optimum method.
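The band-ratio calibration compared above can be sketched in the spirit of Stumpf's ratio model, depth ~ m1 * ln(n*blue)/ln(n*green) - m0, fitted to known depths. All reflectance values, decay constants and the scaler n below are invented for illustration; they are not this study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration points: known depths and water reflectances in the
# blue (Band 2) and green (Band 3); values are illustrative only.
depth = rng.uniform(1.0, 15.0, 50)
blue = 0.12 * np.exp(-0.08 * depth) + 0.02    # blue attenuates more slowly
green = 0.10 * np.exp(-0.12 * depth) + 0.02   # green attenuates faster

# Ratio transform: depth ~ m1 * ln(n*blue)/ln(n*green) - m0
n = 1000.0                      # fixed scaler keeping both logs positive
x = np.log(n * blue) / np.log(n * green)
m1, b = np.polyfit(x, depth, 1)          # slope m1, intercept b = -m0
pred = m1 * x + b

rmse = np.sqrt(np.mean((pred - depth) ** 2))
r2 = 1.0 - np.sum((pred - depth) ** 2) / np.sum((depth - depth.mean()) ** 2)
```

Because green light attenuates faster than blue, the log ratio grows with depth, so a single linear fit recovers it; the R2/RMSE computed here are the same validation statistics quoted in the abstract.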
NASA Astrophysics Data System (ADS)
Ushenko, V. O.; Prysyazhnyuk, V. P.; Dubolazov, O. V.; Savich, O. V.; Novakovska, O. Y.; Olar, O. V.
2015-09-01
A model for the Mueller-matrix description of the mechanisms of optical anisotropy typical of polycrystalline films of bile - optical activity, birefringence, as well as linear and circular dichroism - is suggested. Within the statistical analysis of such distributions, objective criteria were determined for differentiating bile films taken from the deceased at different times. From the point of view of evidence-based medicine, the operational characteristics (sensitivity, specificity and accuracy) of the method of Mueller-matrix reconstruction of optical anisotropy parameters were found, and its efficiency in another task - diagnostics of diseases of internal organs of rats - was demonstrated.
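The operational characteristics mentioned above (sensitivity, specificity, accuracy) follow directly from a 2x2 confusion table; a minimal sketch, with counts invented for illustration rather than taken from the paper:

```python
def operational_characteristics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table,
    the quantities used to characterize a diagnostic method."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts (not from the paper)
se, sp, ac = operational_characteristics(tp=18, fp=3, tn=17, fn=2)
```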
A new linear least squares method for T1 estimation from SPGR signals with multiple TRs
NASA Astrophysics Data System (ADS)
Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo
2009-02-01
The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR in the SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using the first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the T1 estimates from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
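The fixed-TR linearization that the paper generalizes can be sketched as follows (the classic DESPOT1-style fit; the new multiple-TR method itself is not reproduced here). The simulation parameters are arbitrary illustrative values.

```python
import numpy as np

def spgr_signal(M0, T1, TR, alpha):
    """Ideal spoiled gradient-recalled echo (SPGR) signal equation."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

# Simulated acquisition: one TR, several flip angles (the fixed-TR setting;
# the paper's contribution generalizes this to multiple TRs).
M0_true, T1_true, TR = 1000.0, 0.9, 0.015            # a.u., s, s
alphas = np.deg2rad([2.0, 5.0, 10.0, 15.0, 20.0])
S = spgr_signal(M0_true, T1_true, TR, alphas)

# Linearization: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
# so a straight-line fit of S/sin(a) against S/tan(a) has slope E1.
y = S / np.sin(alphas)
x = S / np.tan(alphas)
E1_hat, _ = np.polyfit(x, y, 1)
T1_hat = -TR / np.log(E1_hat)
```

On noise-free signals the fit recovers T1 exactly; the abstract's point is that this trick requires one common TR, which is the constraint the proposed Taylor-expansion method removes.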
Geng, Xiangfei; Xu, Junhai; Liu, Baolin; Shi, Yonggang
2018-01-01
Major depressive disorder (MDD) is a mental disorder characterized by at least two weeks of low mood that is present across most situations. Diagnosis of MDD using resting-state functional magnetic resonance imaging (fMRI) data faces many challenges due to high dimensionality, small samples, noise and individual variability. To the best of our knowledge, no studies have aimed at classification between MDD patients and healthy controls using both effective connectivity and functional connectivity measures. In this study, we performed a data-driven classification analysis using whole-brain connectivity measures, which included functional connectivity from two brain templates and effective connectivity measures computed for the default mode network (DMN), dorsal attention network (DAN), frontal-parietal network (FPN), and salience network (SN). Effective connectivity measures were extracted using spectral Dynamic Causal Modeling (spDCM) and transformed into a vectorial feature space. Linear Support Vector Machine (linear SVM), non-linear SVM, k-Nearest Neighbor (KNN), and Logistic Regression (LR) were used as classifiers to identify the differences between MDD patients and healthy controls. Our results showed that the highest accuracy achieved 91.67% (p < 0.0001) when using 19 effective connections and 89.36% when using 6,650 functional connections. The functional connections with high discriminative power were mainly located within or across the whole-brain resting-state networks, while the discriminative effective connections were located in several specific regions, such as the posterior cingulate cortex (PCC), ventromedial prefrontal cortex (vmPFC), dorsal anterior cingulate cortex (dACC), and inferior parietal lobes (IPL). To further compare the discriminative power of functional and effective connections, a classification analysis using only the functional connections from those four networks was conducted; the highest accuracy achieved 78.33% (p < 0.0001).
Our study demonstrated that effective connectivity measures might play a more important role than functional connectivity in exploring the alterations between patients and healthy controls, and afford better mechanistic interpretability. Moreover, our results showed a diagnostic potential of effective connectivity for the diagnosis of MDD patients with high accuracies, allowing for earlier prevention or intervention. PMID:29515348
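Of the classifiers listed, k-nearest neighbors is simple enough to sketch end-to-end on synthetic "connectivity" vectors; the data, feature count and group mean shift below are invented and merely stand in for the study's fMRI-derived features.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-nearest-neighbour classifier (Euclidean distance, majority vote),
    one of the classifier families compared in the study."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Synthetic "connectivity" feature vectors for two groups (illustrative only:
# 20 features standing in for edge strengths, with a mean shift between groups).
rng = np.random.default_rng(1)
n, p = 40, 20
X0 = rng.normal(0.0, 1.0, (n, p))        # controls
X1 = rng.normal(0.8, 1.0, (n, p))        # patients, shifted connectivity
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Leave-one-out cross-validated accuracy
correct = 0
for i in range(len(y)):
    mask = np.ones(len(y), bool)
    mask[i] = False
    correct += int(knn_predict(X[mask], y[mask], X[i:i + 1], k=5)[0] == y[i])
accuracy = correct / len(y)
```

The same cross-validated-accuracy protocol applies unchanged to a linear SVM or logistic regression; only `knn_predict` would be swapped out.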
Thomas, Christoph; Brodoefel, Harald; Tsiflikas, Ilias; Bruckner, Friederike; Reimann, Anja; Ketelsen, Dominik; Drosch, Tanja; Claussen, Claus D; Kopp, Andreas; Heuschmid, Martin; Burgstahler, Christof
2010-02-01
To prospectively evaluate the influence of the clinical pretest probability, assessed by the Morise score, on image quality and diagnostic accuracy in coronary dual-source computed tomography angiography (DSCTA). In 61 patients, DSCTA and invasive coronary angiography were performed. Subjective image quality and the accuracy of DSCTA for stenosis detection (>50%), with invasive coronary angiography as the gold standard, were evaluated. The influence of pretest probability on image quality and accuracy was assessed by logistic regression and chi-square testing. Correlations of image quality and accuracy with the Morise score were determined using linear regression. Thirty-eight patients were categorized into the high, 21 into the intermediate, and 2 into the low probability group. Accuracies for the detection of significant stenoses were 0.94, 0.97, and 1.00, respectively. Logistic regressions and chi-square tests showed statistically significant correlations between Morise score and image quality (P < .0001 and P < .001) and accuracy (P = .0049 and P = .027). Linear regression revealed a cutoff Morise score of 16 for good image quality, and a cutoff for barely diagnostic image quality beyond the upper end of the Morise scale. Pretest probability is a weak predictor of image quality and diagnostic accuracy in coronary DSCTA. Sufficient image quality for diagnostic images can be reached at all pretest probabilities. Therefore, coronary DSCTA might also be suitable for patients with a high pretest probability. Copyright 2010 AUR. Published by Elsevier Inc. All rights reserved.
Evaluation of Cobas Integra 800 under simulated routine conditions in six laboratories.
Redondo, Francisco L; Bermudez, Pilar; Cocco, Claudio; Colella, Francesca; Graziani, Maria Stella; Fiehn, Walter; Hierla, Thomas; Lemoël, Gisèle; Belliard, AnneMarie; Manene, Dieudonne; Meziani, Mourad; Liebel, Maryann; McQueen, Matthew J; Stockmann, Wolfgang
2003-03-01
The new selective access analyser Cobas Integra 800 from Roche Diagnostics was evaluated in an international multicentre study at six sites. Routine simulation experiments showed good performance and full functionality of the instrument, and provocation of anomalous situations generated no problems. The new features on the Cobas Integra 800, namely clot detection and dispensing control, worked according to specifications. The imprecision of the Cobas Integra 800 fulfilled the proposed quality specifications regarding imprecision of analytical systems for clinical chemistry, with few exceptions. Claims for linearity, drift, and carry-over were all within the defined specifications, except for urea linearity. Interference exists in some cases, as could be expected from the chemistries applied. Accuracy met the proposed quality specifications, except in some special cases. Method comparisons with the Cobas Integra 700 showed good agreement; comparisons with other analysis systems yielded explicable deviations in several cases. Practicability of the Cobas Integra 800 met or exceeded the requirements for more than 95% of all attributes rated. The strong points of the new analysis system were reagent handling, long stability of calibration curves, the high number of tests on board, compatibility of the sample carrier with other Roche systems, and the sample integrity check for more reliable analytical results. The improved workflow offered by the 5-position rack and STAT handling on the Cobas Integra 800 makes the instrument attractive for further consolidation in the medium-sized laboratory, for dedicated use for special analytes, and/or as back-up in the large routine laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno-Ramirez, L. M.; Franco, V.; Conde, A.
Availability of a restricted heat capacity data range has a clear influence on the accuracy of calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data starting from a selected temperature point down to zero kelvin deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and Gd5Si2Ge2 compound measured from near zero temperatures.
Calibrators measurement system for headlamp tester of motor vehicle base on machine vision
NASA Astrophysics Data System (ADS)
Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe
2014-09-01
With the development of photoelectric detection technology, machine vision is seeing wider use in industry. This paper mainly introduces a calibrator measurement system for motor-vehicle headlamp testers, the core of which is a CCD image sampling system. It presents the measuring principle for the optical axial angle and light intensity, and establishes the linear relationship between the calibrator's facula (light-spot) illumination and the image-plane illumination. The paper provides an important specification of the CCD imaging system. Image processing in MATLAB yields the flare's geometric midpoint and average gray level. By fitting the data with the method of least squares, a regression equation relating illumination and gray level is obtained. The error of the experimental results of the measurement system is analyzed, and the combined standard uncertainty and the error sources for the optical axial angle are given. The average measurement accuracy of the optical axial angle is within 40''. The whole testing process uses digital means instead of manual judgment, giving higher accuracy and better repeatability than other measuring systems.
Ogawa, Takeshi; Hirayama, Jun-Ichiro; Gupta, Pankaj; Moriya, Hiroki; Yamaguchi, Shumpei; Ishikawa, Akihiro; Inoue, Yoshihiro; Kawanabe, Motoaki; Ishii, Shin
2015-08-01
Smart houses for elderly or physically challenged people need a method to understand residents' intentions during their daily-living behaviors. To explore a new possibility, we developed a novel brain-machine interface (BMI) system integrated with an experimental smart house, based on a prototype of a wearable near-infrared spectroscopy (NIRS) device, and verified the system in the specific task of controlling the house's equipment with the BMI. We recorded NIRS signals of three participants during typical daily-living actions (DLAs), and classified them by linear support vector machine. In our off-line analysis, four DLAs were classified at about 70% mean accuracy, significantly above the chance level of 25%, in every participant. In an online demonstration in the real smart house, one participant successfully controlled three target appliances by BMI at 81.3% accuracy. Thus we successfully demonstrated the feasibility of using NIRS-BMI in real smart houses, which may enhance new assistive smart-home technologies.
A hybrid approach EMD-HW for short-term forecasting of daily stock market time series data
NASA Astrophysics Data System (ADS)
Awajan, Ahmad Mohd; Ismail, Mohd Tahir
2017-08-01
Recently, forecasting time series has attracted considerable attention in the field of analyzing financial time series data, specifically within the stock market index. Stock market forecasting is a challenging area of financial time-series forecasting. In this study, a hybrid methodology combining Empirical Mode Decomposition with the Holt-Winters method (EMD-HW) is used to improve forecasting performance on financial time series. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy and offers a new forecasting method for time series. Daily stock market time series data from 11 countries are used to show the forecasting performance of the proposed EMD-HW. Based on three forecast accuracy measures, the results indicate that the forecasting performance of EMD-HW is superior to the traditional Holt-Winters forecasting method.
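The hybrid idea above, forecast each decomposition component separately and sum the forecasts, can be sketched with Holt's linear-trend smoothing applied to given components. A full EMD sifting loop is out of scope here, so the "IMF" and "residue" are a hand-made oscillation and trend; all series and smoothing constants are illustrative.

```python
import numpy as np

def holt_forecast(x, h, alpha=0.5, beta=0.3):
    """Holt's linear-trend exponential smoothing; returns an h-step forecast.
    (The paper applies Holt-Winters to each EMD component; the EMD sifting
    itself is not reproduced here, so components are taken as given.)"""
    level, trend = x[0], x[1] - x[0]
    for v in x[1:]:
        prev = level
        level = alpha * v + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + np.arange(1, h + 1) * trend

# Illustrative decomposition of a series into a smooth trend and an
# oscillatory component (stand-ins for EMD's residue and an IMF).
t = np.arange(200)
trend_part = 0.05 * t
osc_part = np.sin(2 * np.pi * t / 20.0)
series = trend_part + osc_part

# Hybrid EMD-HW idea: forecast each component separately, then sum.
h = 10
forecast = holt_forecast(trend_part, h) + holt_forecast(osc_part, h)
truth = 0.05 * np.arange(200, 210) + np.sin(2 * np.pi * np.arange(200, 210) / 20.0)
rmse = np.sqrt(np.mean((forecast - truth) ** 2))
```

On the purely linear component Holt's recursion continues the trend exactly, which is why decomposing first helps: each component is closer to something the smoother can represent than the raw series is.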
El-Didamony, Akram M; Gouda, Ayman A
2011-01-01
A new highly sensitive and specific spectrofluorimetric method has been developed to determine a sympathomimetic drug pseudoephedrine hydrochloride. The present method was based on derivatization with 4-chloro-7-nitrobenzofurazan in phosphate buffer at pH 7.8 to produce a highly fluorescent product which was measured at 532 nm (excitation at 475 nm). Under the optimized conditions a linear relationship and good correlation was found between the fluorescence intensity and pseudoephedrine hydrochloride concentration in the range of 0.5-5 µg mL(-1). The proposed method was successfully applied to the assay of pseudoephedrine hydrochloride in commercial pharmaceutical formulations with good accuracy and precision and without interferences from common additives. Statistical comparison of the results with a well-established method showed excellent agreement and proved that there was no significant difference in the accuracy and precision. The stoichiometry of the reaction was determined and the reaction pathway was postulated. Copyright © 2010 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Abdel Ghany, Maha F.; Hussein, Lobna A.; Magdy, Nancy; Yamani, Hend Z.
2016-03-01
Three spectrophotometric methods have been developed and validated for the determination of indacaterol (IND) and glycopyrronium (GLY) in their binary mixtures and in a novel pharmaceutical dosage form. The proposed methods are considered to be the first to determine the investigated drugs simultaneously. The developed methods are based on different signal-processing techniques applied to ratio spectra, namely Numerical Differentiation (ND), Savitzky-Golay (SG) and Fourier Transform (FT). The developed methods showed linearity over the concentration ranges 1-30 and 10-35 μg/mL for IND and GLY, respectively. The accuracies, calculated as percentage recoveries, were in the range of 99.00%-100.49% with low RSD values (<1.5%), demonstrating the excellent accuracy of the proposed methods. The developed methods were proved to be specific, sensitive and precise for quality control of the investigated drugs in their pharmaceutical dosage form without the need for any separation process.
Xia, Hui; Zhang, Wen; Li, Yingjie; Yu, Changhai
2015-05-01
The aim of the present study was to investigate the concentration of cisplatin in different layers of the visceral pleura in rats, following drug administration. In this study, a sensitive and specific liquid chromatography method coupled with electrospray ionization-tandem mass spectrometry was established to investigate the disposition of cisplatin in different layers of the visceral pleura in rats. Methodological data, including specificity, linearity, accuracy, recovery, precision and lower limits of quantification, confirmed that this novel method may be used to efficiently quantify the cisplatin concentrations in visceral pleura of rats following administration of the drug. Furthermore, the results demonstrated that the desired drug concentration was not achieved in the outer or inner elastic layers of the visceral pleura following injection with cisplatin through various administration methods.
Accuracy of CT-based attenuation correction in PET/CT bone imaging
NASA Astrophysics Data System (ADS)
Abella, Monica; Alessio, Adam M.; Mankoff, David A.; MacDonald, Lawrence R.; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E.
2012-05-01
We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a 68Ga/68Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9 ± 0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important.
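The tri-linear scaling referred to above generalizes the standard bilinear HU-to-μ(511 keV) mapping, which can be sketched as follows. The water value is the usual textbook figure and the bone slope is a representative number, not the calibrated, kVp-dependent values from this study.

```python
import numpy as np

MU_WATER_511 = 0.096   # cm^-1, linear attenuation of water at 511 keV

def ct_to_mu511(hu, bone_slope=5.5e-5):
    """Bilinear CT-based attenuation scaling (tri-linear variants add a third
    segment; the slopes here are representative, not the paper's calibration).
    Below 0 HU: air-water mixture; above 0 HU: water-bone mixture."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (hu + 1000.0) / 1000.0   # air-water segment
    bone = MU_WATER_511 + bone_slope * hu          # water-bone segment
    return np.where(hu <= 0.0, soft, bone)

mu_air = ct_to_mu511(-1000.0)    # ~0: air attenuates essentially nothing
mu_water = ct_to_mu511(0.0)      # 0.096 cm^-1 by construction
mu_bone = ct_to_mu511(1000.0)    # compact-bone-like value
```

The study's finding, that dense bone is underestimated more at low kVp, corresponds to the bone-segment slope being miscalibrated for the actual beam energy; the soft-tissue segment is much less sensitive to kVp.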
NASA Astrophysics Data System (ADS)
Sung, Changhyuck; Lim, Seokjae; Kim, Hyungjun; Kim, Taesu; Moon, Kibong; Song, Jeonghwan; Kim, Jae-Joon; Hwang, Hyunsang
2018-03-01
To improve the classification accuracy of an image data set (CIFAR-10) by using analog input voltage, synapse devices with excellent conductance linearity (CL) and multi-level cell (MLC) characteristics are required. We analyze the CL and MLC characteristics of TaOx-based filamentary resistive random access memory (RRAM) to implement the synapse device in neural network hardware. Our findings show that the number of oxygen vacancies in the filament constriction region of the RRAM directly controls the CL and MLC characteristics. By adopting a Ta electrode (instead of Ti) and the hot-forming step, we could form a dense conductive filament. As a result, a wide range of conductance levels with CL is achieved and significantly improved image classification accuracy is confirmed.
A minimax technique for time-domain design of preset digital equalizers using linear programming
NASA Technical Reports Server (NTRS)
Vaughn, G. L.; Houts, R. C.
1975-01-01
A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
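The minimax criterion described above translates directly into a linear program: bound the absolute error at every output sample by a scalar t and minimize t. The sketch below makes simplifying assumptions (a generic desired waveform rather than the paper's raised-cosine target, and a plain transversal structure), using scipy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_equalizer(h, d, n_taps):
    """Design FIR taps c minimising max_n |(h * c)[n] - d[n]| via an LP.

    h: channel impulse response; d: desired overall response of length
    len(h) + n_taps - 1. Illustrative Chebyshev formulation only.
    """
    m = len(h) + n_taps - 1
    assert len(d) == m
    # Convolution matrix: (A @ c)[n] = sum_k h[n - k] * c[k]
    A = np.zeros((m, n_taps))
    for k in range(n_taps):
        A[k:k + len(h), k] = h
    # Variables x = [c_0 .. c_{N-1}, t]; minimise t s.t. |A c - d| <= t
    c_obj = np.r_[np.zeros(n_taps), 1.0]
    A_ub = np.block([[A, -np.ones((m, 1))],
                     [-A, -np.ones((m, 1))]])
    b_ub = np.r_[d, -d]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_taps + [(0, None)])
    return res.x[:n_taps], res.x[-1]
```

For a distortionless channel the LP recovers the desired response exactly (t = 0); for a dispersive channel the returned t is the residual peak ISI, the quantity the paper's designs trade off against transmitted energy.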
Analysis of Slope Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2005-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which is linearity preserving for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
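In one dimension, the linearity-preservation property at issue can be illustrated with a minmod limiter whose one-sided slopes use the true non-uniform spacings: for exactly linear data both one-sided slopes equal the true slope, so the limiter returns it unchanged on any stretching. A minimal 1-D sketch (the paper's multidimensional and directional formulations are not reproduced):

```python
import numpy as np

def minmod(a, b):
    # Zero when the one-sided slopes disagree in sign (local extremum),
    # otherwise the smaller-magnitude slope.
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

def limited_slopes(x, u):
    """Minmod-limited slopes at interior nodes of a non-uniform 1-D grid.

    One-sided slopes are computed with the actual cell spacings, so the
    limiter preserves linear data regardless of mesh stretching.
    """
    sL = (u[1:-1] - u[:-2]) / (x[1:-1] - x[:-2])
    sR = (u[2:] - u[1:-1]) / (x[2:] - x[1:-1])
    return minmod(sL, sR)
```

On a strongly stretched grid with u = 3x + 2, both one-sided slopes equal 3 everywhere, so the limited slope is exact; a limiter built on index-based (unit-spacing) differences would not have this property.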
Parvari, R; Pecht, I; Soreq, H
1983-09-01
A highly sensitive microfluorometric assay for cholinesterases has been developed. Enzymatic activity is measured by monitoring the thiocholine produced by specific hydrolysis of acetylthiocholine. This is carried out by reacting the thiocholine formed with the fluorogenic compound N-(4-(7-diethylamino-4-methylcoumarin-3-yl)phenyl)maleimide to yield an intensely fluorescent product. The assay is linear over a range extending from a few picomoles to nanomoles of thiocholine. The specificity and accuracy of this microfluorometric assay were examined using microgram quantities of rat brain tissue as a source for cholinesterases. The specific activities and the Km values determined by this new method for both cholinesterase activities present in the brain (acetylcholine hydrolase, EC 3.1.1.7, and "nonspecific" cholinesterase-acylcholine acylhydrolase, EC 3.1.1.8) were identical to those reported earlier using the less sensitive spectrophotometric and radiometric methods. The background emission caused by nonenzymatic hydrolysis of the substrate is relatively low, and does not exceed background values encountered in other methods. The assay may be used for monitoring the kinetics of enzymatic activities in microscale reaction mixtures, providing a linear determination of the thiocholine produced over a period of at least 30 h at room temperature. The method can also be adapted for use in other enzymatic assays where reagents containing thiol groups can be produced or consumed.
Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C
2014-06-01
Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection.
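The idea behind W-BLUP, treating known functional markers more appropriately than anonymous SNPs, can be sketched as ridge regression with per-marker shrinkage weights, where a putative functional marker receives a smaller penalty. This parameterisation is an illustrative assumption, not the authors' exact estimator:

```python
import numpy as np

def weighted_rr_blup(X, y, lam, weights=None):
    """Ridge-regression BLUP of marker effects with per-marker penalties.

    X: n x p marker matrix; y: phenotypes; lam: base ridge parameter;
    weights: per-marker multipliers on lam (1.0 everywhere = standard
    RR-BLUP; a small weight on a known functional marker shrinks its
    effect less). Hypothetical weighting scheme for illustration.
    """
    n, p = X.shape
    w = np.ones(p) if weights is None else np.asarray(weights, float)
    beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
    return beta
```

With a strong functional marker, down-weighting its penalty moves the estimated effect closer to the true value than uniform shrinkage does, which is the mechanism by which W-BLUP recovers the accuracy that plain RR-BLUP loses on traits like heading time.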
Azougagh, M; Elkarbane, M; Bakhous, K; Issmaili, S; Skalli, A; Iben Moussad, S; Benaji, B
2016-09-01
An innovative, simple, fast, precise and accurate ultra-high performance liquid chromatography (UPLC) method was developed for the determination of diclofenac (Dic) along with its impurities, including the new dimer impurity, in various pharmaceutical dosage forms. An Acquity HSS T3 (C18, 100×2.1mm, 1.8μm) column in gradient mode was used with a mobile phase comprising phosphoric acid (pH 2.3) and methanol. The flow rate and the injection volume were set at 0.35ml·min(-1) and 1μl, respectively, and UV detection was carried out at 254nm using a photodiode array detector. Dic was subjected to stress conditions of acid, base, hydrolytic, thermal, oxidative and photolytic degradation. The newly developed method was successfully validated in accordance with the International Conference on Harmonization (ICH) guidelines with respect to specificity, limit of detection, limit of quantitation, precision, linearity, accuracy and robustness. The degradation products were well resolved from the main peak and its seven impurities, proving the specificity of the method. The method showed good linearity with consistent recoveries for Dic content and its impurities. The relative standard deviation obtained in the repeatability and intermediate precision experiments was less than 3%, and the LOQ was less than 0.5μg·ml(-1) for all compounds. The newly proposed method was found to be accurate, precise, specific, linear and robust. In addition, the method was successfully applied to the assay determination of Dic and its impurities in several pharmaceutical dosage forms. Copyright © 2016 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...
Analytical and Clinical Performance of Blood Glucose Monitors
Boren, Suzanne Austin; Clarke, William L.
2010-01-01
Background The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
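The cross-validated Euclidean distance recommended above pairs mean-pattern differences estimated on independent folds, which makes its expectation zero when the two conditions do not truly differ. A minimal sketch (multivariate noise normalisation, which the authors also advise, is omitted here):

```python
import numpy as np

def cv_euclidean(A, B, n_folds=5):
    """Cross-validated squared Euclidean distance between two conditions.

    A, B: trials x channels arrays for conditions A and B. Each fold's
    distance is the inner product of the mean-pattern difference from
    the training trials with that from the held-out trials, so noise
    does not bias the estimate upward.
    """
    idx_a = np.array_split(np.arange(len(A)), n_folds)
    idx_b = np.array_split(np.arange(len(B)), n_folds)
    d = []
    for k in range(n_folds):
        test_a, test_b = idx_a[k], idx_b[k]
        train_a = np.setdiff1d(np.arange(len(A)), test_a)
        train_b = np.setdiff1d(np.arange(len(B)), test_b)
        diff_train = A[train_a].mean(0) - B[train_b].mean(0)
        diff_test = A[test_a].mean(0) - B[test_b].mean(0)
        d.append(diff_train @ diff_test)
    return float(np.mean(d))
```

Unlike the plain (non-cross-validated) squared distance, which is strictly positive even for identical conditions, this estimator fluctuates around zero under the null, which is what makes the resulting representational dissimilarity matrices unbiased.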
2011-01-01
Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data were gathered during the experiment. Additional data were gathered one hour and twenty-four hours following the completion of the first data gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on accuracy of models. Acquired SEMG signals were filtered, rectified, normalized and then fed to models for training. Results It was shown that mean adjusted coefficient of determination (Ra2) values decrease between 20%-35% for different models after one hour while altering arm posture decreased mean Ra2 values between 64% to 74% for different models. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times. PMID:21943179
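The OLS model the study found to combine high estimation accuracy with very short training times can be sketched as a plain least-squares fit from preprocessed SEMG amplitudes to measured torque; the exact feature definitions here are illustrative assumptions:

```python
import numpy as np

def train_torque_model(emg_features, torque):
    """Fit an ordinary-least-squares map from SEMG features to joint torque.

    emg_features: samples x channels matrix of filtered, rectified and
    normalised SEMG amplitudes (e.g. 8 forearm muscles); torque: measured
    isometric wrist torque. Returns [bias, weights...].
    """
    X = np.column_stack([np.ones(len(emg_features)), emg_features])
    coef, *_ = np.linalg.lstsq(X, torque, rcond=None)
    return coef

def predict_torque(coef, emg_features):
    X = np.column_stack([np.ones(len(emg_features)), emg_features])
    return X @ coef
```

Because the fit is a single closed-form solve, retraining after electrode displacement or a posture change, which the results show is crucial, costs essentially nothing, unlike iterative models.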
Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)
NASA Astrophysics Data System (ADS)
Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.
2018-05-01
A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, i.e., the law of lumped Markov chains, depends linearly on the minimum degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined "linear complexity".
Kaimakamis, Evangelos; Tsara, Venetia; Bratsas, Charalambos; Sichletidis, Lazaros; Karvounis, Charalambos; Maglaveras, Nikolaos
2016-01-01
Obstructive Sleep Apnea (OSA) is a common sleep disorder requiring time- and money-consuming polysomnography for diagnosis. Alternative methods for initial evaluation are sought. Our aim was the prediction of the Apnea-Hypopnea Index (AHI) in patients potentially suffering from OSA based on nonlinear analysis of respiratory biosignals during sleep, a method that is related to the pathophysiology of the disorder. A total of 135 patients referred to a Sleep Unit underwent full polysomnography. Three nonlinear indices (Largest Lyapunov Exponent, Detrended Fluctuation Analysis and Approximate Entropy) extracted from two biosignals (airflow from a nasal cannula, thoracic movement) and one linear index derived from oxygen saturation provided input to a data mining application with contemporary classification algorithms for the creation of predictive models for AHI. A linear regression model presented a correlation coefficient of 0.77 in predicting AHI. With a cutoff value of AHI = 8, the sensitivity and specificity were 93% and 71.4% in discrimination between patients and normal subjects. The decision tree for the discrimination between patients and normal subjects had sensitivity and specificity of 91% and 60%, respectively. Certain obtained nonlinear values correlated significantly with commonly accepted physiological parameters of people suffering from OSA. We developed a predictive model for the presence/severity of OSA using a simple linear equation and additional decision trees with nonlinear features extracted from 3 respiratory recordings. The accuracy of the methodology is high and the findings provide insight into the underlying pathophysiology of the syndrome. Reliable predictions of OSA are possible using linear and nonlinear indices from only 3 respiratory signals during sleep. The proposed models could lead to a better study of the pathophysiology of OSA and facilitate initial evaluation and follow-up of patients with suspected OSA using a practical, low-cost methodology.
ClinicalTrials.gov NCT01161381.
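Of the three nonlinear indices used above, Approximate Entropy is the simplest to sketch. Below is the textbook definition with common default parameters (m = 2, r = 0.2 × SD), which are not necessarily the choices made in the study:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate Entropy (ApEn) of a 1-D signal.

    Lower values indicate a more regular (more predictable) signal.
    m: embedding dimension; r: tolerance (default 0.2 * standard
    deviation, a conventional choice).
    """
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of m-length templates
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)  # self-matches keep c > 0
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

A periodic airflow trace yields a low ApEn, while the fragmented, erratic breathing of an OSA patient yields a higher one, which is why such indices carry diagnostic information.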
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.
The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.
Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y
2016-04-01
To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained and digital models were extracted from CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on plaster models and the corresponding digital models. The measurements included arch length and width at different sites. Paired t test and Bland-Altman analysis were used to evaluate the accuracy of measurements on digital models compared to the plaster models. Also, intraclass correlation coefficients (ICCs) were used to evaluate the reproducibility of the measurements in order to assess the intraobserver reliability. The statistical analysis showed significant differences on 5 out of 14 variables, and the mean differences ranged from -0.48 to 0.51 mm. The Bland-Altman analysis revealed that the mean difference between variables was (0.14 ± 0.56) and (0.05 ± 0.96) mm and limits of agreement between the two methods ranged from -1.2 to 0.96 and from -1.8 to 1.9 mm in the maxilla and the mandible, respectively. The intraobserver reliability values were determined for all 14 variables of two types of models separately. The mean ICC value for the plaster models was 0.984 (0.924-0.999), while it was 0.946 for the CBCT models (range from 0.850 to 0.985). Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
Jedenmalm, Anneli; Noz, Marilyn E; Olivecrona, Henrik; Olivecrona, Lotta; Stark, Andre
2008-04-01
Polyethylene wear is an important cause of aseptic loosening in hip arthroplasty. Detection of significant wear usually happens late, since available diagnostic techniques are either not sensitive enough or too complicated and expensive for routine use. This study evaluates a new approach for measurement of linear wear of metal-backed acetabular cups using CT as the intended clinically feasible method. 8 retrieved uncemented metal-backed acetabular cups were scanned twice ex vivo using CT. The linear penetration depth of the femoral head into the cup was measured in the CT volumes using dedicated software. Landmark points were placed on the CT images of cup and head, and also on a reference plane in order to calculate the wear vector magnitude and angle to one of the axes. A coordinate-measuring machine was used to test the accuracy of the proposed CT method. For this purpose, the head diameters were also measured by both methods. Accuracy of the CT method was 0.6 mm for linear wear measurements and 27 degrees for the wear vector angle. No systematic difference was found between CT scans. This study on explanted acetabular cups shows that CT is capable of reliable measurement of linear wear in acetabular cups at a clinically relevant level of accuracy. It was also possible to use the method for assessment of direction of wear.
Linear combination methods to improve diagnostic/prognostic accuracy on future observations
Kang, Le; Liu, Aiyi; Tian, Lili
2014-01-01
Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use leave-one-pair-out cross-validation method, instead of re-substitution method, which is overoptimistic and hence might lead to wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
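The evaluation target discussed above, the area under the ROC curve achieved by a linear combination of biomarkers, can be computed with the Mann-Whitney estimator. A minimal sketch (the nonparametric stepwise search and the leave-one-pair-out cross-validation loop are omitted):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    s_pos = np.asarray(scores_pos)[:, None]
    s_neg = np.asarray(scores_neg)[None, :]
    return float(((s_pos > s_neg) + 0.5 * (s_pos == s_neg)).mean())

def combination_auc(X_pos, X_neg, coef):
    """AUC achieved by the linear biomarker combination x -> coef @ x.

    X_pos, X_neg: cases x biomarkers matrices for the diseased and
    non-diseased groups; coef: combination weights.
    """
    return empirical_auc(X_pos @ coef, X_neg @ coef)
```

Evaluating this on resubstituted data is the overoptimistic practice the authors warn against; in their framework each candidate coefficient vector would instead be scored on held-out case/control pairs.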
Variations of archived static-weight data and WIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; Gillmann, R.; Kent, P.M.
1998-12-01
Using seven-card archived static-weight and weigh-in-motion (WIM) truck data received by FHWA for 1966--1992, the authors examine the fluctuations of four fiducial weight measures reported at weight sites in the 50 states. The reduced 172 MB Class 9 (332000) database was prepared and ordered from 2 CD-ROMs with duplicate records removed. Front-axle weight and gross-vehicle weight (GVW) are combined conceptually by determining the front-axle weight in four-quartile GVW categories. The four categories of front-axle weight from the four GVW categories are combined in four ways. Three linear combinations are with fixed-coefficient fiducials and one is the optimal linear combination producing the smallest standard deviation to mean value ratio. The best combination gives coefficients of variation of 2--3% for samples of 100 trucks, below the expected accuracy of single-event WIM measurements. Time tracking of data shows some high-variation sites have seasonal variations, or linear variations over the time-ordered samples. Modeling of these effects is very site specific but provides a way to reduce high variations. Some automatic calibration schemes would erroneously remove such seasonal or linear variations were they static effects.
Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.
Zana, F; Klein, J C
2001-01-01
This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow. Its tree-like geometry makes it a usable feature for registration between images that can be of a different nature. In order to define vessel-like patterns, segmentation is performed with respect to a precise model. We define a vessel as a bright pattern that is piecewise connected and locally linear. Mathematical morphology is very well adapted to this description; however, other patterns fit such a morphological description. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed. They are separated out as they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) linear pattern with Gaussian-like profile improvement; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and its accuracy with respect to noise.
Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong
2014-07-01
Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable when using proteotypic peptide-based quantitative proteomics strategy for the differences in peptides measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation on general application of ERLPS in quantitative proteomics under diverse experimental conditions needs to be conducted. In this study, the practice workflow of ERLPS was explicitly illustrated; different experimental variables, such as, different MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERPLS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts and linearly correlated response peptides should be selected for each specific experiment. ERLPS was used to proteome samples from yeast to mouse and human, and in quantitative methods from label-free to O18/O16-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lee, Ga-Young; Kim, Jeonghun; Kim, Ju Han; Kim, Kiwoong; Seong, Joon-Kyung
2014-01-01
Mobile healthcare applications are a growing trend. At the same time, the prevalence of dementia in modern society is steadily increasing. Among degenerative brain diseases that cause dementia, Alzheimer disease (AD) is the most common. The purpose of this study was to identify AD patients using magnetic resonance imaging in the mobile environment. We propose an incremental classification for mobile healthcare systems. Our classification method is based on incremental learning for AD diagnosis and AD prediction using cortical thickness data and hippocampal shape. We constructed a classifier based on principal component analysis and linear discriminant analysis. We performed initial learning and mobile subject classification. Initial learning is the group-learning step performed on our server. Our smartphone agent implements the mobile classification and shows various results. With use of cortical thickness data analysis alone, the discrimination accuracy was 87.33% (sensitivity 96.49% and specificity 64.33%). When cortical thickness data and hippocampal shape were analyzed together, the achieved accuracy was 87.52% (sensitivity 96.79% and specificity 63.24%). In this paper, we presented a classification method based on online learning for AD diagnosis by employing both cortical thickness data and hippocampal shape analysis data. Our method was implemented on smartphone devices and discriminated AD patients from the normal group.
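The PCA-plus-LDA pipeline described above can be sketched for a generic two-class problem: PCA reduces the feature vectors (e.g., cortical thickness values), and a Fisher discriminant is fit in the reduced space. This is a minimal batch sketch; the incremental/online update performed on the server is not reproduced:

```python
import numpy as np

def fit_pca_lda(X, y, n_components=2):
    """PCA for dimensionality reduction, then two-class Fisher LDA.

    X: subjects x features matrix; y: labels in {0, 1}.
    Returns (data mean, PCA projection, LDA weights, decision threshold).
    """
    mu = X.mean(0)
    # PCA via SVD of the centred data
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_components].T
    Z = (X - mu) @ W
    # Fisher discriminant in PCA space: w = Sw^-1 (m1 - m0)
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
    w = np.linalg.solve(np.atleast_2d(Sw), m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return mu, W, w, thresh

def predict(model, X):
    mu, W, w, thresh = model
    return (((X - mu) @ W) @ w > thresh).astype(int)
```

On the phone side only the small tuple (mu, W, w, thresh) needs to be stored, which is what makes this pipeline attractive for a mobile agent.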
Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.
Maryn, Youri; Zarowski, Andrzej
2015-11-01
Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
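The linear conversion step, mapping each system's uncalibrated intensity readings onto sound-level-meter data, amounts to fitting one gain/offset pair per microphone-and-computer system. A minimal sketch:

```python
import numpy as np

def fit_intensity_calibration(recorded_db, meter_db):
    """Least-squares linear map from a recording system's uncalibrated
    intensity readings (dB) to sound-level-meter dB SPL.

    recorded_db / meter_db: paired readings at the calibration levels
    (e.g., audiometric speech noise at 5 levels, as in the study).
    """
    slope, offset = np.polyfit(recorded_db, meter_db, 1)
    return slope, offset

def calibrate(recorded_db, slope, offset):
    return slope * np.asarray(recorded_db) + offset
```

Each of the 29 systems in the study would get its own (slope, offset) pair; once fitted, all subsequent intensity measures from that system can be converted to meter-referenced values.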
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
NASA Astrophysics Data System (ADS)
Curtis, Tyler E.; Roeder, Ryan K.
2017-03-01
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
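A minimal sketch of the decomposition step, assuming an invented 5-bin by 3-material basis matrix: with a flat prior, the maximum a posteriori estimator reduces to ordinary least squares, which is what is shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical basis matrix M (5 energy bins x 3 basis materials): column j is
# the per-concentration signal of material j in each bin, as would be fit by
# multiple linear regression on a calibration phantom. Values are invented.
M = np.array([[1.0, 0.2, 0.4],
              [0.8, 0.3, 0.9],
              [0.5, 1.1, 0.3],   # contrast-agent k-edge boosts this bin
              [0.4, 0.9, 0.2],
              [0.3, 0.5, 0.1]])

true_conc = np.array([2.0, 5.0, 1.0])                     # e.g. mg/mL
counts = M @ true_conc + rng.normal(scale=0.005, size=5)  # noisy bin data

# With a flat prior, the MAP estimate is the least-squares solution.
est_conc, *_ = np.linalg.lstsq(M, counts, rcond=None)
rmse = np.sqrt(np.mean((est_conc - true_conc) ** 2))
```

The sensitivity of `est_conc` to errors in `M` is what makes the calibration range matter: a wider concentration range constrains the regression that produces `M` more tightly.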
Navigation Strategies for Primitive Solar System Body Rendezvous and Proximity Operations
NASA Technical Reports Server (NTRS)
Getzandanner, Kenneth M.
2011-01-01
A wealth of scientific knowledge regarding the composition and evolution of the solar system can be gained through reconnaissance missions to primitive solar system bodies. This paper presents analysis of a baseline navigation strategy designed to address the unique challenges of primitive body navigation. Linear covariance and Monte Carlo error analysis was performed on a baseline navigation strategy using simulated data from a design reference mission (DRM). The objective of the DRM is to approach, rendezvous, and maintain a stable orbit about the near-Earth asteroid 4660 Nereus. The outlined navigation strategy and resulting analyses, however, are not necessarily limited to this specific target asteroid as they may be applicable to a diverse range of mission scenarios. The baseline navigation strategy included simulated data from Deep Space Network (DSN) radiometric tracking and optical image processing (OpNav). Results from the linear covariance and Monte Carlo analyses suggest the DRM navigation strategy is sufficient to approach and perform proximity operations in the vicinity of the target asteroid with meter-level accuracy.
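The relationship between linear covariance and Monte Carlo error analysis can be illustrated on a toy linear estimation problem; the measurement geometry and noise level below are arbitrary stand-ins, not mission values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linearized estimation problem standing in for orbit determination:
# measurements z = H @ x + noise, with H and sigma chosen arbitrarily here.
H = np.array([[1.0, 0.5],
              [0.7, 1.2],
              [0.3, 0.9],
              [1.1, 0.2]])
sigma = 0.05                          # 1-sigma measurement noise
x_true = np.array([1.0, -0.5])

# Linear covariance analysis: P = sigma^2 (H^T H)^-1 for a least-squares fit.
P_lin = sigma**2 * np.linalg.inv(H.T @ H)

# Monte Carlo analysis: repeat the fit over many noisy realizations and
# compare the sample covariance of the estimates to the linear prediction.
trials = 20000
Z = (H @ x_true)[:, None] + rng.normal(scale=sigma, size=(4, trials))
estimates, *_ = np.linalg.lstsq(H, Z, rcond=None)    # shape (2, trials)
P_mc = np.cov(estimates)
```

When the dynamics are truly linear and the noise Gaussian, the two covariances agree; in a real mission the Monte Carlo runs expose the effect of nonlinearities that the linear analysis misses.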
CHAM: a fast algorithm of modelling non-linear matter power spectrum in the sCreened HAlo Model
NASA Astrophysics Data System (ADS)
Hu, Bin; Liu, Xue-Wen; Cai, Rong-Gen
2018-05-01
We present a fast numerical screened halo model algorithm (CHAM, which stands for the sCreened HAlo Model) for modelling the non-linear power spectrum of alternative models to Λ cold dark matter. This method has three clear advantages. First, it is not restricted to a specific dark energy/modified gravity model; in principle, all screened scalar-tensor theories can be treated. Second, it makes the fewest possible assumptions in the calculation, so the physical picture is easy to understand. Third, it is fully predictive and does not rely on calibration against N-body simulations. As an example, we show the case of Hu-Sawicki f(R) gravity, for which the typical CPU time with the current parallel PYTHON script (eight threads) is roughly 10 min. The resulting spectra are in good agreement with N-body data to within a few per cent accuracy up to k ˜ 1 h Mpc-1.
Stabilization Approaches for Linear and Nonlinear Reduced Order Models
NASA Astrophysics Data System (ADS)
Rezaian, Elnaz; Wei, Mingjun
2017-11-01
It has been a major concern to establish reduced order models (ROMs) as reliable representatives of the dynamics inherent in high-fidelity simulations while achieving fast computation. In practice, the central issues are the stability and accuracy of ROMs. Given the inviscid nature of the Euler equations, achieving stability becomes more challenging, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms; at the same time, the symmetry inner product is introduced in the generation of ROMs for its known robust behavior for compressible flows. Results have shown a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response with minimum discrepancy between the high-fidelity simulation and the ROM outputs. Promising results are obtained in its application to the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated, and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed via clearly defined standard operation procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD, and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.
2013-01-01
A specific stability-indicating reverse-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and a gradient mobile phase were applied for good resolution of all peaks. Detection was achieved at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20–200, 20–1000, and 25–1000 μg mL−1 for Cinn, Pira, and Hept, respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory-prepared mixtures and in pharmaceutical formulations. PMID:24137049
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1994-01-01
A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two-dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.
Khanmohammadi, Mohammadreza; Bagheri Garmarudi, Amir; Samani, Simin; Ghasemi, Keyvan; Ashuri, Ahmad
2011-06-01
Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) microspectroscopy was applied for detection of colon cancer according to the spectral features of colon tissues. Supervised classification models can be trained to identify the tissue type based on the spectroscopic fingerprint. A total of 78 colon tissues were used in spectroscopy studies. Major spectral differences were observed in the 1,740-900 cm(-1) spectral region. Several chemometric methods such as analysis of variance (ANOVA), cluster analysis (CA), and linear discriminant analysis (LDA) were applied for classification of IR spectra. Utilizing the chemometric techniques, clear and reproducible differences were observed between the spectra of normal and cancer cases, suggesting that infrared microspectroscopy in conjunction with spectral data processing would be useful for diagnostic classification. Using the LDA technique, the spectra were classified into cancer and normal tissue classes with an accuracy of 95.8%. The sensitivity and specificity were 100% and 93.1%, respectively.
Evaluation of airborne lidar data to predict vegetation Presence/Absence
Palaseanu-Lovejoy, M.; Nayegandhi, A.; Brock, J.; Woodman, R.; Wright, C.W.
2009-01-01
This study evaluates the capabilities of the Experimental Advanced Airborne Research Lidar (EAARL) in delineating vegetation assemblages in Jean Lafitte National Park, Louisiana. Five-meter-resolution grids of bare earth, canopy height, canopy-reflection ratio, and height of median energy were derived from EAARL data acquired in September 2006. Ground-truth data were collected along transects to assess species composition, canopy cover, and ground cover. To decide which model is more accurate, comparisons of general linear models and generalized additive models were conducted using conventional evaluation methods (i.e., sensitivity, specificity, Kappa statistics, and area under the curve) and two new indexes, net reclassification improvement and integrated discrimination improvement. Generalized additive models were superior to general linear models in modeling presence/absence in training vegetation categories, but no statistically significant differences between the two models were achieved in determining the classification accuracy at validation locations using conventional evaluation methods, although statistically significant improvements in net reclassifications were observed. © 2009 Coastal Education and Research Foundation.
NASA Astrophysics Data System (ADS)
Chen, Xue; Li, Xiaohui; Yu, Xin; Chen, Deying; Liu, Aichun
2018-01-01
Diagnosis of malignancies is a challenging clinical issue. In this work, we present quick and robust diagnosis and discrimination of lymphoma and multiple myeloma (MM) using laser-induced breakdown spectroscopy (LIBS) conducted on human serum samples, in combination with chemometric methods. The serum samples collected from lymphoma and MM cancer patients and healthy controls were deposited on filter papers and ablated with a pulsed 1064 nm Nd:YAG laser. 24 atomic lines of Ca, Na, K, H, O, and N were selected for malignancy diagnosis. Principal component analysis (PCA), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and k nearest neighbors (kNN) classification were applied to build the malignancy diagnosis and discrimination models. The performances of the models were evaluated using 10-fold cross validation. The discrimination accuracy, confusion matrix and receiver operating characteristic (ROC) curves were obtained. The values of area under the ROC curve (AUC), sensitivity and specificity at the cut-points were determined. The kNN model exhibits the best performances with overall discrimination accuracy of 96.0%. Distinct discrimination between malignancies and healthy controls has been achieved with AUC, sensitivity and specificity for healthy controls all approaching 1. For lymphoma, the best discrimination performance values are AUC = 0.990, sensitivity = 0.970 and specificity = 0.956. For MM, the corresponding values are AUC = 0.986, sensitivity = 0.892 and specificity = 0.994. The results show that the serum-LIBS technique can serve as a quick, less invasive and robust method for diagnosis and discrimination of human malignancies.
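The kNN-with-10-fold-cross-validation scheme can be sketched as follows, with synthetic intensities standing in for the 24 atomic lines; the class separations and resulting accuracy are illustrative only, not derived from the study's data.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in for 24 atomic-line intensities per serum spectrum,
# one block per class: healthy control, lymphoma, MM.
n_per_class = 60
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(n_per_class, 24))
               for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)

# kNN model assessed with stratified 10-fold cross-validation, mirroring
# the validation scheme described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=cv)
mean_accuracy = scores.mean()
```

Per-class sensitivity, specificity, and ROC curves would come from the cross-validated predictions (e.g. `cross_val_predict`) rather than from the pooled accuracy alone.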
NASA Astrophysics Data System (ADS)
Szuflitowska, B.; Orlowski, P.
2017-08-01
The automated detection system consists of two key steps: extraction of features from EEG signals and classification to detect pathological activity. The EEG sequences were analyzed using the Short-Time Fourier Transform, and classification was performed using Linear Discriminant Analysis. The accuracy of the technique was tested on three sets of EEG signals: epilepsy, healthy, and Alzheimer's disease. A classification error below 10% was considered a success. Higher accuracy was obtained for new data of unknown classes than for the testing data. The methodology can be helpful in differentiating epileptic seizures from EEG disturbances in Alzheimer's disease.
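The two-step pipeline (STFT features, then LDA classification) can be sketched as below. The sampling rate, epoch construction, and the 3 Hz rhythm used to mark the pathological class are assumptions chosen for illustration, not details from the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
fs = 128                                   # assumed sampling rate, Hz

def stft_log_power(x, win=32, hop=16):
    """Mean log-power per frequency bin over short-time Fourier frames."""
    frames = np.array([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power.mean(axis=0) + 1e-12)

def make_epoch(pathological):
    """4-s synthetic epoch: noise, plus a 3 Hz rhythm for the pathology class."""
    t = np.arange(4 * fs) / fs
    x = rng.normal(scale=1.0, size=t.size)
    if pathological:
        x += 3.0 * np.sin(2 * np.pi * 3.0 * t)
    return x

labels = np.array([0] * 40 + [1] * 40)
feats = np.array([stft_log_power(make_epoch(c)) for c in labels])

clf = LinearDiscriminantAnalysis().fit(feats[::2], labels[::2])  # train half
error = 1.0 - clf.score(feats[1::2], labels[1::2])               # test half
```

The same feature extractor applied to the three real signal sets (epilepsy, healthy, AD) would feed a multi-class LDA in exactly the same way.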
Linearity-Preserving Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael; Murman, Scott
2004-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. We note that on non-uniform grids the scalar formulation in standard use today sacrifices k-exactness, even for linear solutions, impacting both accuracy and convergence. We rewrite some well-known limiters in a new way to highlight their underlying symmetry, and use this to examine both traditional and novel limiter formulations. A consistent method of handling stretched meshes is developed, as is a new directional formulation in multiple dimensions for irregular grids. Results are presented demonstrating improved accuracy and convergence using a combination of model problems and complex three-dimensional examples.
Online detecting system of roller wear based on laser-linear array CCD technology
NASA Astrophysics Data System (ADS)
Guo, Yuan
2010-10-01
The roller is an important metallurgical tool in the rolling mill, and its surface directly affects the quality of the rolled product. After a period of use, a roller must be repaired or replaced. Examining the profile of a working roller between rolling intervals is called online detection of roller wear. The study of online roller-wear detection is important for rationally selecting the grinding time, reducing the frequency of roller changes, improving product quality, and enabling online roller grinding. By applying laser-linear array CCD detection technology, a method for online non-contact detection of roller wear is presented. The principle, composition, and operation process of the linear array CCD detecting system are described, and an error compensation algorithm is calculated to offset the shift of the roller axis in the measurement system, so that stability and accuracy are improved remarkably. Experiments show that the accuracy of the detecting system meets the demands of the practical production process. It provides a new method for high-speed, high-accuracy online detection of roller wear.
Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S
2008-01-01
This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.
Fast algorithms for Quadrature by Expansion I: Globally valid expansions
NASA Astrophysics Data System (ADS)
Rachh, Manas; Klöckner, Andreas; O'Neil, Michael
2017-09-01
The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.
Number Games, Magnitude Representation, and Basic Number Skills in Preschoolers
ERIC Educational Resources Information Center
Whyte, Jemma Catherine; Bull, Rebecca
2008-01-01
The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was…
NASA Astrophysics Data System (ADS)
Larin, Kirill V.
Approximately 14 million people in the USA and more than 140 million people worldwide suffer from diabetes mellitus. The current glucose sensing technique involves a finger puncture several times a day to obtain a droplet of blood for analysis. There have been enormous efforts by many scientific groups and companies to quantify glucose concentration noninvasively using different optical techniques. However, these techniques face limitations associated with low sensitivity and accuracy, and insufficient specificity, for glucose concentrations over the physiological range. Optical coherence tomography (OCT), a new technology, is being applied for noninvasive imaging in tissues with high resolution. OCT utilizes sensitive detection of photons coherently scattered from tissue. The high resolution of this technique allows exceptionally accurate measurement of tissue scattering from a specific layer of skin compared with other optical techniques and, therefore, may provide noninvasive and continuous monitoring of blood glucose concentration with high accuracy. In this dissertation work I experimentally and theoretically investigate the feasibility of noninvasive, real-time, sensitive, and specific monitoring of blood glucose concentration using an OCT-based biosensor. The studies were performed in scattering media with stable optical properties (aqueous suspensions of polystyrene microspheres and milk), animals (New Zealand white rabbits and Yucatan micropigs), and normal subjects (during oral glucose tolerance tests).
The results of these studies demonstrated: (1) capability of the OCT technique to detect changes in scattering coefficient with the accuracy of about 1.5%; (2) a sharp and linear decrease of the OCT signal slope in the dermis with the increase of blood glucose concentration; (3) the change in the OCT signal slope measured during bolus glucose injection experiments (characterized by a sharp increase of blood glucose concentration) is higher than that measured in the glucose clamping experiments (characterized by slow, controlled increase of the blood glucose concentration); and (4) the accuracy of glucose concentration monitoring may substantially be improved if optimal dimensions of the probed skin area are used. The results suggest that high-resolution OCT technique has a potential for noninvasive, accurate, and continuous glucose monitoring with high sensitivity.
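The slope measurement underlying these findings can be sketched as follows, assuming a simple single-scattering decay model for the OCT depth profile; the scattering coefficient and noise level are illustrative, not measured values.

```python
import numpy as np

rng = np.random.default_rng(5)

# In a single-scattering model the OCT signal decays as exp(-2 * mu_s * z),
# so the slope of the log-signal versus depth estimates the scattering
# coefficient mu_s, which decreases as glucose improves index matching.
mu_s = 4.0                                  # scattering coefficient, mm^-1
z = np.linspace(0.2, 1.0, 200)              # depth within the dermis, mm
signal = np.exp(-2.0 * mu_s * z) * (1.0 + rng.normal(scale=0.02, size=z.size))

slope, _ = np.polyfit(z, np.log(signal), 1)
mu_est = -slope / 2.0
```

Tracking `mu_est` over time in the dermal layer is, in essence, what converts the OCT signal slope into a glucose-sensitive readout.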
A new algorithm for microwave delay estimation from water vapor radiometer data
NASA Technical Reports Server (NTRS)
Robinson, S. E.
1986-01-01
A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data that does not require site- and weather-dependent empirical parameters to produce high accuracy. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate wet path delays in a second step. The intrinsic accuracy of this algorithm has been examined for two-channel WVR data using path delays and simulated observables computed from archived radiosonde data. Annual RMS errors for a wide range of sites are found to lie between 1.3 mm and 2.3 mm in the absence of clouds. This is comparable to the best overall accuracy obtainable from conventional linear algorithms, which must be tailored to site and weather conditions using large radiosonde databases. The new algorithm's accuracy and flexibility indicate that it may be a good candidate for almost all WVR data interpretation.
Toward improving fine needle aspiration cytology by applying Raman microspectroscopy
NASA Astrophysics Data System (ADS)
Becker-Putsche, Melanie; Bocklitz, Thomas; Clement, Joachim; Rösch, Petra; Popp, Jürgen
2013-04-01
Medical diagnosis of biopsies performed by fine needle aspiration has to be very reliable. Therefore, pathologists/cytologists need additional biochemical information on single cancer cells for an accurate diagnosis. Accordingly, we applied three different classification models for discriminating various features of six breast cancer cell lines by analyzing Raman microspectroscopic data. The statistical evaluations are implemented by linear discriminant analysis (LDA) and support vector machines (SVM). For the first model, a total of 61,580 Raman spectra from 110 single cells are discriminated at the cell-line level with an accuracy of 99.52% using an SVM. The LDA classification based on Raman data achieved an accuracy of 94.04% by discriminating cell lines by their origin (solid tumor versus pleural effusion). In the third model, Raman cell spectra are classified by their cancer subtypes. LDA results show an accuracy of 97.45% and specificities of 97.78%, 99.11%, and 98.97% for the subtypes basal-like, HER2+/ER-, and luminal, respectively. These subtypes are confirmed by gene expression patterns, which are important prognostic features in diagnosis. This work shows the applicability of Raman spectroscopy and statistical data handling in analyzing cancer-relevant biochemical information for advanced medical diagnosis on the single-cell level.
A novel finite volume discretization method for advection-diffusion systems on stretched meshes
NASA Astrophysics Data System (ADS)
Merrick, D. G.; Malan, A. G.; van Rooyen, J. A.
2018-06-01
This work is concerned with spatial advection and diffusion discretization technology within the field of Computational Fluid Dynamics (CFD). In this context, a novel method is proposed, which is dubbed the Enhanced Taylor Advection-Diffusion (ETAD) scheme. The model equation employed for design of the scheme is the scalar advection-diffusion equation, the industrial application being incompressible laminar and turbulent flow. Developed to be implementable into finite volume codes, ETAD places specific emphasis on improving accuracy on stretched structured and unstructured meshes while considering both advection and diffusion aspects in a holistic manner. A vertex-centered structured and unstructured finite volume scheme is used, and only data available on either side of the volume face is employed. This includes the addition of a so-called mesh stretching metric. Additionally, non-linear blending with the existing NVSF scheme was performed in the interest of robustness and stability, particularly on equispaced meshes. The developed scheme is assessed in terms of accuracy - this is done analytically and numerically, via comparison to upwind methods which include the popular QUICK and CUI techniques. Numerical tests involved the 1D scalar advection-diffusion equation, a 2D lid driven cavity and turbulent flow case. Significant improvements in accuracy were achieved, with L2 error reductions of up to 75%.
Robust coordinated control of a dual-arm space robot
NASA Astrophysics Data System (ADS)
Shi, Lingling; Kayastha, Sharmila; Katupitiya, Jay
2017-09-01
Dual-arm space robots are more capable of implementing complex space tasks than single-arm space robots. However, the dynamic coupling between the arms and the base has a serious impact on the spacecraft attitude and the hand motion of each arm. Instead of considering one arm as the mission arm and the other as the balance arm, in this work both arms of the space robot perform as mission arms aimed at accomplishing secure capture of a floating target. The paper investigates coordinated control of the base's attitude and the arms' motion in the task space in the presence of system uncertainties. Two types of controllers, a Sliding Mode Controller (SMC) and a nonlinear Model Predictive Controller (MPC), are verified and compared with a conventional Computed-Torque Controller (CTC) through numerical simulations in terms of control accuracy and system robustness. Both controllers eliminate the need to linearly parameterize the dynamic equations. In the absence of system uncertainties, the MPC achieves higher accuracy than CTC and SMC under the condition that they consume comparable energy. When system uncertainties are included, SMC and CTC show better robustness than MPC. Specifically, in a case where the system inertia increases, SMC delivers higher accuracy than CTC and consumes the least energy.
Theoretical algorithms for satellite-derived sea surface temperatures
NASA Astrophysics Data System (ADS)
Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.
1989-03-01
Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
Dyslexia and reasoning: the importance of visual processes.
Bacon, Alison M; Handley, Simon J
2010-08-01
Recent research has suggested that individuals with dyslexia rely on explicit visuospatial representations for syllogistic reasoning while most non-dyslexics opt for an abstract verbal strategy. This paper investigates the role of visual processes in relational reasoning amongst dyslexic reasoners. Expt 1 presents written and verbal protocol evidence to suggest that reasoners with dyslexia generate detailed representations of relational properties and use these to make a visual comparison of objects. Non-dyslexics use a linear array of objects to make a simple transitive inference. Expt 2 examined evidence for the visual-impedance effect which suggests that visual information detracts from reasoning leading to longer latencies and reduced accuracy. While non-dyslexics showed the impedance effects predicted, dyslexics showed only reduced accuracy on problems designed specifically to elicit imagery. Expt 3 presented problems with less semantically and visually rich content. The non-dyslexic group again showed impedance effects, but dyslexics did not. Furthermore, in both studies, visual memory predicted reasoning accuracy for dyslexic participants, but not for non-dyslexics, particularly on problems with highly visual content. The findings are discussed in terms of the importance of visual and semantic processes in reasoning for individuals with dyslexia, and we argue that these processes play a compensatory role, offsetting phonological and verbal memory deficits.
NASA Astrophysics Data System (ADS)
Notaro, V.; Armstrong, J. W.; Asmar, S.; Di Ruscio, A.; Iess, L.; Mariani, M., Jr.
2017-12-01
Precise measurements of spacecraft range rate, enabled by two-way microwave links, are used in radio science experiments for planetary geodesy including the determination of planetary gravitational fields for the purpose of modeling the interior structure. The final accuracies in the estimated gravity harmonic coefficients depend almost linearly on the Doppler noise in the link. We ran simulations to evaluate the accuracy improvement attainable in the estimation of the gravity harmonic coefficients of Venus (with a representative orbiter) and Mercury (with the BepiColombo spacecraft), using our proposed innovative noise-cancellation technique. We showed how the use of an additional, smaller and stiffer, receiving-only antenna could reduce the leading noise sources in a Ka-band two-way link such as tropospheric and antenna mechanical noises. This is achieved through a suitable linear combination (LC) of Doppler observables collected at the two antennas at different times. In our simulations, we considered a two-way link either from NASA's DSS 25 antenna in California or from ESA's DSA-3 antenna in Malargüe (Argentina). Moreover, we selected the 12-m Atacama Pathfinder EXperiment (APEX) in Chile as the three-way antenna and developed its tropospheric noise model using available atmospheric data and mechanical stability specifications. For an 8-hour Venus orbiter tracking pass in Chajnantor's winter/night conditions, the accuracy of the simulated LC Doppler observable at 10-s integration time is 6 mm/s, to be compared to 23 mm/s for the two-way link. For BepiColombo, we obtained 16.5 mm/s and 35 mm/s, respectively for the LC and two-way links. The benefits are even larger at longer time scales. Numerical simulations indicate that such noise reduction would provide significant improvements in the determination of Venus's and Mercury's gravity field coefficients. 
If implemented, this noise-reducing technique will be valuable for planetary geodesy missions, where the accuracy in the estimation of high-order gravity harmonic coefficients is limited by tropospheric and antenna mechanical noises that are difficult to reduce at short integration times. Benefits are however expected in all precision radio science experiments with deep space probes.
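The linear-combination idea can be illustrated with a toy numpy sketch. This is not the paper's actual time-shifted combination: the noise levels, and the simplification that the main antenna's downlink noise dominates, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.arange(n)
s = 1e-3 * np.sin(2 * np.pi * t / 500)   # geophysical Doppler signal (toy)
n_up = 0.2 * rng.standard_normal(n)      # uplink-leg noise, common to both links
n_dn = 1.0 * rng.standard_normal(n)      # large main-antenna downlink noise
m = 0.05 * rng.standard_normal(n)        # quiet receive-only antenna noise

y2 = s + n_up + n_dn                     # two-way observable (noisy main antenna)
y3 = s + n_up + m                        # three-way observable (quiet antenna)

n_dn_est = y2 - y3                       # isolates n_dn (up to the small m)
lc = y2 - n_dn_est                       # corrected linear combination
assert lc.std() < 0.5 * y2.std()         # the dominant noise source is removed
```

In the real technique the combination involves Doppler data taken at different times, tied to the round-trip light time; that bookkeeping is omitted here.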
Azimuth-invariant mueller-matrix differentiation of the optical anisotropy of biological tissues
NASA Astrophysics Data System (ADS)
Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu. F.; Pashkovskaya, N. V.; Andreichuk, D. R.
2014-07-01
A Mueller-matrix model is proposed for analysis of the optical anisotropy of protein networks in optically thin, nondepolarizing layers of biological tissues, with allowance for birefringence and dichroism. The model is used to construct algorithms for the reconstruction of coordinate distributions of phase shifts and the coefficient of linear dichroism. Objective criteria for the differentiation of benign and malignant tissues of the female genital tract are formulated in the framework of the statistical analysis of such distributions. Approaches of evidence-based medicine are used to determine the working characteristics (sensitivity, specificity, and accuracy) of the Mueller-matrix method for the reconstruction of the parameters of optical anisotropy and to show its efficiency in the differentiation of benign and malignant tumors.
A tilt and roll device for automated correction of rotational setup errors.
Hornick, D C; Litzenberg, D W; Lam, K L; Balter, J M; Hetrick, J; Ten Haken, R K
1998-09-01
A tilt and roll device has been developed to add two additional degrees of freedom to an existing treatment table. This device allows computer-controlled rotational motion about the inferior-superior and left-right patient axes. The tilt and roll device comprises three supports between the tabletop and base. An automotive-type universal joint welded to the end of a steel pipe supports the center of the table. Two computer-controlled linear electric actuators utilizing high-accuracy stepping motors support the foot of the table and control the tilt and roll of the tabletop. The current system meets or exceeds all pre-design specifications for precision, weight capacity, rigidity, and range of motion.
NASA Astrophysics Data System (ADS)
Sharma, Abhiraj; Suryanarayana, Phanish
2018-05-01
We present an accurate and efficient real-space Density Functional Theory (DFT) framework for the ab initio study of non-orthogonal crystal systems. Specifically, employing a local reformulation of the electrostatics, we develop a novel Kronecker product formulation of the real-space kinetic energy operator that significantly reduces the number of operations associated with the Laplacian-vector multiplication, the dominant cost in practical computations. In particular, we reduce the scaling with respect to finite-difference order from quadratic to linear, thereby significantly bridging the gap in computational cost between non-orthogonal and orthogonal systems. We verify the accuracy and efficiency of the proposed methodology through selected examples.
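The scaling argument can be made concrete with a small numpy sketch (illustrative only, not the authors' code): on a separable grid the finite-difference Laplacian is a Kronecker sum of 1-D operators, so the Laplacian-vector product can be applied one axis at a time with three small matrix contractions instead of one large matrix-vector product.

```python
import numpy as np

def lap1d(n, h=1.0):
    """1-D second-order finite-difference Laplacian with zero boundaries."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

n = 6
D = lap1d(n)
I = np.eye(n)
# dense Kronecker-sum Laplacian for the n^3 grid: huge, built here only to verify
L = (np.kron(np.kron(D, I), I) + np.kron(np.kron(I, D), I)
     + np.kron(np.kron(I, I), D))

v = np.random.default_rng(1).standard_normal(n**3)
X = v.reshape(n, n, n)
# per-axis application: three small contractions instead of one (n^3 x n^3) mat-vec
y_fast = (np.einsum('ia,ajk->ijk', D, X)
          + np.einsum('jb,ibk->ijk', D, X)
          + np.einsum('kc,ijc->ijk', D, X)).ravel()
assert np.allclose(L @ v, y_fast)
```

The per-axis structure is what keeps the cost's growth with finite-difference order mild, in the spirit of the reduction from quadratic to linear scaling described above.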
An effective description of dark matter and dark energy in the mildly non-linear regime
Lewandowski, Matthew; Maleknejad, Azadeh; Senatore, Leonardo
2017-05-18
In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though for this there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which assumes that dark energy is the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumbers possible. Furthermore, the Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve to cubic order the resulting non-linear equations and construct the one-loop power spectrum of the total density contrast.
Böcker, K B E; Gerritsen, J; Hunault, C C; Kruidenier, M; Mensinga, Tj T; Kenemans, J L
2010-07-01
Cannabis intake has been reported to affect cognitive functions such as selective attention. This study addressed the effects of exposure to cannabis with up to 69.4 mg Delta(9)-tetrahydrocannabinol (THC) on Event-Related Potentials (ERPs) recorded during a visual selective attention task. Twenty-four participants smoked cannabis cigarettes with four doses of THC on four test days in a randomized, double-blind, placebo-controlled, crossover study. Two hours after THC exposure the participants performed a visual selective attention task and concomitant ERPs were recorded. Accuracy decreased linearly and reaction times increased linearly with THC dose. However, performance measures and most of the ERP components related specifically to selective attention did not show significant dose effects. Only in relatively light cannabis users did the Occipital Selection Negativity decrease linearly with dose. Furthermore, ERP components reflecting perceptual processing, as well as the P300 component, decreased in amplitude after THC exposure. Only the former effect showed a linear dose-response relation. The decrements in performance and ERP amplitudes induced by exposure to cannabis with high THC content resulted from a non-selective decrease in attentional or processing resources. Performance requiring attentional resources, such as vehicle control, may be compromised several hours after smoking cannabis cigarettes containing high doses of THC, as presently available in Europe and North America. Copyright 2010 Elsevier Inc. All rights reserved.
Weaver, Brian Thomas; Fitzsimons, Kathleen; Braman, Jerrod; Haut, Roger
2016-09-01
The goal of the current study was to expand on previous work to validate the use of pressure insole technology in conjunction with linear regression models to predict the free torque at the shoe-surface interface that is generated while wearing different athletic shoes. Three distinctly different shoe designs were utilised. The stiffness of each shoe was determined with a materials testing machine. Six participants wore each shoe, fitted with an insole pressure measurement device, and performed rotation trials on an embedded force plate. A pressure sensor mask was constructed from those sensors having a high linear correlation with free torque values. Linear regression models were developed to predict free torques from these pressure sensor data. The models were able to predict their own free torque accurately (RMS error 3.72 ± 0.74 Nm), but not that of the other shoes (RMS error 10.43 ± 3.79 Nm). Models performing self-prediction were also able to measure differences in shoe stiffness. The results of the current study show the need for participant- and shoe-specific linear regression models to ensure high prediction accuracy of free torques from pressure sensor data during isolated internal and external rotations of the body with respect to a planted foot.
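The regression step can be sketched with synthetic data (the sensor count, noise level and coefficients are all made up; this is not the study's model or data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_sensors = 200, 12   # hypothetical mask of 12 retained insole sensors
P = rng.uniform(0, 100, size=(n_samples, n_sensors))        # pressures, arbitrary units
w_true = rng.uniform(-0.1, 0.1, size=n_sensors)
torque = P @ w_true + 0.5 * rng.standard_normal(n_samples)  # "measured" free torque, Nm

A = np.column_stack([P, np.ones(n_samples)])   # design matrix with intercept term
coef, *_ = np.linalg.lstsq(A, torque, rcond=None)

pred = A @ coef
rms = np.sqrt(np.mean((pred - torque) ** 2))   # self-prediction RMS error
assert rms < 1.0
```

The study's finding that models transfer poorly across shoes corresponds, in this sketch, to needing a separate `coef` fit per participant-shoe pair.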
An ultraviolet-spectrophotometric method for the determination of glimepiride in solid dosage forms.
Afieroho, Ozadheoghene E; Okorie, Ogbonna; Okonkwo, Tochukwu J N
2011-06-01
Considering the cost of acquiring a liquid chromatographic instrument in underdeveloped economies, the rising incidence of diabetes mellitus, the need to evaluate the quality performance of glimepiride generics, and the need for less toxic processes, this research is imperative. Using 96% ethanol as solvent, a less toxic and cost-effective spectrophotometric method for the determination of glimepiride in solid dosage forms was developed and validated. The method was validated for linearity, recovery accuracy, intra- and inter-day precision, specificity in the presence of excipients, and inter-day stability under laboratory conditions. Student's t test at the 95% confidence limit was used for statistics. The results of the validated parameters showed a λ(max) of 231 nm, a linearity range of 0.5-22 μg/mL, precision with relative SD of <1.0%, recovery accuracy of 100.8%, a regression equation of y = 45.741x + 0.0202, R² = 0.999, a limit of detection of 0.35 μg/mL, and negligible interference from common excipients and colorants. The method was found to be accurate at the 95% confidence limit compared with the standard liquid chromatographic method, with comparable reproducibility, when used to assay the formulated products Amaryl(®) (sanofi-aventis, Paris, France) and Mepyril(®) (May & Baker Nigeria PLC, Ikeja, Nigeria). The results obtained for the validated parameters were within allowable limits. This method is recommended for routine quality control analysis.
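Routine use of such a calibration reduces to inverting the reported regression line y = 45.741x + 0.0202 (y: absorbance at 231 nm, x: concentration); a minimal sketch:

```python
# calibration line reported in the abstract: y = 45.741*x + 0.0202
SLOPE, INTERCEPT = 45.741, 0.0202

def concentration(absorbance):
    """Invert the calibration line: concentration from measured absorbance."""
    return (absorbance - INTERCEPT) / SLOPE

# round trip: a sample at concentration 10 (same units as the calibration)
c = 10.0
assert abs(concentration(SLOPE * c + INTERCEPT) - c) < 1e-9
```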
El-Bagary, Ramzia I; Elkady, Ehab F; Farid, Naira A; Youssef, Nadia F
2017-03-05
Apixaban and tirofiban hydrochloride are low molecular weight anticoagulants. The two drugs exhibit native fluorescence, allowing the development of simple and valid spectrofluorimetric methods for the determination of apixaban at λex/λem = 284/450 nm and tirofiban HCl at λex/λem = 227/300 nm in aqueous media. Different experimental parameters affecting fluorescence intensities were carefully studied and optimized. The fluorescence intensity-concentration plots were linear over the ranges of 0.2-6 μg/mL for apixaban and 0.2-5 μg/mL for tirofiban HCl. The limits of detection were 0.017 and 0.019 μg/mL and the quantification limits were 0.057 and 0.066 μg/mL for apixaban and tirofiban HCl, respectively. The fluorescence quantum yields of apixaban and tirofiban were calculated as 0.43 and 0.49. Method validation was evaluated for linearity, specificity, accuracy, precision and robustness as per ICH guidelines. The proposed spectrofluorimetric methods were successfully applied to the determination of apixaban in Eliquis tablets and tirofiban HCl in Aggrastat intravenous infusion. The tolerance ratio was tested to study the effect of foreign interference from dosage form excipients. Student's t and F tests revealed no statistically significant difference between the developed spectrofluorimetric methods and the comparison methods regarding accuracy and precision, so the methods can be used for the analysis of apixaban and tirofiban HCl in QC laboratories as an alternative. Copyright © 2016 Elsevier B.V. All rights reserved.
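Detection and quantification limits of the kind quoted above are commonly obtained from the ICH Q2 formulas LOD = 3.3σ/S and LOQ = 10σ/S, with σ the standard deviation of the response and S the calibration slope; a sketch with hypothetical σ and S:

```python
def lod_loq(sigma, slope):
    """ICH Q2(R1) estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# hypothetical residual standard deviation and calibration slope
lod, loq = lod_loq(sigma=0.5, slope=100.0)
assert lod < loq
assert abs(loq / lod - 10.0 / 3.3) < 1e-9   # fixed ratio between the two limits
```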
Li, Yun-Qing; Li, Cheng-Jian; Lv, Lei; Cao, Qing-Qing; Qian, Xian; Li, Si Wei; Wang, Hui; Zhao, Liang
2018-06-01
Stellera chamaejasme L. has been used as a traditional Chinese medicine for the treatment of scabies, tinea, stubborn skin ulcers, chronic tracheitis, cancer and tuberculosis. A sensitive and selective ultra-high performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was developed and validated for the simultaneous determination of five flavonoids (stelleranol, chamaechromone, neochamaejasmin A, chamaejasmine and isochamaejasmin) of S. chamaejasme L. in rat plasma. Chromatographic separation was accomplished on an Agilent Poroshell 120 EC-C18 column (2.1 × 100 mm, 2.7 μm) with gradient elution at a flow rate of 0.4 mL/min, and the total analysis time was 7 min. The analytes were detected using multiple reaction monitoring in positive ionization mode. The samples were prepared by liquid-liquid extraction with ethyl acetate. The UPLC-MS/MS method was validated for specificity, linearity, sensitivity, accuracy and precision, recovery, matrix effect and stability. The validated method exhibited good linearity (r ≥ 0.9956), and the lower limits of quantification ranged from 0.51 to 0.64 ng/mL for the five flavonoids. The intra- and inter-day precision were both <10.2%, and the accuracy ranged from -11.79 to 9.21%. This method was successfully applied to a pharmacokinetic study of the five flavonoids in rats after oral administration of ethyl acetate extract of S. chamaejasme L. Copyright © 2018 John Wiley & Sons, Ltd.
Pauwels, Jochen; D'Autry, Ward; Van den Bossche, Larissa; Dewever, Cédric; Forier, Michel; Vandenwaeyenberg, Stephanie; Wolfs, Kris; Hoogmartens, Jos; Van Schepdael, Ann; Adams, Erwin
2012-02-23
Capsaicinoids, salicylic acid, methyl and ethyl salicylate, glycol monosalicylate, camphor and l-menthol are widely used in topical formulations to relieve local pain. For each separate compound or simple mixtures, quantitative analysis methods are reported. However, for a mixture containing all above mentioned active compounds, no assay methods were found. Due to the differing physicochemical characteristics, two methods were developed and optimized simultaneously. The non-volatile capsaicinoids, salicylic acid and glycol monosalicylate were analyzed with liquid chromatography following liquid-liquid extraction, whereas the volatile compounds were analyzed with static headspace-gas chromatography. For the latter method, liquid paraffin was selected as compatible dilution solvent. The optimized methods were validated in terms of specificity, linearity, accuracy and precision in a range of 80% to 120% of the expected concentrations. For both methods, peaks were well separated without interference of other compounds. Linear relationships were demonstrated with R² values higher than 0.996 for all compounds. Accuracy was assessed by performing replicate recovery experiments with spiked blank samples. Mean recovery values were all between 98% and 102%. Precision was checked at three levels: system repeatability, method precision and intermediate precision. Both methods were found to be acceptably precise at all three levels. Finally, the method was successfully applied to the analysis of some real samples (cutaneous sticks). Copyright © 2011 Elsevier B.V. All rights reserved.
Pandya, Jui J; Sanyal, Mallika; Shrivastav, Pranav S
2017-09-01
A new, simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the simultaneous determination of an anthelmintic drug, albendazole, and its active metabolite, albendazole sulfoxide. Planar chromatographic separation was performed on an aluminum-backed layer of silica gel 60G F254 using a mixture of toluene-acetonitrile-glacial acetic acid (7.0:2.9:0.1, v/v/v) as the mobile phase. For quantitation, the separated spots were scanned densitometrically at 225 nm. The retention factors (Rf) obtained under the established conditions were 0.76 ± 0.01 and 0.50 ± 0.01, and the regression plots were linear (r² ≥ 0.9997) in the concentration ranges 50-350 and 100-700 ng/band for albendazole and albendazole sulfoxide, respectively. The method was validated for linearity, specificity, accuracy (recovery), precision, repeatability, stability and robustness. The limits of detection and quantitation were 9.84 and 29.81 ng/band for albendazole and 21.60 and 65.45 ng/band for albendazole sulfoxide, respectively. For plasma samples, solid-phase extraction of the analytes yielded mean extraction recoveries of 87.59 and 87.13% for albendazole and albendazole sulfoxide, respectively. The method was successfully applied to the analysis of albendazole in pharmaceutical formulations with accuracy ≥99.32%. Copyright © 2017 John Wiley & Sons, Ltd.
Abbasian Ardakani, Ali; Gharbali, Akbar; Mohammadi, Afshin
2015-01-01
The aim of this study was to evaluate a computer-aided diagnosis (CAD) system with texture analysis (TA) to improve radiologists' accuracy in the identification of thyroid nodules as malignant or benign. A total of 70 cases (26 benign and 44 malignant) were analyzed in this study. We extracted up to 270 statistical texture features as a descriptor for each selected region of interest (ROI) in three normalization schemes (default, 3s and 1%-99%). The features were then reduced to the 10 best and most effective by the lowest probability of classification error and average correlation coefficients (POE+ACC) and the Fisher coefficient (Fisher). These features were analyzed under standard and nonstandard states. For TA of the thyroid nodules, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA) were applied. A First Nearest-Neighbour (1-NN) classifier was used for the features resulting from PCA and LDA. NDA features were classified by an artificial neural network (A-NN). Receiver operating characteristic (ROC) curve analysis was used to examine the performance of the TA methods. The best results were obtained in 1%-99% normalization with features extracted by the POE+ACC algorithm and analyzed by NDA, with an area under the ROC curve (Az) of 0.9722, corresponding to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Our results indicate that TA is a reliable method that can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
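A minimal stand-in for the PCA branch of such a pipeline, with synthetic "texture features" and a leave-one-out 1-NN evaluation (the study used dedicated texture-analysis software and real ultrasound ROIs; everything below is fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical data: 70 nodules x 270 texture features, two classes (26 vs 44)
X = rng.standard_normal((70, 270))
y = np.array([0] * 26 + [1] * 44)
X[y == 1] += 0.8                      # inject some class separation

Xc = X - X.mean(axis=0)               # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                    # keep 10 principal components

# leave-one-out 1-NN classification in the reduced space
correct = 0
for i in range(len(y)):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                     # exclude the sample itself
    correct += y[d.argmin()] == y[i]
acc = correct / len(y)
assert acc > 0.5                      # better than chance on separable classes
```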
Heo, Seok; Yoo, Geum Joo; Choi, Ji Yeon; Park, Hyoung Joon; Park, Sung-Kwan; Baek, Sun Young
2016-11-01
A novel, stable, simple and specific ultra-performance liquid chromatography method with ultraviolet detection (205 nm) for the simultaneous analysis of 25 anti-hypertensive substances was developed. The method was validated according to the International Conference on Harmonisation guidelines with respect to linearity, accuracy, precision, limit of detection (LOD), limit of quantitation (LOQ) and stability. From the ultra-performance liquid chromatography results, the LOD and LOQ of solid samples were 0.20-1.00 and 0.60-3.00 μg/mL, respectively, while those of liquid samples were 0.30-1.20 and 0.90-3.60 μg/mL, respectively. The linearity exceeded 0.9999, and the intra- and inter-day precisions were 0.15-6.48% and 0.28-8.67%, respectively. The intra- and inter-day accuracies were 82.25-111.42% and 80.70-115.64%, respectively, and the stability was lower than 12.9% (relative standard deviation). This method was applied to the monitoring of 97 commercially available dietary supplements obtained in Korea, such as pills, soft capsules, hard capsules, liquids, powders and tablets. The proposed method is accurate, precise and of high quality, and can be used for the routine, reproducible analysis and control of 25 anti-hypertensive substances in various dietary supplements. The work presented herein may help to prevent incidents related to food adulteration and restrict the illegal food market.
Standardisation of DNA quantitation by image analysis: quality control of instrumentation.
Puech, M; Giroud, F
1999-05-01
DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. Some systems have been controlled with these tools and these quality control procedures. Interpretation criteria and accuracy limits of these quality control procedures are proposed according to the conclusions of a European project called PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to realise precise DNA analysis. The different procedures presented in this work determine if an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems are beyond the defined limits, some recommendations are given to find a solution to the problem.
Alhazmi, Hassan A.; Alnami, Ahmed M.; Arishi, Mohammed A. A.; Alameer, Raad K.; Al Bratty, Mohammed; Rehman, Zia ur; Javed, Sadique A.; Arbab, Ismail A.
2017-01-01
The aim of this study was to develop and validate a fast and simple reversed-phase HPLC method for the simultaneous determination of four cardiovascular agents, atorvastatin, simvastatin, telmisartan and irbesartan, in bulk drugs and tablet oral dosage forms. The chromatographic separation was accomplished using a Symmetry C18 column (75 mm × 4.6 mm; 3.5 μm) with a mobile phase consisting of ammonium acetate buffer (10 mM; pH 4.0) and acetonitrile in a ratio of 40:60 v/v. The flow rate was maintained at 1 mL/min up to 3.5 min, and then changed to 2 mL/min until the end of the run (7.5 min). The data were acquired using an ultraviolet detector monitored at 220 nm. The method was validated for linearity, precision, accuracy and specificity. The developed method showed excellent linearity (R² > 0.999) over the concentration range of 1-16 µg/mL. The limits of detection (LODs) and limits of quantification (LOQs) were in the range of 0.189-0.190 and 0.603-0.630 µg/mL, respectively. Inter-day and intra-day accuracy and precision data were within the acceptable limits. The new method has successfully been applied for the quantification of all four drugs in their tablet dosage forms with percent recovery within 100 ± 2%. PMID:29257120
Obrzut, Bogdan; Kusy, Maciej; Semczuk, Andrzej; Obrzut, Marzanna; Kluska, Jacek
2017-12-12
Computational intelligence methods, including non-linear classification algorithms, can be used in medical research and practice as a decision-making tool. This study aimed to evaluate the usefulness of artificial intelligence models for 5-year overall survival prediction in patients with cervical cancer treated by radical hysterectomy. The data set was collected from 102 patients with cervical cancer FIGO stage IA2-IIB that underwent primary surgical treatment. Twenty-three demographic and tumor-related parameters and selected perioperative data were collected for each patient. The simulations involved six computational intelligence methods: the probabilistic neural network (PNN), multilayer perceptron network, gene expression programming classifier, support vector machines algorithm, radial basis function neural network and k-Means algorithm. The prediction ability of the models was determined based on accuracy, sensitivity, specificity, as well as the area under the receiver operating characteristic curve. The results of the computational intelligence methods were compared with the results of linear regression analysis as a reference model. The best results were obtained by the PNN model. This neural network provided very high prediction ability, with an accuracy of 0.892 and sensitivity of 0.975. The area under the receiver operating characteristic curve of PNN was also high, 0.818. The outcomes obtained by other classifiers were markedly worse. The PNN model is an effective tool for predicting 5-year overall survival in cervical cancer patients treated with radical hysterectomy.
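All four figures of merit reported above can be computed from predicted labels and classifier scores; a sketch with made-up labels (the AUC uses the rank/Mann-Whitney formulation):

```python
import numpy as np

def evaluate(y_true, y_pred, scores):
    """Accuracy, sensitivity, specificity and AUC from labels and scores."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # AUC: fraction of positive/negative pairs ranked correctly (ties count half)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
    return acc, sens, spec, auc

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])          # made-up outcomes
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])
y_pred = (scores >= 0.5).astype(int)
acc, sens, spec, auc = evaluate(y_true, y_pred, scores)
# acc = sens = spec = 0.75, auc = 0.9375 for these toy labels
```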
Siva Selva Kumar, M; Ramanathan, M
2016-02-01
A simple and sensitive ultra-performance liquid chromatography (UPLC) method has been developed and validated for simultaneous estimation of olanzapine (OLZ), risperidone (RIS) and 9-hydroxyrisperidone (9-OHRIS) in human plasma in vitro. The sample preparation was performed by a simple liquid-liquid extraction technique. The analytes were chromatographed on a Waters Acquity H class UPLC system using isocratic mobile phase conditions at a flow rate of 0.3 mL/min and an Acquity UPLC BEH Shield RP18 column maintained at 40°C. Quantification was performed on a photodiode array detector set at 277 nm and clozapine was used as internal standard (IS). OLZ, RIS, 9-OHRIS and IS retention times were found to be 0.9, 1.4, 1.8 and 3.1 min, respectively, and the total run time was 4 min. The method was validated for selectivity, specificity, recovery, linearity, accuracy, precision and sample stability. The calibration curve was linear over the concentration range 1-100 ng/mL for OLZ, RIS and 9-OHRIS. Intra- and inter-day precisions for OLZ, RIS and 9-OHRIS were found to be good, with a coefficient of variation <6.96% and accuracy ranging from 97.55 to 105.41% in human plasma. The validated UPLC method was successfully applied to the pharmacokinetic study of RIS and 9-OHRIS in human plasma. Copyright © 2015 John Wiley & Sons, Ltd.
Ramanujam, N; Sivaselvakumar, M; Ramalingam, S
2017-11-01
A simple, sensitive and reproducible ultra-performance liquid chromatography (UPLC) method has been developed and validated for simultaneous estimation of polychlorinated biphenyl (PCB) 77 and PCB 180 in mouse plasma. The sample preparation was performed by a simple liquid-liquid extraction technique. The analytes were chromatographed on a Waters Acquity H class UPLC system using isocratic mobile phase conditions at a flow rate of 0.3 mL/min and an Acquity UPLC BEH Shield RP18 column maintained at 35°C. Quantification was performed on a photodiode array detector set at 215 nm and PCB 101 was used as internal standard (IS). PCB 77, PCB 180 and IS retention times were 2.6, 4.7 and 2.8 min, respectively, and the total run time was 6 min. The method was validated for specificity, selectivity, recovery, linearity, accuracy, precision and sample stability. The calibration curve was linear over the concentration range 10-3000 ng/mL for PCB 77 and PCB 180. Intra- and inter-day precisions for PCBs 77 and 180 were found to be good, with CV <4.64%, and the accuracy ranged from 98.90 to 102.33% in mouse plasma. The validated UPLC method was successfully applied to the pharmacokinetic study of PCBs 77 and 180 in mouse plasma. Copyright © 2017 John Wiley & Sons, Ltd.
Linearization of the Bradford protein assay.
Ernst, Orna; Zor, Tsaffrir
2010-04-12
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
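The linearization can be sketched with hypothetical calibration readings: the raw A590 response saturates with protein amount, but the A590/A450 ratio is well fitted by a straight line (the numbers below are invented, not the paper's data):

```python
import numpy as np

# hypothetical calibration: BSA standards (ug) and dual-wavelength readings
protein = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
a590 = np.array([0.05, 0.12, 0.18, 0.28, 0.36, 0.42, 0.47])  # saturating response
a450 = np.array([0.40, 0.37, 0.34, 0.29, 0.25, 0.22, 0.19])  # decreasing response

ratio = a590 / a450                       # the linearized quantity
slope, intercept = np.polyfit(protein, ratio, 1)
r = np.corrcoef(protein, ratio)[0, 1]     # linearity of ratio vs amount
assert slope > 0 and r > 0.98
```

An unknown sample's protein amount then follows by inverting the fitted line, exactly as in a conventional single-wavelength calibration.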
Steps toward quantitative infrasound propagation modeling
NASA Astrophysics Data System (ADS)
Waxler, Roger; Assink, Jelle; Lalande, Jean-Marie; Velea, Doru
2016-04-01
Realistic propagation modeling requires propagation models capable of incorporating the relevant physical phenomena as well as sufficiently accurate atmospheric specifications. The wind speed and temperature gradients in the atmosphere provide multiple ducts in which low-frequency sound, infrasound, can propagate efficiently. The winds in the atmosphere are quite variable, both temporally and spatially, causing the sound ducts to fluctuate. For ground-to-ground propagation the ducts can be borderline, in that small perturbations can create or destroy a duct. In such cases the signal propagation is very sensitive to fluctuations in the wind, often producing highly dispersed signals. The accuracy of atmospheric specifications is constantly improving as sounding technology develops. There is, however, a disconnect between sound propagation and atmospheric specification, in that atmospheric specifications are necessarily statistical in nature while sound propagates through a particular atmospheric state. In addition, infrasonic signals can travel to great altitudes, on the order of 120 km, before refracting back to earth. At such altitudes the atmosphere becomes quite rarefied, causing sound propagation to become highly non-linear and strongly attenuated. Approaches to these problems will be presented.
Automatic detection and classification of artifacts in single-channel EEG.
Olund, Thomas; Duun-Henriksen, Jonas; Kjaer, Troels W; Sorensen, Helge B D
2014-01-01
Ambulatory EEG monitoring can provide medical doctors with important diagnostic information without hospitalizing the patient. These recordings are, however, more exposed to noise and artifacts than clinically recorded EEG. An automatic artifact detection and classification algorithm for single-channel EEG is proposed to help identify these artifacts. Features are extracted from the EEG signal and its wavelet subbands. Subsequently, a selection algorithm is applied to identify the best discriminating features. A non-linear support vector machine is used to discriminate among different artifact classes using the selected features. Single-channel (Fp1-F7) EEG recordings were obtained from experiments with 12 healthy subjects performing artifact-inducing movements. The dataset was used to construct and validate the model. Both subject-specific and generic implementations are investigated. The detection algorithm yields an average sensitivity and specificity above 95% for both the subject-specific and generic models. The classification algorithm shows a mean accuracy of 78% and 64% for the subject-specific and generic models, respectively. The classification model was additionally validated on a reference dataset, with similar results.
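The feature-extraction step described above can be sketched on a single EEG window. The specific features here (variance, line length, relative band power) are common EEG choices chosen for illustration, not necessarily the paper's selected feature set, and the signal is synthetic.

```python
import numpy as np

# Illustrative window-based feature extraction of the kind used to feed an
# artifact classifier. Variance, line length, and relative band power are
# generic EEG features; the paper's actual feature set may differ.

def window_features(x, fs, band=(8.0, 13.0)):
    x = np.asarray(x, dtype=float)
    var = x.var()
    line_length = np.abs(np.diff(x)).sum()            # crude "jaggedness" measure
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    rel_power = spec[in_band].sum() / spec[1:].sum()  # ignore the DC bin
    return var, line_length, rel_power

fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
alpha = np.sin(2 * np.pi * 10.0 * t)   # pure 10 Hz "alpha-like" test signal
var, ll, rp = window_features(alpha, fs)
print(rp > 0.99)   # nearly all power lies in the 8-13 Hz band
```

Feature vectors like `(var, ll, rp)` per window would then be passed to a classifier such as the non-linear SVM the abstract describes.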
Estimating thermal diffusivity and specific heat from needle probe thermal conductivity data
Waite, W.F.; Gilbert, L.Y.; Winters, W.J.; Mason, D.H.
2006-01-01
Thermal diffusivity and specific heat can be estimated from thermal conductivity measurements made using a standard needle probe and a suitably high data acquisition rate. Thermal properties are calculated from the measured temperature change in a sample subjected to heating by a needle probe. Accurate thermal conductivity measurements are obtained from a linear fit to many tens or hundreds of temperature change data points. In contrast, thermal diffusivity calculations require a nonlinear fit to the measured temperature change occurring in the first few tenths of a second of the measurement, resulting in a lower accuracy than that obtained for thermal conductivity. Specific heat is calculated from the ratio of thermal conductivity to diffusivity, and thus can have an uncertainty no better than that of the diffusivity estimate. Our thermal conductivity measurements of ice Ih and of tetrahydrofuran (THF) hydrate, made using a 1.6 mm outer diameter needle probe and a data acquisition rate of 18.2 points/s, agree with published results. Our thermal diffusivity and specific heat results reproduce published results within 25% for ice Ih and 3% for THF hydrate. © 2006 American Institute of Physics.
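The ratio relation and its uncertainty implication can be made concrete. This is a generic error-propagation sketch: volumetric heat capacity is the ratio k/α, and its relative uncertainty (combined in quadrature under an independence assumption) cannot be smaller than that of the diffusivity. The numeric values are illustrative, not the paper's measurements.

```python
import math

# Sketch of the ratio relation in the abstract: volumetric heat capacity
# rho*c = k / alpha (thermal conductivity over thermal diffusivity), with
# relative uncertainties combined in quadrature assuming independent errors.
# The input numbers are illustrative, ice-like values, not measured data.

def volumetric_heat_capacity(k, alpha, rel_u_k, rel_u_alpha):
    rho_c = k / alpha
    rel_u = math.hypot(rel_u_k, rel_u_alpha)   # quadrature sum of relative errors
    return rho_c, rel_u

# k ~ 2.2 W/(m K), alpha ~ 1.2e-6 m^2/s; 2% and 25% relative uncertainties.
rho_c, rel_u = volumetric_heat_capacity(2.2, 1.2e-6, 0.02, 0.25)
print(rel_u >= 0.25)  # the ratio is at least as uncertain as the diffusivity
```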
Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario
NASA Astrophysics Data System (ADS)
Moscadelli, M.; Diani, M.; Corsini, G.
2017-10-01
In this paper, a methodology for evaluating the effectiveness of different temperature-emissivity separation (TES) strategies is presented. The methodology takes into account the specific material of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. It is intended to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To solve the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by the proposed methodology are compared and discussed.
Optimization and Validation of ELISA for Pre-Clinical Trials of Influenza Vaccine.
Mitic, K; Muhandes, L; Minic, R; Petrusic, V; Zivkovic, I
2016-01-01
Testing of every new vaccine involves investigation of its immunogenicity, which is based on monitoring its ability to induce specific antibodies in animals. The fastest and most sensitive method used for this purpose is the enzyme-linked immunosorbent assay (ELISA). However, commercial ELISA kits with whole influenza virus antigens are not available on the market, and it is therefore essential to establish an adequate assay for testing influenza virus-specific antibodies. We developed an ELISA with whole influenza virus strains of the 2011/2012 season as antigens and validated it by checking its specificity, accuracy, linearity, range, precision, and sensitivity. The results show that we developed a high-quality ELISA that can be used to test the immunogenicity of newly produced seasonal or pandemic vaccines in mice. The pre-existence of a validated ELISA shortens the time from vaccine production to its use in patients, which is particularly important in the case of a pandemic.
Yehia, Ali M
2013-05-15
A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra was developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as in the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. The specificity of the method was investigated, and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines. Copyright © 2013 Elsevier B.V. All rights reserved.
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions, but rather are caused by the non-linear relationship between binding affinity and binding probability and by the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
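The non-linear affinity-to-probability relationship the abstract points to can be sketched with a simple occupancy model. The saturation form p = cK/(1 + cK) and all numeric values here are illustrative assumptions of the sketch, not the paper's biophysical model.

```python
# Sketch of the non-linear affinity-to-probability map discussed above: with
# occupancy p = c*K / (1 + c*K) (K = relative affinity, c = protein
# concentration), probability is proportional to affinity only when c*K << 1;
# high-affinity sites saturate near p = 1 at high concentration, so a
# probabilistic model that stays proportional to affinity misrepresents them.
# All values are illustrative.

def occupancy(affinity, conc):
    x = conc * affinity
    return x / (1.0 + x)

strong, weak = 100.0, 1.0     # relative affinities (arbitrary units)

low_c, high_c = 1e-5, 1.0
ratio_low = occupancy(strong, low_c) / occupancy(weak, low_c)
ratio_high = occupancy(strong, high_c) / occupancy(weak, high_c)

print(round(ratio_low))    # ~100: nearly proportional to affinity
print(round(ratio_high))   # ~2: saturation compresses the apparent ratio
```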
Abdel-Aleem, Eglal A; Hegazy, Maha A; Sayed, Nour W; Abdelkawy, M; Abdelfatah, Rehab M
2015-02-05
This work concerns the development and validation of three simple, specific, accurate and precise spectrophotometric methods for the determination of flumethasone pivalate (FP) and clioquinol (CL) in their binary mixture and in ear drops. Method A is a ratio subtraction spectrophotometric method (RSM), Method B is a ratio difference spectrophotometric method (RDSM), and Method C is a mean centering of ratio spectra method (MCR). The calibration curves are linear over the concentration ranges of 3-45 μg/mL for FP and 2-25 μg/mL for CL. The specificity of the developed methods was assessed by analyzing different laboratory-prepared mixtures of FP and CL. The three methods were validated as per ICH guidelines; accuracy, precision and repeatability were found to be within acceptable limits. Copyright © 2014 Elsevier B.V. All rights reserved.
Howley, Donna; Howley, Peter; Oxenham, Marc F
2018-06-01
Stature and a further eight anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and the University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation, and simple and multiple linear regression equations for stature estimation, in a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to similar studies. The stature estimation models achieved standard errors of estimate (SEE) comparable to, and in many cases lower than, those achieved in similar research. Generic, non-sex-specific models achieved similar SEEs and R² values to the sex-specific models, indicating that stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.
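The stature-regression workflow above can be sketched as an ordinary least-squares fit with the two quality measures the study reports, SEE and R². The data below are synthetic, with invented coefficients; this is not the Australian sample or the published equations.

```python
import numpy as np

# Hedged sketch of the stature-regression workflow: fit stature on one limb
# dimension by least squares and report SEE and R^2. Synthetic data only.

rng = np.random.default_rng(0)
n = 96
hand_length = rng.uniform(16.0, 21.0, n)                     # cm, illustrative
stature = 75.0 + 5.0 * hand_length + rng.normal(0, 4.0, n)   # cm, invented model

X = np.column_stack([np.ones(n), hand_length])               # intercept + predictor
beta, *_ = np.linalg.lstsq(X, stature, rcond=None)
resid = stature - X @ beta
p = X.shape[1]

see = np.sqrt((resid ** 2).sum() / (n - p))                  # standard error of estimate
r2 = 1.0 - (resid ** 2).sum() / ((stature - stature.mean()) ** 2).sum()
print(see)   # close to the simulated noise sd of 4 cm
```

A multiple-regression variant simply adds more predictor columns to `X`; the SEE and R² computations are unchanged.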
Trimboli, Francesca; Morittu, Valeria Maria; Cicino, Caterina; Palmieri, Camillo; Britti, Domenico
2017-10-13
The substitution of ewe milk with more economical cow milk is a common fraud. Here we present a capillary electrophoresis (CE) method for the quantification of ewe milk in ovine/bovine milk mixtures, which allows rapid and inexpensive recognition of ewe milk adulteration with cow milk. We utilized a routine CE method for the analysis of human blood and urine proteins, which accomplished the separation of skimmed milk proteins in alkaline buffer. Under this condition, ovine and bovine milk exhibited recognizable and distinct CE protein profiles, with a specific ewe peak showing a reproducible migration zone in ovine/bovine mixtures. Based on this ewe-specific CE peak, we developed a method for ewe milk quantification in ovine/bovine skimmed milk mixtures, which showed good linearity, precision and accuracy, and a minimum detectable amount of fraudulent cow milk equal to 5%. Copyright © 2017 Elsevier B.V. All rights reserved.
Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto
2018-05-09
We examine various integration schemes for the time-dependent Kohn-Sham equations. Unlike the time-dependent Schrödinger equation, this set of equations is nonlinear, owing to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost versus accuracy. The clear winner, in terms of robustness, simplicity, and efficiency, is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
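The fourth-order commutator-free Magnus family singled out above can be illustrated on a toy problem. This is the standard two-exponential CF4 scheme (as given by Blanes and Moan) for psi' = -i H(t) psi with a small Hermitian H(t); the driven two-level Hamiltonian is an invented example, and this is not the authors' Kohn-Sham implementation.

```python
import numpy as np

# Minimal sketch of a fourth-order commutator-free Magnus (CF4) step for
# psi' = -i H(t) psi: two matrix exponentials of fixed linear combinations
# of H evaluated at the two Gauss-Legendre nodes of the step.

def expm_i_herm(M, h):
    """exp(-i*h*M) for Hermitian M via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * h * w)) @ V.conj().T

def cf4_step(H, t, h, psi):
    c1, c2 = 0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6          # Gauss nodes
    a1, a2 = (3 - 2 * np.sqrt(3)) / 12, (3 + 2 * np.sqrt(3)) / 12
    H1, H2 = H(t + c1 * h), H(t + c2 * h)
    psi = expm_i_herm(a2 * H1 + a1 * H2, h) @ psi   # first factor weights early times
    psi = expm_i_herm(a1 * H1 + a2 * H2, h) @ psi
    return psi

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = lambda t: sz + np.sin(t) * sx                   # toy driven two-level system

def propagate(n_steps, T=1.0):
    psi = np.array([1.0, 0.0], dtype=complex)
    h = T / n_steps
    for k in range(n_steps):
        psi = cf4_step(H, k * h, h, psi)
    return psi

ref = propagate(512)                                # fine-step reference
err8 = np.linalg.norm(propagate(8) - ref)
err16 = np.linalg.norm(propagate(16) - ref)
print(err8 / err16 > 10)   # halving h cuts the error ~16x (4th order)
```

Each step is unitary by construction (exponentials of Hermitian matrices), which mirrors the norm-conservation property that makes these schemes attractive for quantum propagation.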
Orlando Júnior, Nilton; de Souza Leão, Marcos George; de Oliveira, Nelson Henrique Carvalho
2015-01-01
Objectives To ascertain the sensitivity, specificity, accuracy and concordance of the physical examination (PE) and magnetic resonance imaging (MRI) in comparison with arthroscopy, in diagnosing knee injuries. Methods Prospective study on 72 patients, with evaluation and comparison of PE, MRI and arthroscopic findings, to determine the concordance, accuracy, sensitivity and specificity. Results PE showed sensitivity of 75.00%, specificity of 62.50% and accuracy of 69.44% for medial meniscal (MM) lesions, while it showed sensitivity of 47.82%, specificity of 93.87% and accuracy of 79.16% for lateral meniscal (LM) lesions. For anterior cruciate ligament (ACL) injuries, PE showed sensitivity of 88.67%, specificity of 94.73% and accuracy of 90.27%. For MM lesions, MRI showed sensitivity of 92.50%, specificity of 62.50% and accuracy of 69.44%, while for LM injuries, it showed sensitivity of 65.00%, specificity of 88.46% and accuracy of 81.94%. For ACL injuries, MRI showed sensitivity of 86.79%, specificity of 73.68% and accuracy of 83.33%. For ACL injuries, the best concordance was with PE, while for MM and LM lesions, it was with MRI (p < 0.001). Conclusions Meniscal and ligament injuries can be diagnosed through careful physical examination, while requests for MRI are reserved for complex or doubtful cases. PE and MRI used together have high sensitivity for ACL and MM lesions, while for LM lesions the specificity is higher. Level of evidence II – Development of diagnostic criteria on consecutive patients (with universally applied reference “gold” standard). PMID:27218085
Hypersensitive Detection and Quantitation of BoNT/A by IgY Antibody against Substrate Linear-Peptide
Li, Tao; Liu, Hao; Cai, Kun; Tian, Maoren; Wang, Qin; Shi, Jing; Gao, Xiang; Wang, Hui
2013-01-01
Botulinum neurotoxin A (BoNT/A), the most acutely toxic substance known to humans, cleaves its SNAP-25 substrate with high specificity. Based on this endopeptidase activity, different methods have been developed to detect BoNT/A, but most lack ideal reproducibility or sensitivity, or suffer from long assay times or unwanted interferences. In this study, we developed a simple method to detect and quantitate trace amounts of botulinum neurotoxin A using an IgY antibody against a linear-peptide substrate. The effects of reaction buffer, time, and temperature were analyzed and optimized. When the optimized assay was used to detect BoNT/A, the limit of detection of the assay was 0.01 mouse LD50 (0.04 pg), and the limit of quantitation was 0.12 mouse LD50/ml (0.48 pg). The findings also showed favorable specificity in detecting BoNT/A. When used to detect BoNT/A in milk or human serum, the proposed assay exhibited good quantitative accuracy (88% < recovery < 111%; inter- and intra-assay CVs < 18%). This method of detection took less than 3 h to complete, indicating that it can be a valuable method for detecting BoNT/A in food analysis or clinical diagnosis. PMID:23555605
Inci, Ercan; Ekizoglu, Oguzhan; Turkay, Rustu; Aksoy, Sema; Can, Ismail Ozgur; Solmaz, Dilek; Sayin, Ibrahim
2016-10-01
Morphometric analysis of the mandibular ramus (MR) provides highly accurate data to discriminate sex. The objective of this study was to demonstrate the utility and accuracy of MR morphometric analysis for sex identification in a Turkish population. Four hundred fifteen Turkish patients (18-60 y; 201 male and 214 female) who had previously had multidetector computed tomography scans of the cranium were included in the study. Multidetector computed tomography images were obtained using three-dimensional reconstructions and a volume-rendering technique, and 8 linear and 3 angular values were measured. Univariate, bivariate, and multivariate discriminant analyses were performed, and the accuracy rates for determining sex were calculated. Mandibular ramus values produced high accuracy rates of 51% to 95.6%. Upper ramus vertical height had the highest rate at 95.6%, and bivariate analysis showed 89.7% to 98.6% accuracy rates, with the highest ratios for mandibular flexure upper border and maximum ramus breadth. Stepwise discriminant analysis gave a 99% accuracy rate for all MR variables. Our study showed that the MR, in particular morphometric measures of the upper part of the ramus, can provide valuable data to determine sex in a Turkish population. The method combines both anthropological and radiologic approaches.
A note on the accuracy of spectral method applied to nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Wong, Peter S.
1994-01-01
The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by post-processing based on Gegenbauer polynomials.
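The "exponential accuracy for analytic solutions" claim above is easy to demonstrate at the approximation level: the FFT-based spectral derivative of a smooth periodic function reaches near machine precision on a modest grid. This generic sketch is not the note's conservation-law experiment.

```python
import numpy as np

# Sketch of spectral (exponential) accuracy: differentiate the analytic,
# 2*pi-periodic function exp(sin x) via the FFT and compare with the exact
# derivative. With only 64 points the error sits near machine precision.

N = 64
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))                   # analytic and periodic
exact = np.cos(x) * u                   # d/dx exp(sin x)

k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers 0..31, -32..-1
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

err = np.max(np.abs(du - exact))
print(err < 1e-10)   # True: far beyond any fixed-order finite difference
```

For a discontinuous solution the same operation exhibits Gibbs oscillations and O(1) point-wise error, which is what motivates the Gegenbauer post-processing the note discusses.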
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
We present a pedagogical systematic investigation of the accuracy of Eulerian and Lagrangian perturbation theories of large-scale structure. We show that significant differences exist between them, especially when trying to model the Baryon Acoustic Oscillations (BAO). We find that the best available model of the BAO in real space is the Zel'dovich Approximation (ZA), giving an accuracy of ≲3% at redshift z = 0 in modelling the matter 2-pt function around the acoustic peak. All corrections to the ZA around the BAO scale are perfectly perturbative in real space. Any attempt to achieve better precision requires calibrating the theory to simulations because of the need to renormalize those corrections. In contrast, theories which do not fully preserve the ZA as their solution receive O(1) corrections around the acoustic peak in real space at z = 0, and are thus of suspicious convergence at low redshift around the BAO. As an example, we find that a similar accuracy of 3% for the acoustic peak is achieved by Eulerian Standard Perturbation Theory (SPT) at linear order only at z ≈ 4. Thus even when SPT is perturbative, one needs to include loop corrections for z ≲ 4 in real space. In Fourier space, all models perform similarly, and are controlled by the overdensity amplitude, thus recovering standard results. However, that comes at a price. Real space cleanly separates the BAO signal from non-linear dynamics. In contrast, Fourier space mixes signal from short mildly non-linear scales with the linear signal from the BAO to the level that non-linear contributions from short scales dominate. Therefore, one has little hope of constructing a systematic theory for the BAO in Fourier space.
The Accuracy of Shock Capturing in Two Spatial Dimensions
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Casper, Jay H.
1997-01-01
An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Joe H.; University of Melbourne, Victoria; Lim Joon, Daryl
2015-06-01
Purpose: The purpose of this study was to compare the accuracy of [11C]choline positron emission tomography (CHOL-PET) with that of the combination of T2-weighted and diffusion-weighted (T2W/DW) magnetic resonance imaging (MRI) for delineating malignant intraprostatic lesions (IPLs) to guide focal therapies, and to investigate factors predicting the accuracy of CHOL-PET. Methods and Materials: This study included 21 patients who underwent CHOL-PET and T2W/DW MRI prior to radical prostatectomy. Two observers manually delineated IPL contours for each scan, and automatic IPL contours were generated on CHOL-PET based on varying proportions of the maximum standardized uptake value (SUV). IPLs identified on prostatectomy specimens defined reference standard contours. The imaging-based contours were compared with the reference standard contours using the Dice similarity coefficient (DSC) and sensitivity and specificity values. Factors that could potentially predict the DSC of the best contouring method were analyzed using linear models. Results: The best automatic contouring method, 60% of the maximum SUV (SUV60), had similar correlations (DSC: 0.59) with the manual PET contours (DSC: 0.52, P=.127) and significantly better correlations than the manual MRI contours (DSC: 0.37, P<.001). The sensitivity and specificity values were 72% and 71% for SUV60; 53% and 86% for PET manual contouring; and 28% and 92% for MRI manual contouring. The tumor volume and transition zone pattern could independently predict the accuracy of CHOL-PET. Conclusions: CHOL-PET is superior to the combination of T2W/DW MRI for delineating IPLs. The accuracy of CHOL-PET is insufficient for gland-sparing focal therapies but may be accurate enough for focal boost therapies. The transition zone pattern is a new classification that may predict how well CHOL-PET delineates IPLs.
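The Dice similarity coefficient used above to score contour agreement is DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks. A generic sketch on toy masks, not the study's imaging data:

```python
import numpy as np

# Dice similarity coefficient between two binary segmentation masks:
# DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 for identical masks,
# 0.0 for disjoint ones. The toy masks below are purely illustrative.

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # 16-pixel "lesion" mask
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # shifted candidate contour
print(dice(a, b))   # 2*9/(16+16) = 0.5625
```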
Bhimarao; Bhat, Venkataramana; Gowda, Puttanna VN
2015-01-01
Background: The high incidence of IUGR and its low recognition rate lead to increased perinatal morbidity and mortality, for which prediction of IUGR with timely management decisions is of paramount importance. Many studies have compared the efficacy of several gestational-age-independent parameters and found that TCD/AC is a better predictor of asymmetric IUGR. Aim: To compare the accuracy of the transcerebellar diameter/abdominal circumference (TCD/AC) ratio with the head circumference/abdominal circumference (HC/AC) ratio in predicting asymmetric intrauterine growth retardation after 20 weeks of gestation. Materials and Methods: The prospective study was conducted over a period of one year on 50 clinically suspected IUGR pregnancies, which were evaluated with a 3.5 MHz ultrasound scanner by a single sonologist. BPD, HC, AC and FL, along with TCD, were measured to assess the sonological gestational age. Two morphometric ratios, TCD/AC and HC/AC, were calculated. Estimated fetal weight was calculated for all these pregnancies and its percentile was determined. Statistical Methods: The TCD/AC and HC/AC ratios were correlated with advancing gestational age to determine whether they were gestational-age dependent. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and diagnostic accuracy (DA) of the TCD/AC and HC/AC ratios in evaluating IUGR fetuses were calculated. Results: In the present study, a linear relation of TCD and HC with gestation was noted in IUGR fetuses. The sensitivity, specificity, PPV, NPV and DA were 88%, 93.5%, 77.1%, 96.3% and 92.4% respectively for the TCD/AC ratio, versus 84%, 92%, 72.4%, 95.8% and 90.4% respectively for the HC/AC ratio in predicting IUGR. Conclusion: Both ratios were gestational-age independent and can be used to detect IUGR with good diagnostic accuracy. However, the TCD/AC ratio had better diagnostic validity and accuracy than the HC/AC ratio in predicting asymmetric IUGR. PMID:26557588
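The five diagnostic metrics reported above (sensitivity, specificity, PPV, NPV, DA) are all simple functions of the confusion-matrix counts. A generic sketch with illustrative counts, not the study's raw data:

```python
# Diagnostic-accuracy metrics from confusion-matrix counts:
# tp/fp/fn/tn = true positives, false positives, false negatives, true
# negatives. The example counts below are invented for illustration.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),          # detected among diseased
        "specificity": tn / (tn + fp),          # cleared among healthy
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),   # diagnostic accuracy
    }

m = diagnostic_metrics(tp=22, fp=3, fn=3, tn=72)
print(m["accuracy"])   # (22 + 72) / 100 = 0.94
```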
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy with which cosmological parameters can be measured with a given experiment, and to optimize the design of experiments. However, the standard approach usually assumes that both the data and the parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours.
The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
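The Fisher-matrix machinery underlying these forecasts can be sketched on a toy Gaussian model. This is the generic textbook construction, for an invented linear model, not the SUNGLASS analysis: F_ij = Σ_k (∂μ_k/∂θ_i)(∂μ_k/∂θ_j)/σ², and the forecast marginal 1σ error on parameter i is sqrt((F⁻¹)_ii).

```python
import numpy as np

# Generic Fisher-matrix forecast for a toy model y_k = a + b*x_k with
# independent Gaussian noise sigma. The derivatives of the mean with
# respect to (a, b) are (1, x_k). All numbers are illustrative.

x = np.linspace(0.0, 1.0, 50)
sigma = 0.1
dmu = np.column_stack([np.ones_like(x), x])      # d(mean)/d(a, b) per data point

F = dmu.T @ dmu / sigma ** 2                     # 2x2 Fisher matrix
cov = np.linalg.inv(F)                           # forecast parameter covariance
marginal_errors = np.sqrt(np.diag(cov))          # forecast 1-sigma errors

# Conditional (fixed-b) error on a is 1/sqrt(F_aa); marginalizing over a
# correlated parameter can only inflate the error, never shrink it.
cond_a = 1.0 / np.sqrt(F[0, 0])
print(marginal_errors[0] >= cond_a)   # True
```

The paper's point is that the analogous forecast for real surveys is too optimistic when non-linear mode coupling makes the data non-Gaussian; the construction itself is unchanged, only the assumed covariance is.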
Weavers, Paul T; Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Tryggestad, Erik J; Gunter, Jeffrey L; McGee, Kiaran P; Litwiller, Daniel V; Hwang, Ken-Pin; Bernstein, Matt A
2017-05-01
Spatial position accuracy in magnetic resonance imaging (MRI) is an important concern for a variety of applications, including radiation therapy planning, surgical planning, and longitudinal studies of morphologic changes to study neurodegenerative diseases. Spatial accuracy is strongly influenced by gradient linearity. This work presents a method for characterizing the gradient non-linearity fields on a per-system basis, and using this information to provide improved, higher-order (9th vs. 5th) spherical harmonic coefficients for better spatial accuracy in MRI. A large fiducial phantom containing 5229 water-filled spheres in a grid pattern is scanned with the MR system, and the positions of all the fiducials are measured and compared to the corresponding ground-truth fiducial positions as reported from a computed tomography (CT) scan of the object. Systematic errors from off-resonance (i.e., B0) effects are minimized with the use of increased receiver bandwidth (±125 kHz) and two acquisitions with reversed readout gradient polarity. The spherical harmonic coefficients are estimated using an iterative process, and can subsequently be used to correct for gradient non-linearity. Test-retest stability was assessed with five repeated measurements on a single scanner, and cross-scanner variation on four different, identically configured 3T wide-bore systems. A decrease in the root-mean-square error (RMSE) over a 50 cm diameter spherical volume from 1.80 mm to 0.77 mm is reported here when replacing the vendor's standard 5th-order spherical harmonic coefficients with custom-fitted 9th-order coefficients, and from 1.5 mm to 1 mm when extending custom-fitted 5th-order correction to 9th order. Minimum RMSE varied between scanners, but was stable with repeated measurements on the same scanner.
The results suggest that the proposed methods may be used on a per-system basis to more accurately calibrate MR gradient non-linearity coefficients when compared to vendor standard corrections. Copyright © 2016 Elsevier Inc. All rights reserved.
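The coefficient-fitting step above can be sketched in miniature: measure displacements at known positions, least-squares fit them to a smooth basis, and check that a higher-order fit lowers the residual. For simplicity this sketch swaps the solid spherical harmonics used on the scanner for a 1D odd-polynomial basis; the distortion profile and noise level are invented.

```python
import numpy as np

# Hedged sketch of gradient-distortion calibration: fit measured fiducial
# displacements to a smooth basis by least squares and compare the RMSE of
# a low-order vs a higher-order fit. A 1D polynomial basis stands in for
# the spherical harmonics of the actual method; all numbers are invented.

rng = np.random.default_rng(1)
z = np.linspace(-0.25, 0.25, 200)                        # position (m)
true_disp = 1e-3 * z + 4e-2 * z ** 3 + 2e-1 * z ** 5     # smooth distortion (m)
measured = true_disp + rng.normal(0, 2e-6, z.size)       # measurement noise

def fit_rmse(order):
    # Odd powers only, mimicking the odd symmetry of gradient fields.
    basis = np.column_stack([z ** p for p in range(1, order + 1, 2)])
    coef, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    return np.sqrt(np.mean((basis @ coef - true_disp) ** 2))

rmse_low, rmse_high = fit_rmse(3), fit_rmse(5)
print(rmse_high < rmse_low)   # the higher-order fit captures more distortion
```

This mirrors the paper's 5th-vs-9th-order comparison in spirit: once the basis spans the true field, the residual drops to the measurement-noise floor.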
A Revolute Joint With Linear Load-Displacement Response for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Lake, Mark S.; Warren, Peter A.; Peterson, Lee D.
1996-01-01
NASA Langley Research Center is developing key structures and mechanisms technologies for micron-accuracy, in-space deployment of future space instruments. Achieving micron-accuracy deployment requires significant advancements in deployment mechanism design, such as the revolute joint presented herein. The joint exhibits a load-cycling response that is essentially linear, with less than two percent hysteresis, and it rotates with less than one in.-oz of resistance. A prototype reflector metering truss incorporating the joint exhibits only a few microns of kinematic error under repeated deployment and impulse loading. No other mechanically deployable structure found in the literature has been demonstrated to be this kinematically accurate.
Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit
NASA Astrophysics Data System (ADS)
Sikharulidze, Yu. G.; Korchagin, A. N.
2002-01-01
The problem of transporting the results of experiments and observations to Earth frequently arises in space research. Its simplest and lowest-cost solution is the employment of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere. This can result in a large spread of landing points, which makes it difficult to search for the spacecraft and often jeopardizes a safe landing. In this work, the choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, adequate conditions of entry into the atmosphere with limited heating and overload, and the possibility of landing within a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of landing accuracy: the errors of braking impulse execution, variations of atmospheric density and wind, the error in the specification of the ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver assures the maximum absolute value of the reentry angle and the insensitivity of the descent trajectory to small errors of orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible error of the landing point due to the error in the specification of the ballistic coefficient does not depend (in the linear approximation) upon its value, but only upon the reentry angle and the accuracy of specification of this coefficient. A guided parachute with an aerodynamic efficiency of about two should be used in the last leg of the reentry trajectory. This will allow one to land within a prescribed range and to produce adequate conditions for the interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.
Age estimation in northern Chinese children by measurement of open apices in tooth roots.
Guo, Yu-Cheng; Yan, Chun-Xia; Lin, Xing-Wei; Zhou, Hong; Li, Ju-Ping; Pan, Feng; Zhang, Zhi-Yong; Wei, Lai; Tang, Zheng; Chen, Teng
2015-01-01
The aim of this study was to assess the accuracy of Cameriere's method of dental age estimation in the northern Chinese population. A sample of orthopantomographs of 785 healthy children (397 girls and 388 boys) aged between 5 and 15 years was collected. The seven left permanent mandibular teeth were evaluated with Cameriere's method. The sample was split into a training set, used to develop a Chinese-specific prediction formula, and a test set, used to validate this newly developed formula. In the training dataset, the variables gender (g), x3 (canine), x4 (first premolar), x7 (second molar), N0, and the first-order interaction between s and N0 contributed significantly to the fit, yielding the following linear regression formula: Age = 10.202 + 0.826g − 4.068x3 − 1.536x4 − 1.959x7 + 0.536N0 − 0.219s·N0, where g is a dummy variable (1 for boys, 0 for girls). The equation explained 91.2% (R² = 0.912) of the total deviance. In the test dataset, the accuracy of the European and Chinese formulas was determined as the difference between estimated dental age (DA) and chronological age (CA). The European formula, verified on the collected Chinese children, underestimated chronological age with a mean difference of around −0.23 year, while the Chinese formula underestimated it with a mean difference of −0.04 year. Significant differences in mean differences in years (DA − CA) and absolute differences (AD) between the Chinese-specific prediction formula and Cameriere's European formula were observed. In conclusion, a Chinese-specific prediction formula based on a large Chinese reference sample could improve age prediction accuracy in children.
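The reported regression reduces to a one-line computation. The coefficients below are taken directly from the abstract; the variable names follow the paper's notation (g, x3, x4, x7, N0, s), but the function itself is an illustrative sketch, not the authors' code.

```python
def estimate_dental_age(g, x3, x4, x7, n0, s):
    """Chinese-specific Cameriere-style dental age estimate.
    g: 1 for boys, 0 for girls; x3, x4, x7: normalized open-apex measurements
    of canine, first premolar and second molar; n0: number of teeth with
    closed root apices; s: sum of normalized open apices."""
    return (10.202 + 0.826 * g
            - 4.068 * x3 - 1.536 * x4 - 1.959 * x7
            + 0.536 * n0 - 0.219 * s * n0)
```

For example, a boy with all seven evaluated apices closed (n0 = 7, all x-terms and s zero) would be estimated at roughly 14.8 years.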
Dołowy, Małgorzata; Kulpińska-Kucia, Katarzyna; Pyka, Alina
2014-01-01
A new specific, precise, accurate, and robust TLC-densitometric method has been developed for the simultaneous determination of hydrocortisone acetate and lidocaine hydrochloride in a combined pharmaceutical formulation. The chromatographic analysis was carried out using a mobile phase consisting of chloroform + acetone + ammonia (25%) in volume composition 8:2:0.1 and silica gel 60F254 plates. Densitometric detection was performed in UV at wavelengths of 200 nm and 250 nm for lidocaine hydrochloride and hydrocortisone acetate, respectively. The proposed method was validated in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and robustness. The TLC procedure is linear in the concentration ranges of 3.75–12.50 μg·spot⁻¹ for hydrocortisone acetate and 1.00–2.50 μg·spot⁻¹ for lidocaine hydrochloride. The developed method was found to be accurate (coefficient of variation CV [%] less than 3%), precise (CV [%] less than 2%), specific, and robust. For hydrocortisone acetate, the LOQ is 0.198 μg·spot⁻¹ and the LOD is 0.066 μg·spot⁻¹; the LOQ and LOD values for lidocaine hydrochloride are 0.270 and 0.090 μg·spot⁻¹, respectively. The assay values of both bioactive substances are consistent with the limits recommended by the Pharmacopoeia. PMID:24526880
Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei
2013-01-01
Given a single index, receiver operating characteristic (ROC) curve analysis is routinely used to characterize performance in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, clinical ratings, and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes the available information. Algorithmic/analytic approaches that combine multiple indices have long been used to incorporate multiple sources simultaneously. In this study, we propose an alternative that combines multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct a multivariate ROC (multiV-ROC) and to characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without leave-one-out cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach provides a natural and practical alternative with improved classification accuracy compared to the univariate ROC and linear discriminant analysis.
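The logical-combination idea can be sketched compactly: each index is binarized at a threshold and the per-index calls are merged with "AND," "OR," or "at least n." The function names and the convention that values at or above threshold count as positive are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def combine_indices(indices, thresholds, rule="AND", n=None):
    """Binarize each index at its threshold, then merge the per-index
    positive calls with a logical rule ('AND', 'OR', or 'at least n')."""
    calls = np.asarray(indices) >= np.asarray(thresholds)
    if rule == "AND":
        return calls.all(axis=1)
    if rule == "OR":
        return calls.any(axis=1)
    return calls.sum(axis=1) >= n  # "at least n" positive indices

def sensitivity_specificity(pred, truth):
    """Empirical sensitivity and specificity of binary predictions."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    return pred[truth].mean(), (~pred)[~truth].mean()
```

Sweeping the thresholds for a fixed rule traces out one multiV-ROC operating curve.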
Yokoo, Takeshi; Bydder, Mark; Hamilton, Gavin; Middleton, Michael S.; Gamst, Anthony C.; Wolfson, Tanya; Hassanein, Tarek; Patton, Heather M.; Lavine, Joel E.; Schwimmer, Jeffrey B.; Sirlin, Claude B.
2009-01-01
Purpose: To assess the accuracy of four fat quantification methods at low-flip-angle multiecho gradient-recalled-echo (GRE) magnetic resonance (MR) imaging in nonalcoholic fatty liver disease (NAFLD) by using MR spectroscopy as the reference standard. Materials and Methods: In this institutional review board–approved, HIPAA-compliant prospective study, 110 subjects (29 with biopsy-confirmed NAFLD, 50 overweight and at risk for NAFLD, and 31 healthy volunteers) (mean age, 32.6 years ± 15.6 [standard deviation]; range, 8–66 years) gave informed consent and underwent MR spectroscopy and GRE MR imaging of the liver. Spectroscopy involved a long repetition time (to suppress T1 effects) and multiple echo times (to estimate T2 effects); the reference fat fraction (FF) was calculated from T2-corrected fat and water spectral peak areas. Imaging involved a low flip angle (to suppress T1 effects) and multiple echo times (to estimate T2* effects); imaging FF was calculated by using four analysis methods of progressive complexity: dual echo, triple echo, multiecho, and multiinterference. All methods except dual echo corrected for T2* effects. The multiinterference method corrected for multiple spectral interference effects of fat. For each method, the accuracy for diagnosis of fatty liver, as defined with a spectroscopic threshold, was assessed by estimating sensitivity and specificity; fat-grading accuracy was assessed by comparing imaging and spectroscopic FF values by using linear regression. Results: Dual-echo, triple-echo, multiecho, and multiinterference methods had a sensitivity of 0.817, 0.967, 0.950, and 0.983 and a specificity of 1.000, 0.880, 1.000, and 0.880, respectively. On the basis of regression slope and intercept, the multiinterference (slope, 0.98; intercept, 0.91%) method had high fat-grading accuracy without statistically significant error (P > .05). 
Dual-echo (slope, 0.98; intercept, −2.90%), triple-echo (slope, 0.94; intercept, 1.42%), and multiecho (slope, 0.85; intercept, −0.15%) methods had statistically significant error (P < .05). Conclusion: Relaxation- and interference-corrected fat quantification at low-flip-angle multiecho GRE MR imaging provides high diagnostic and fat-grading accuracy in NAFLD. © RSNA, 2009 PMID:19221054
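The abstract does not spell out the estimators, but the spectroscopic reference FF and the simplest imaging method follow standard relations; the sketch below uses the textbook two-point Dixon form for the dual-echo estimate (no T2*, T1, or multi-peak correction), which is an assumption for illustration rather than the study's exact implementation.

```python
def spectroscopy_ff(fat_peak_area, water_peak_area):
    """Reference fat fraction from T2-corrected spectral peak areas."""
    return fat_peak_area / (fat_peak_area + water_peak_area)

def dual_echo_ff(in_phase, opposed_phase):
    """Two-point Dixon estimate: water = (IP + OP) / 2, fat = (IP - OP) / 2,
    so FF = (IP - OP) / (2 * IP)."""
    water = (in_phase + opposed_phase) / 2.0
    fat = (in_phase - opposed_phase) / 2.0
    return fat / (fat + water)
```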
SU-E-T-120: Dosimetric Characteristics Study of NanoDot™ for In-Vivo Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, A; Wasaye, A; Gohar, R
Purpose: The purpose of the study was to analyze the dosimetric characteristics (energy dependence, reproducibility and dose linearity) of nanoDot™ optically stimulated luminescence dosimeters (OSLDs) and validate their potential use during in-vivo dosimetry, specifically TBI. The manufacturer-stated accuracy is ±10% for the standard nanoDot™. Methods: At AKUH, the InLight microStar OSL dosimetry system has been in use for patient in-vivo dosimetry since 2012. Twenty-five standard nanoDot™ dosimeters were used in the analysis. Sensitivity and reproducibility were tested in the first part with 6 MV and 18 MV Varian x-ray beams. Each OSLD was irradiated to a dose of 100 cGy at nominal SSD (100 cm). All the OSLDs were read 3 times and the readings averaged. Dose linearity and calibration were also assessed with the same beams in the common clinical dose range of 0–500 cGy. In addition, verification of TBI absolute dose at extended SSD (500 cm) was performed. Results: The reproducibility observed with the OSLDs was better than the manufacturer-stated limits. Measured doses varied by less than ±2% in 19 (76%) OSLDs and by less than ±3% in 6 (24%) OSLDs. Their sensitivity was approximately 525 counts per cGy. Good agreement was observed between measurements, with a standard deviation of 1.8%. A linear dose response was observed with the OSLDs for both 6 and 18 MV beams in the 0–500 cGy dose range. TBI doses measured at 500 cm SSD were confirmed to be within ±0.5% and ±1.3% of the ion-chamber-measured doses for the 6 and 18 MV beams, respectively. Conclusion: The dosimetric results demonstrate that the nanoDot™ can potentially be used for in-vivo dosimetry verification in various clinical situations with a high degree of accuracy and precision. In addition, the OSLDs exhibit good dose reproducibility, with a standard deviation of 1.8%. There was no significant difference in their response to the 6 and 18 MV beams, and the dose response was linear.
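With a linear dose response, readout reduces to averaging the repeated reads and dividing by a sensitivity factor. The ~525 counts/cGy value is the sensitivity quoted above; the function is an illustrative sketch, not the microStar reader software.

```python
def osld_dose_cGy(readings, counts_per_cGy=525.0):
    """Average repeated OSLD readouts (each dot is read 3 times) and
    convert counts to dose via the linear sensitivity factor."""
    mean_counts = sum(readings) / len(readings)
    return mean_counts / counts_per_cGy
```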
Vacondio, Federica; Silva, Claudia; Fioni, Alessandro; Mor, Marco; Rivara, Mirko; Bordi, Fabrizio; Flammini, Lisa; Ballabeni, Vigilio; Barocelli, Elisabetta
2008-01-07
A rapid, simple and sensitive liquid chromatography-mass spectrometry (LC-MS) method was developed and validated for the determination of the imidazole H(3) antagonist ROS203 in rat plasma, using the superior homologue ROS287 as internal standard (IS). Analyses were performed on an Agilent 1100 Series HPLC system employing a Supelco Ascentis C(18) column and isocratic elution with acetonitrile-10 mM ammonium acetate buffer, pH 4.0 (30:70, v/v), at a flow rate of 0.25 mL/min. An Applied Biosystems/MDS Sciex 150-EX single quadrupole mass spectrometer equipped with an electrospray ionization interface was employed, operating in positive ion mode. Plasma samples were deproteinized with acetonitrile (1:2), evaporated under a nitrogen stream, and reconstituted in the mobile phase, and 5 μL were injected into the system. The retention times of ROS203 and the IS were 2.20 and 2.90 min, respectively. Calibration curves in spiked plasma were linear over the concentration range of 2.61-2610 ng/mL, with determination coefficients >0.99. The lower limit of quantification (LLOQ) was 2.61 ng/mL. The accuracy of the method was within 15%. Intra- and inter-day relative standard deviations were less than or equal to 9.50% and 7.19%, respectively. The applicability of the LC-MS method was tested on plasma samples obtained after i.p. administration of ROS203 to female Wistar rats to support an in vivo behavioral study. The specificity of the method was confirmed by the absence of interference from endogenous substances. The reported method provides the sensitivity, linearity, precision, accuracy and specificity necessary to allow the determination of ROS203 in rat plasma samples to support further pharmacokinetic assays.
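The linearity claim (determination coefficients >0.99) corresponds to an ordinary least-squares calibration line; this generic helper is a sketch of that check, not the validation software used in the study.

```python
import numpy as np

def calibration_line(conc, response):
    """Fit response = slope * conc + intercept by least squares and
    return (slope, intercept, R^2), the coefficient of determination."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    fitted = slope * conc + intercept
    ss_res = np.sum((response - fitted) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```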
Specificity, Privacy, and Degeneracy in the CD4 T Cell Receptor Repertoire Following Immunization
Sun, Yuxin; Best, Katharine; Cinelli, Mattia; Heather, James M.; Reich-Zeliger, Shlomit; Shifrut, Eric; Friedman, Nir; Shawe-Taylor, John; Chain, Benny
2017-01-01
T cells recognize antigen using a large and diverse set of antigen-specific receptors created by a complex process of imprecise somatic cell gene rearrangements. In response to antigen/receptor binding, antigen-specific T cells then divide to form memory and effector populations. We applied high-throughput sequencing to investigate the global changes in T cell receptor sequences following immunization with ovalbumin (OVA) and adjuvant, to understand how adaptive immunity achieves specificity. Each immunized mouse contained a predominantly private but related set of expanded CDR3β sequences. We used machine learning to identify common patterns which distinguished repertoires of mice immunized with adjuvant alone from those immunized with adjuvant plus OVA. The CDR3β sequences were deconstructed into sets of overlapping contiguous amino acid triplets. The frequencies of these motifs were used to train the linear programming boosting algorithm (LPBoost) to classify TCR repertoires. LPBoost could distinguish between the two classes of repertoire with accuracies above 80%, using a small subset of triplet sequences present at defined positions along the CDR3. The results suggest a model in which such motifs confer degenerate antigen specificity in the context of a highly diverse and largely private set of T cell receptors. PMID:28450864
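The triplet-decomposition step described above is simple to sketch: slide a window of width three along each CDR3β sequence and pool the counts. Function names are illustrative, not from the paper's code.

```python
from collections import Counter

def triplets(cdr3):
    """Overlapping contiguous amino-acid triplets of one CDR3 sequence."""
    return [cdr3[i:i + 3] for i in range(len(cdr3) - 2)]

def triplet_frequencies(repertoire):
    """Pooled triplet counts over a repertoire of CDR3 sequences; these
    frequencies are the features fed to a classifier such as LPBoost."""
    counts = Counter()
    for seq in repertoire:
        counts.update(triplets(seq))
    return counts
```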
Tyson-Parry, Maree M; Sailah, Jessica; Boyes, Mark E; Badcock, Nicholas A
2015-10-01
This research investigated the relationship between the attentional blink (AB) and reading in typical adults. The AB is a deficit in processing the second of two rapidly presented targets when it occurs in close temporal proximity to the first. Specifically, this experiment examined whether the AB was related to both phonological and sight-word reading abilities, and whether the relationship was mediated by accuracy on a single-target rapid serial visual presentation task (single-target accuracy). Undergraduate university students completed a battery of tests measuring reading ability, non-verbal intelligence, and rapid automatised naming, in addition to rapid serial visual presentation tasks in which they were required to identify either two (AB task) or one (single-target task) targets (outlined shapes: circle, square, diamond, cross, and triangle) in a stream of random-dot distractors. The duration of the AB was related to phonological reading (n=41, β=-0.43): participants who exhibited longer ABs had poorer phonemic decoding skills. The AB was not related to sight-word reading. Single-target accuracy did not mediate the relationship between the AB and reading, but was significantly related to AB depth (non-linear fit, R²=.50): depth reflects the maximal cost in T2 report accuracy during the AB. The differential relationship between the AB and phonological versus sight-word reading implicates common resources used for phonemic decoding and target consolidation, which may be involved in cognitive control. The relationship between single-target accuracy and the AB is discussed in terms of cognitive preparation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Amuzu-Aweh, E N; Bijma, P; Kinghorn, B P; Vereijken, A; Visscher, J; van Arendonk, J Am; Bovenhuis, H
2013-12-01
Prediction of heterosis has a long history with mixed success, partly due to low numbers of genetic markers and/or small data sets. We investigated the prediction of heterosis for egg number, egg weight and survival days in domestic white Leghorns, using ∼400 000 individuals from 47 crosses and allele frequencies on ∼53 000 genome-wide single nucleotide polymorphisms (SNPs). When heterosis is due to dominance, and dominance effects are independent of allele frequencies, heterosis is proportional to the squared difference in allele frequency (SDAF) between parental pure lines (not necessarily homozygous). Under these assumptions, a linear model including regression on SDAF partitions crossbred phenotypes into pure-line values and heterosis, even without pure-line phenotypes. We therefore used models where phenotypes of crossbreds were regressed on the SDAF between parental lines. Accuracy of prediction was determined using leave-one-out cross-validation. SDAF predicted heterosis for egg number and weight with an accuracy of ∼0.5, but did not predict heterosis for survival days. Heterosis predictions allowed preselection of pure lines before field-testing, saving ∼50% of field-testing cost with only 4% loss in heterosis. Accuracies from cross-validation were lower than from the model-fit, suggesting that accuracies previously reported in literature are overestimated. Cross-validation also indicated that dominance cannot fully explain heterosis. Nevertheless, the dominance model had considerable accuracy, clearly greater than that of a general/specific combining ability model. This work also showed that heterosis can be modelled even when pure-line phenotypes are unavailable. We concluded that SDAF is a useful predictor of heterosis in commercial layer breeding.
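Under the stated dominance assumptions, the heterosis regressor is simply the squared difference in allele frequency averaged over genome-wide SNPs. The sketch below computes that scalar per line pair; names and the use of a plain mean are assumptions for illustration.

```python
import numpy as np

def sdaf(p_line_a, p_line_b):
    """Mean squared difference in allele frequency (SDAF) between two
    pure lines, averaged over SNPs."""
    pa = np.asarray(p_line_a, dtype=float)
    pb = np.asarray(p_line_b, dtype=float)
    return float(np.mean((pa - pb) ** 2))
```

Crossbred phenotypes can then be regressed on this single scalar per parental line pair to partition pure-line values from heterosis.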
Quantifying circular RNA expression from RNA-seq data using model-based framework.
Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun
2017-07-15
Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, the cell type- and tissue-specific expression of circRNAs implicates crucial functions in many biological processes. Hence, quantifying circRNA expression from high-throughput RNA-seq data is becoming increasingly important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we propose a novel strategy that transforms circular transcripts into pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate the expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, expression level, and the ratio of circular to linear transcripts, affected the quantification performance for circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression in both simulated and real ribosomal-RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, considering circular transcripts in expression quantification from rRNA-depleted RNA-seq data substantially increased the accuracy of linear transcript expression estimates. Our proposed strategy is implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir . tongz@medicine.nevada.edu or wanjun.gu@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
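The core transformation can be sketched in one line: a circular transcript becomes "pseudo-linear" by appending enough of its own 5' end that reads crossing the back-splice junction align contiguously. The abstract does not state the exact pad length Sailfish-cir uses, so read_len - 1 below is an assumption for illustration.

```python
def to_pseudo_linear(circ_seq, read_len):
    """Turn a circular transcript into a pseudo-linear one by appending
    the first (read_len - 1) bases, so that any read spanning the
    back-splice junction maps contiguously (assumed pad length)."""
    return circ_seq + circ_seq[:read_len - 1]
```

The resulting sequences can then be quantified alongside ordinary linear transcripts by a model-based tool such as Sailfish.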
NASA Astrophysics Data System (ADS)
Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat
2018-07-01
In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for determining the steady-state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. The stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the responses of these limiting linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes, defined as linear combinations of the modal vectors of the limiting linear systems, is proposed to determine the periodic response of nonlinear systems. In this method, the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This decreases the number of modes that must be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved and directly curtails the computational time for response calculation. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using the describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method is applied to two different systems: a lumped-parameter model and a finite element model. Several case studies are performed, and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method, which utilizes the mode shapes of the underlying linear system.
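The central construction, a hybrid mode shape as a linear combination of the limiting systems' modal vectors, reduces to a matrix-vector product; a minimal sketch with assumed names:

```python
import numpy as np

def hybrid_mode(limit_modes, weights):
    """Hybrid mode shape: weighted linear combination of the modal
    vectors of the limiting linear systems (one column per vector)."""
    return np.asarray(limit_modes, dtype=float) @ np.asarray(weights, dtype=float)
```

Retaining a few such hybrid modes in place of many underlying linear modes is what shrinks the set of nonlinear algebraic equations to be solved.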
NASA Astrophysics Data System (ADS)
Namani, Ravi
Mechanical properties are essential for understanding diseases that afflict soft tissues, such as osteoarthritis, which degrades cartilage, and hypertension, which alters cardiovascular arteries. Although the linear elastic modulus is routinely measured for hard materials, standard methods are not available for extracting the nonlinear elastic, linear elastic, and time-dependent properties of soft tissues. Consequently, the focus of this work is to develop indentation methods for soft biological tissues; since analytical solutions are not available in the general context, finite element simulations are used. First, parametric studies of finite indentation of hyperelastic layers are performed to examine whether indentation has the potential to identify nonlinear elastic behavior. To answer this, spherical, flat-ended conical, and cylindrical tips are examined, and the influence of thickness is exploited. The influence of the specimen/substrate boundary condition (slip or no-slip) is also clarified. Second, a new inverse method, the hyperelastic extraction algorithm (HPE), was developed to extract two nonlinear elastic parameters from indentation force-depth data, the basic measurement in an indentation test. The accuracy of the extracted parameters and the influence of measurement noise on this accuracy were obtained. This showed that the standard Berkovich tip could extract only one parameter with sufficient accuracy, since the indentation force-depth curve has limited sensitivity to both nonlinear elastic parameters. Third, indentation methods for testing tissues from small animals were explored. New methods for flat-ended conical tips are derived. These account for practical test issues such as the difficulty in locating the surface of soft specimens. Finite element simulations are also used to elucidate the influence of specimen curvature on the indentation force-depth curve.
Fourth, the influence of inhomogeneity and material anisotropy on the extracted "average" linear elastic modulus was studied. The focus here is on murine tibial cartilage, since recent experiments have shown that the modulus measured with a 15 μm tip is considerably larger than that obtained with a 90 μm tip. It is shown that a depth-dependent modulus could give rise to such a size effect. Lastly, parametric studies were performed within the small-strain setting to understand the influence of permeability and viscoelastic properties on the indentation stress-relaxation response. The focus here is on cartilage, and specific test protocols (single-step vs. multi-step stress relaxation) are explored. An inverse algorithm was developed to extract the poroviscoelastic parameters. A sensitivity study using this algorithm shows that the instantaneous elastic modulus (a measure of the viscous relaxation) can be extracted with very good accuracy, but the permeability and long-time relaxation constant cannot. The thesis concludes with the implications of these studies. The potential and limitations of indentation tests for studying cartilage and other soft tissues are discussed.
Research on the error model of airborne celestial/inertial integrated navigation system
NASA Astrophysics Data System (ADS)
Zheng, Xiaoqiang; Deng, Xiaoguo; Yang, Xiaoxu; Dong, Qiang
2015-02-01
The celestial navigation subsystem of an airborne celestial/inertial integrated navigation system periodically corrects the positioning error and heading drift of the inertial navigation system, which allows the inertial navigation system to greatly improve its long-endurance navigation accuracy. Thus, the accuracy of the airborne celestial navigation subsystem directly determines the accuracy of the integrated navigation system over long missions. By building a mathematical model of the airborne celestial navigation system based on the inertial navigation system and using the method of linear coordinate transformation, we establish the error-transfer equation for the positioning algorithm of the airborne celestial system. On this basis, we built the positioning-error model of the celestial navigation. We then used this model to analyze and simulate, in MATLAB, the positioning error caused by errors of the star-tracking platform. Finally, the positioning-error model was verified using star observations obtained from an optical measurement device on a range, at a location known in advance. The analysis and simulation results show that the level accuracy and north accuracy of the tracking platform are the main factors limiting the positioning accuracy of airborne celestial navigation systems, and that the positioning error has an approximately linear relationship with the level and north errors of the tracking platform. The errors of the verification results are within 1000 m, which shows that the model is correct.
Wen, Ning; Kim, Joshua; Doemer, Anthony; Glide-Hurst, Carri; Chetty, Indrin J; Liu, Chang; Laugeman, Eric; Xhaferllari, Ilma; Kumarasiri, Akila; Victoria, James; Bellon, Maria; Kalkanis, Steve; Siddiqui, M Salim; Movsas, Benjamin
2018-06-01
The purpose of this study was to investigate the systematic localization accuracy, treatment planning capability, and delivery accuracy of an integrated magnetic resonance imaging guided Linear Accelerator (MR-Linac) platform for stereotactic radiosurgery. The phantom for the end-to-end test comprises three different compartments: a rectangular MR/CT target phantom, a Winston-Lutz cube, and a rectangular MR/CT isocenter phantom. Hidden target tests were performed at gantry angles of 0, 90, 180, and 270 degrees to quantify the systematic accuracy. Five patient plans with a total of eleven lesions were used to evaluate the dosimetric accuracy. Single-isocenter IMRT treatment plans using 10-15 coplanar beams were generated to treat the multiple metastases. The end-to-end localization accuracy of the system was 1.0 ± 0.1 mm. The conformity index, homogeneity index and gradient index of the plans were 1.26 ± 0.22, 1.22 ± 0.10, and 5.38 ± 1.44, respectively. The average absolute point dose difference between measured and calculated dose was 1.64 ± 1.90%, and the mean percentage of points passing the 3%/1 mm gamma criteria was 96.87%. Our experience demonstrates that excellent plan quality and delivery accuracy was achievable on the MR-Linac for treating multiple brain metastases with a single isocenter. Copyright © 2018 Elsevier B.V. All rights reserved.
Linear and nonlinear spectroscopy from quantum master equations.
Fetherolf, Jonathan H; Berkelbach, Timothy C
2017-12-28
We investigate the accuracy of the second-order time-convolutionless (TCL2) quantum master equation for the calculation of linear and nonlinear spectroscopies of multichromophore systems. We show that even for systems with non-adiabatic coupling, the TCL2 master equation predicts linear absorption spectra that are accurate over an extremely broad range of parameters and well beyond what would be expected based on the perturbative nature of the approach; non-equilibrium population dynamics calculated with TCL2 for identical parameters are significantly less accurate. For third-order (two-dimensional) spectroscopy, the importance of population dynamics and the violation of the so-called quantum regression theorem degrade the accuracy of TCL2 dynamics. To correct these failures, we combine the TCL2 approach with a classical ensemble sampling of slow microscopic bath degrees of freedom, leading to an efficient hybrid quantum-classical scheme that displays excellent accuracy over a wide range of parameters. In the spectroscopic setting, the success of such a hybrid scheme can be understood through its separate treatment of homogeneous and inhomogeneous broadening. Importantly, the presented approach has the computational scaling of TCL2, with the modest addition of an embarrassingly parallel prefactor associated with ensemble sampling. The presented approach can be understood as a generalized inhomogeneous cumulant expansion technique, capable of treating multilevel systems with non-adiabatic dynamics.
Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds
Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang
2015-01-01
Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm with a cirrus optical depth (COD) at 0.55 μm of 0.4 and in nadir view. A correction term in the COD linear function was added to the GSW algorithm to extend the GSW algorithm to cirrus cloudy conditions. The COD was acquired by a look up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear function was expressed as a multiple linear model of the top of the atmospheric brightness temperatures of MODIS channels 31–34 and as the difference between split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total errors from all the uncertainties of input parameters, extension algorithm accuracy, and GSW algorithm accuracy were less than 2.5 K in nadir view. Finally, the Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K using the proposed extension algorithm for cirrus skies. PMID:25928059
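The cirrus extension described above amounts to adding a correction term that is linear in cirrus optical depth (COD) to the generalized split-window (GSW) form. A minimal sketch, assuming the classic GSW structure; the coefficients a0..a6 and the slope k are placeholders, not the published values:

```python
# Hedged sketch of a GSW land surface temperature retrieval with a
# cirrus correction linear in COD. All coefficients are invented.
def gsw_lst(t31, t32, emis_mean, emis_diff, a):
    """Classic GSW form: LST (K) from MODIS split-window brightness temps."""
    tm, td = (t31 + t32) / 2.0, (t31 - t32) / 2.0
    return (a[0]
            + (a[1] + a[2] * (1 - emis_mean) / emis_mean
                    + a[3] * emis_diff / emis_mean**2) * tm
            + (a[4] + a[5] * (1 - emis_mean) / emis_mean
                    + a[6] * emis_diff / emis_mean**2) * td)

def gsw_lst_cirrus(t31, t32, emis_mean, emis_diff, a, k, cod):
    """Extended GSW: add the COD-linear correction term with slope k."""
    return gsw_lst(t31, t32, emis_mean, emis_diff, a) + k * cod

a = [0.0, 1.02, 0.5, -0.3, 4.2, 10.0, -25.0]   # placeholder coefficients
clear = gsw_lst(295.0, 293.5, 0.98, 0.005, a)
cirrus = gsw_lst_cirrus(295.0, 293.5, 0.98, 0.005, a, k=8.0, cod=0.4)
```

In the paper the slope k is itself modeled from the channel 31-34 brightness temperatures and the split-window emissivity difference; here it is a fixed placeholder.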
Ritto, F G; Schmitt, A R M; Pimentel, T; Canellas, J V; Medeiros, P J
2018-02-01
The aim of this study was to determine whether virtual surgical planning (VSP) is an accurate method for positioning the maxilla when compared to conventional articulator model surgery (CMS), through the superimposition of computed tomography (CT) images. This retrospective study included the records of 30 adult patients who underwent bimaxillary orthognathic surgery. Two groups were created according to the treatment planning performed: CMS and VSP. The treatment planning protocol was the same for all patients. Pre- and postoperative CT images were superimposed and the linear distances between upper jaw reference points were measured. Measurements were then compared to the treatment planning, and the difference in accuracy between CMS and VSP was determined using the t-test for independent samples. The success criterion adopted was a mean linear difference of <2 mm. The mean linear difference between planned and obtained movements was 1.27 ± 1.05 mm for CMS and 1.20 ± 1.08 mm for VSP. With CMS, 80% of overlapping reference points had a difference of <2 mm, while for VSP this value was 83.6%. There was no statistically significant difference between the two techniques regarding accuracy (P>0.05). Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Parameter estimation using weighted total least squares in the two-compartment exchange model.
Garpebring, Anders; Löfstedt, Tommy
2018-01-01
The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions, while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
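The bias that motivates WTLS here, noise in the system matrix, is the classic errors-in-variables problem, and it is easy to demonstrate numerically. The toy model below is an illustration of that mechanism only, not the two-compartment exchange model itself:

```python
# Hedged illustration: noise in the regressor (system matrix) biases
# ordinary least squares toward zero -- the errors-in-variables effect
# that (weighted) total least squares is designed to handle.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x_true = rng.normal(0.0, 1.0, n)           # noise-free regressor
x_obs = x_true + rng.normal(0.0, 1.0, n)   # regressor observed with noise
y = beta * x_true                          # exact response

# OLS slope using the noisy regressor is attenuated by roughly
# var(x) / (var(x) + var(noise)) = 0.5 here.
slope_ols = (x_obs @ y) / (x_obs @ x_obs)
print(round(slope_ols, 2))   # ~1.0, well below the true slope of 2.0
```

A total-least-squares fit, which models noise in both x and y, recovers the true slope in this symmetric setting.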
Ortiz, Rocío; Antilén, Mónica; Speisky, Hernán; Aliaga, Margarita E; López-Alarcón, Camilo; Baugh, Steve
2012-01-01
A method was developed for microplate-based oxygen radical absorbance capacity (ORAC) using pyrogallol red (PGR) as the probe (ORAC-PGR). The method was evaluated for linearity, precision, and accuracy. In addition, the antioxidant capacity of commercial beverages, such as wines, fruit juices, and iced teas, was measured. Linearity of the area under the curve (AUC) versus Trolox concentration plots was [AUC = (845 +/- 110) + (23 +/- 2) [Trolox, microM]; R = 0.9961, n = 19]. Analyses showed better precision and accuracy at the highest Trolox concentration (40 microM), with RSD and recovery (REC) values of 1.7 and 101.0%, respectively. The method also showed good linearity for red wine [AUC = (787 +/- 77) + (690 +/- 60) [red wine, microL/mL]; R = 0.9926, n = 17], precision and accuracy with RSD values from 1.4 to 8.3%, and REC values that ranged from 89.7 to 103.8%. Red wines showed higher ORAC-PGR values than white wines, while the ORAC-PGR index of fruit juices and iced teas presented a wide range of results, from 0.6 to 21.6 mM of Trolox equivalents. Product-to-product variability was also observed for juices of the same fruit, showing the differences between brands in the ORAC-PGR index.
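The AUC-versus-concentration linearity check at the heart of an ORAC assay can be sketched as follows; the synthetic decay curves stand in for the probe kinetics (the decay model and all numbers are invented, not assay data):

```python
# Minimal sketch of the ORAC readout: integrate each probe decay curve
# (area under the curve, AUC) and regress AUC against Trolox standards.
import numpy as np

t = np.linspace(0.0, 60.0, 61)                 # minutes, 1-min readings
trolox = np.array([0.0, 10.0, 20.0, 40.0])     # standard concentrations (uM)

def decay(conc):
    """Toy probe decay: antioxidant slows probe consumption (invented)."""
    return np.exp(-t / (5.0 + 0.5 * conc))

dt = t[1] - t[0]
auc = np.array([decay(c).sum() * dt for c in trolox])   # rectangle-rule AUC
slope, intercept = np.polyfit(trolox, auc, 1)           # calibration line
r = np.corrcoef(trolox, auc)[0, 1]                      # linearity check
```

A sample's antioxidant capacity is then read off the calibration line as Trolox equivalents.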
NASA Astrophysics Data System (ADS)
Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.
2015-09-01
With the advent in recent years of techniques devised for the mass production of optical components made with surfaces of arbitrary form (also known as free-form surfaces), a parallel development of measuring systems adapted to this new kind of surface constitutes a real necessity for the industry. Profilometry is one of the preferred methods for assessing the quality of a surface, and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of movement, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus is formed by three X, Y, Z linear motorized positioners plus an additional angular positioner and a tilt positioner, employed to accurately position the surface to be measured and the probe, which can be mechanical or optical, the optical probe being a confocal sensor based on chromatic aberration. Both the optical and mechanical probes guarantee an accuracy better than one micrometer in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed in order to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics based on an Arduino™ controller have been designed and produced in order to synchronize the five motorized positioners and the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvatures of around 10 D can be measured in less than 300 seconds (using three axes) while keeping the resolution in height and curvature at the figures mentioned above.
SPEX: a highly accurate spectropolarimeter for atmospheric aerosol characterization
NASA Astrophysics Data System (ADS)
Rietjens, J. H. H.; Smit, J. M.; di Noia, A.; Hasekamp, O. P.; van Harten, G.; Snik, F.; Keller, C. U.
2017-11-01
Global characterization of atmospheric aerosol in terms of the microphysical properties of the particles is essential for understanding the role of aerosols in Earth's climate [1]. For more accurate predictions of future climate, the uncertainties of the net radiative forcing of aerosols in the Earth's atmosphere must be reduced [2]. Essential parameters that are needed as input in climate models are not only the aerosol optical thickness (AOT), but also particle-specific properties such as the aerosol mean size, the single scattering albedo (SSA) and the complex refractive index. The latter can be used to discriminate between absorbing and non-absorbing aerosol types, and between natural and anthropogenic aerosol. Classification of aerosol types is also very important for air-quality and health-related issues [3]. Remote sensing from an orbiting satellite platform is the only way to globally characterize atmospheric aerosol at a relevant timescale of 1 day [4]. One of the few methods that can be employed for measuring the microphysical properties of aerosols is to observe both radiance and degree of linear polarization of sunlight scattered in the Earth's atmosphere under different viewing directions [5][6][7]. The requirement on the absolute accuracy of the degree of linear polarization PL is very stringent: the absolute error in PL must be smaller than 0.001 + 0.005·PL in order to retrieve aerosol parameters with sufficient accuracy to advance climate modelling and to enable discrimination of aerosol types based on their refractive index for air-quality studies [6][7]. In this paper we present the SPEX instrument, which is a multi-angle spectropolarimeter that can comply with the polarimetric accuracy needed for characterizing aerosols in the Earth's atmosphere. We describe the implementation of spectral polarization modulation in a prototype instrument of SPEX and show results of ground-based measurements from which aerosol microphysical properties are retrieved.
Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J
2018-01-30
Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.
Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.
2014-01-01
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
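The static, linear, and mean forecast models and the role of adjustment points can be illustrated on a toy scalar feature series. Everything below is invented for illustration (a single 1-D feature and an assumed cohort-average step), not the paper's GTV morphology vectors:

```python
# Hedged sketch of three longitudinal forecast models, each re-anchored
# at an "adjustment point" where a new observation becomes available.
import numpy as np

observed = np.array([10.0, 9.6, 9.1, 8.8, 8.2, 7.9])   # toy feature series
population_mean_step = -0.4    # assumed cohort-average change per fraction

def forecast(kind, anchor_idx):
    """Forecast all fractions after the adjustment point anchor_idx."""
    horizon = np.arange(anchor_idx + 1, len(observed)) - anchor_idx
    if kind == "static":   # carry the last observed value forward
        return np.full(horizon.shape, observed[anchor_idx])
    if kind == "linear":   # extrapolate the most recent observed step
        step = observed[anchor_idx] - observed[anchor_idx - 1]
        return observed[anchor_idx] + step * horizon
    if kind == "mean":     # apply the assumed cohort-average step
        return observed[anchor_idx] + population_mean_step * horizon

for kind in ("static", "linear", "mean"):
    err = np.abs(forecast(kind, 2) - observed[3:]).mean()
    print(kind, round(err, 2))
```

Adding more adjustment points shortens every forecast horizon, which is why each extra point reduces the error in the study.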
Lotfy, Hayam Mahmoud; Salem, Hesham; Abdelkawy, Mohammad; Samir, Ahmed
2015-04-05
Five spectrophotometric methods were successfully developed and validated for the determination of betamethasone valerate and fusidic acid in their binary mixture. These methods are the isoabsorptive point method combined with the first derivative (ISO Point-D1) and the recently developed and well-established ratio difference (RD) and constant center coupled with spectrum subtraction (CC) methods, in addition to derivative ratio (1DD) and mean centering of ratio spectra (MCR). A new enrichment technique, called the spectrum addition technique, was used instead of the traditional spiking technique. The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to the pharmaceutical formulation of the drugs, and the results obtained were statistically compared to those of the official methods. The statistical comparison showed that there is no significant difference between the proposed methods and the official ones regarding both accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2015-01-01
The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients by considering only electrocardiogram recordings. As a proof of this concept, our study presents results obtained from eight bipolar patients during their normal daily activities and being elicited according to a specific emotional protocol through the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.
NASA Astrophysics Data System (ADS)
Salem, Hesham; Mohamed, Dalia
2015-04-01
Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug paracetamol (PARA) and the skeletal muscle relaxant dantrolene sodium (DANT). Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS) and mean centering (MC). The other three methods utilize the isoabsorptive point, either at zero order, namely absorbance ratio (AR) and absorbance subtraction (AS), or at the ratio spectrum, namely amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined dosage form. Standard deviation values are less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric methods. The comparison showed that there is no significant difference between the proposed methods and the reported methods regarding both accuracy and precision.
Epoch-based Entropy for Early Screening of Alzheimer's Disease.
Houmani, N; Dreyfus, G; Vialatte, F B
2015-12-01
In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals at both the time level and the spatial level, using local density estimation by a Hidden Markov Model on inter-channel stationary epochs. The investigation is conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset, and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable to hyperparameter variations, and less sensitive to noise and sampling frequency disturbances than the other two complexity measures.
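As a point of reference, the Shannon entropy baseline that epoch-based entropy is compared against can be sketched on a single epoch's amplitude histogram. The signals below are synthetic, not EEG data:

```python
# Minimal sketch of the Shannon-entropy baseline: bin an epoch's
# amplitudes into a histogram and compute the entropy in bits.
import numpy as np

def shannon_entropy(epoch, bins=16):
    """Shannon entropy (bits) of an epoch's amplitude histogram."""
    counts, _ = np.histogram(epoch, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    # Entropy is nonnegative; clamp avoids a spurious -0.0.
    return max(0.0, float(-(p * np.log2(p)).sum()))

rng = np.random.default_rng(1)
flat = rng.uniform(size=1000)            # highly irregular amplitudes
print(shannon_entropy(flat))             # close to log2(16) = 4 bits
print(shannon_entropy(np.ones(1000)))    # 0.0: a flat signal has no disorder
```

Epoch-based entropy differs from this baseline by estimating densities with a Hidden Markov Model over inter-channel stationary epochs rather than a plain histogram.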
[Comparative measurement of urine specific gravity: reagent strips, refractometry and hydrometry].
Costa, Christian Elías; Bettendorff, Carolina; Bupo, Sol; Ayuso, Sandra; Vallejo, Graciela
2010-06-01
The urine specific gravity is commonly used in clinical practice to measure the renal concentration/dilution ability. Measurement can be performed by three methods: hydrometry, refractometry and reagent strips. To assess the accuracy of the different methods, we analyzed 156 consecutive urine samples of pediatric patients during April and May 2007. Urine specific gravity was measured by hydrometry (UD), refractometry (RE) and reagent strips (TR) simultaneously. Urine osmolarity was considered the gold standard and was measured by freezing point depression. Correlation between the different methods was calculated by simple linear regression. A positive and acceptable correlation with osmolarity was found for both RE and UD (r= 0.81 and r= 0.86, respectively). The reagent strips presented low correlation (r= 0.46). We also found good correlation between measurements obtained by UD and RE (r= 0.89). Measurements obtained by TR, however, correlated poorly with UD (r= 0.46). Higher values of specific gravity were observed when measured with RE with respect to UD. Reagent strips are not reliable for measuring urine specific gravity and should not be used as a routine test. Hydrometry and refractometry, however, are acceptable alternatives for measuring urine specific gravity, as long as the same method is used for follow-up.
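The comparison used here reduces to simple linear regression of each method's readings against the osmolarity reference, with r the Pearson correlation. A sketch with invented paired readings (not the study's data):

```python
# Hedged sketch: correlate one method's specific-gravity readings
# against reference osmolarity via simple linear regression.
import numpy as np

osmolarity = np.array([150.0, 300.0, 450.0, 600.0, 750.0, 900.0])  # mOsm/L
refractometer = np.array([1.005, 1.010, 1.014, 1.020, 1.024, 1.029])

r = np.corrcoef(osmolarity, refractometer)[0, 1]        # Pearson r
slope, intercept = np.polyfit(osmolarity, refractometer, 1)
```

Repeating the fit for each method and comparing the r values is exactly the study's accuracy ranking.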
Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models
NASA Technical Reports Server (NTRS)
Buchert, T.; Melott, A. L.; Weiss, A. G.
1993-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). Here we explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used, the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen for all spatial scales, later stages retain this feature only above a certain scale which increases with time. However, third order is not much of an improvement over second order at any stage.
The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at a considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in a forthcoming work.
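The first-order (Zel'dovich) mapping discussed above can be illustrated in one dimension, where particles move ballistically, x(q, t) = q + D(t)·ψ(q), and the density contrast follows from the Jacobian of the map. A toy sine displacement, not the paper's simulations:

```python
# Minimal 1D Zel'dovich sketch: a sine displacement field collapses into
# a pancake (caustic) as the growth factor D approaches 1/max|dpsi/dq|.
import numpy as np

q = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)  # Lagrangian coords
psi = np.sin(q)                                          # displacement field

def density_contrast(D):
    """delta = 1/|1 + D dpsi/dq| - 1 for psi = sin(q), so dpsi/dq = cos(q)."""
    return 1.0 / np.abs(1.0 + D * np.cos(q)) - 1.0

print(density_contrast(0.5).max())    # finite before shell crossing
print(density_contrast(0.99).max())   # blows up approaching the caustic
```

Shell crossing at D = 1 marks the pancake formation; the higher-order Lagrangian corrections studied above modify the displacement mapping, not this Jacobian bookkeeping.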
Mitchell, Alex J; Yadegarfar, Motahare; Gill, John; Stubbs, Brendon
2016-03-01
The Patient Health Questionnaire (PHQ) is the most commonly used measure to screen for depression in primary care but there is still lack of clarity about its accuracy and optimal scoring method. To determine via meta-analysis the diagnostic accuracy of the PHQ-9-linear, PHQ-9-algorithm and PHQ-2 questions to detect major depressive disorder (MDD) among adults. We systematically searched major electronic databases from inception until June 2015. Articles were included that reported the accuracy of PHQ-9 or PHQ-2 questions for diagnosing MDD in primary care defined according to standard classification systems. We carried out a meta-analysis, meta-regression, moderator and sensitivity analysis. Overall, 26 publications reporting on 40 individual studies were included representing 26 902 people (median 502, s.d.=693.7) including 14 760 unique adults of whom 14.3% had MDD. The methodological quality of the included articles was acceptable. The meta-analytic area under the receiver operating characteristic curve of the PHQ-9-linear and the PHQ-2 was significantly higher than the PHQ-9-algorithm, a difference that was maintained in head-to-head meta-analysis of studies. Our best estimates of sensitivity and specificity were 81.3% (95% CI 71.6-89.3) and 85.3% (95% CI 81.0-89.1), 56.8% (95% CI 41.2-71.8) and 93.3% (95% CI 87.5-97.3) and 89.3% (95% CI 81.5-95.1) and 75.9% (95% CI 70.1-81.3) for the PHQ-9-linear, PHQ-9-algorithm and PHQ-2 respectively. For case finding (ruling in a diagnosis), none of the methods were suitable but for screening (ruling out non-cases), all methods were encouraging with good clinical utility, although the cut-off threshold must be carefully chosen. The PHQ can be used as an initial first step assessment in primary care and the PHQ-2 is adequate for this purpose with good acceptability. However, neither the PHQ-2 nor the PHQ-9 can be used to confirm a clinical diagnosis (case finding). None. © The Royal College of Psychiatrists 2016. 
This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence.
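As a numerical aside, the sensitivity and specificity figures pooled in this meta-analysis come straight from a 2x2 screening table. The counts below are invented for illustration, chosen to land near the pooled PHQ-9-linear estimates (sensitivity ~81%, specificity ~85%):

```python
# Hedged sketch: screening metrics from a 2x2 table of invented counts.
tp, fn = 81, 19     # with MDD: screen positive / screen negative
tn, fp = 85, 15     # without MDD: screen negative / screen positive

sensitivity = tp / (tp + fn)   # drives rule-out (screening) performance
specificity = tn / (tn + fp)   # drives rule-in (case-finding) performance
ppv = tp / (tp + fp)           # positive predictive value; depends on
                               # prevalence (50% in this toy table)
print(sensitivity, specificity, round(ppv, 3))
```

At the ~14% prevalence reported in the pooled sample, the same sensitivity and specificity would yield a much lower PPV, which is why the authors endorse the PHQ for screening but not for confirming a diagnosis.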
ERIC Educational Resources Information Center
Fante, Cheryl H.
This study was conducted in an attempt to identify any predictor or combination of predictors of a beginning typewriting student's success. Variables of intelligence, rhythmic ability, musical background, and tapping ability were combined to study their relationship to typewriting speed and accuracy. A sample of 109 high school students was…
Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K
2015-01-01
Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance analysis (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² = 93%). The Tanita system tended to under-estimate LM, with a mean error of 2.2% but errors extending up to 25.8%. Flexing the arms to 90° extended the lower end of the weight range, and produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weights covered. Smaller samples reduce resource requirements but lead to larger errors at the tails of the weight distribution.
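The calibration approach described above (ordinary least squares with height, impedance index, weight and sex as predictors) can be sketched as follows. The data are simulated stand-ins, not the Nepalese cohort, and all coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 102                                    # cohort size from the abstract
height = rng.normal(125, 8, n)             # cm (illustrative values)
z = rng.normal(700, 60, n)                 # whole-body impedance, ohm
weight = rng.normal(22, 3, n)              # kg
sex = rng.integers(0, 2, n).astype(float)  # 0 = girl, 1 = boy (arbitrary coding)
zi = height**2 / z                         # impedance index, cm^2/ohm

# simulated "true" lean mass; the generating coefficients are invented
lean = 0.5 * zi + 0.3 * weight + 0.05 * height + 0.4 * sex \
       + rng.normal(0, 0.5, n)

# ordinary least squares with the abstract's four predictors
X = np.column_stack([np.ones(n), height, zi, weight, sex])
beta, *_ = np.linalg.lstsq(X, lean, rcond=None)
pred = X @ beta
r2 = 1 - ((lean - pred) ** 2).sum() / ((lean - lean.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")
```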
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
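A toy numerical sketch of element mismatch and of one mitigation method named above (large periodic high-frequency dithering, approximated here by averaging the output over one dither period). The 4-bit DAC and the mismatch magnitudes are invented for illustration:

```python
import numpy as np

bits = 4
levels = 2**bits
ideal = np.arange(levels) / (levels - 1)         # ideal normalized output levels
rng = np.random.default_rng(1)
actual = ideal + rng.normal(0, 0.004, levels)    # element mismatch per level
lsb = 1 / (levels - 1)

def dac(x):
    # nearest-level reconstruction through the mismatched DAC
    idx = np.clip(np.round(x * (levels - 1)), 0, levels - 1).astype(int)
    return actual[idx]

x = np.linspace(0.2, 0.8, 200)                   # stay clear of the rails
plain_err = np.abs(dac(x) - x).max()

# large periodic dither spanning 4 LSB: average the output over one period,
# subtracting the (known) dither value from each sample
d = ((np.arange(16) + 0.5) / 16 - 0.5) * 4 * lsb
dithered = np.mean([dac(np.clip(x + di, 0, 1)) - di for di in d], axis=0)
dith_err = np.abs(dithered - x).max()
print(f"max error: plain {plain_err:.4f}, dithered {dith_err:.4f}")
```

Averaging over the dither period smooths both the quantization staircase and the per-level mismatch, which is the linearization effect the article quantifies.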
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been examined previously, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Whether timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at the individual and the group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
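The scalar property described above is often summarized as a constant coefficient of variation across timed intervals. A small simulated illustration (the intervals, CV, and trial counts are hypothetical, not the mouse data):

```python
import numpy as np

rng = np.random.default_rng(0)
durations = [10.0, 30.0, 60.0]                   # target intervals, s (invented)
# simulated peak-time estimates whose s.d. grows in proportion to the interval
peaks = {d: rng.normal(d, 0.15 * d, 500) for d in durations}
cvs = {d: p.std() / p.mean() for d, p in peaks.items()}
print({d: round(c, 3) for d, c in cvs.items()})  # CV stays near 0.15 throughout
```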
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
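The Arrhenius extrapolation underlying such shelf-life projections can be sketched as follows. The rates, the specification limit, and the assumption of linear (zero-order) degradant growth at ambient temperature are hypothetical:

```python
import numpy as np

# hypothetical degradant growth rates (%/day) at accelerated temperatures
T_C = np.array([50.0, 60.0, 70.0])
k   = np.array([0.010, 0.032, 0.095])
T_K = T_C + 273.15

# ln k = ln A - Ea/(R*T): fit a straight line in 1/T
slope, intercept = np.polyfit(1.0 / T_K, np.log(k), 1)
Ea = -slope * 8.314 / 1000.0                 # activation energy, kJ/mol
k25 = np.exp(intercept + slope / 298.15)     # extrapolated rate at 25 °C

# days to reach a hypothetical 0.5% specification limit, assuming linear growth
shelf_life_days = 0.5 / k25
print(f"Ea ≈ {Ea:.0f} kJ/mol, k(25 °C) ≈ {k25:.5f} %/day, "
      f"shelf life ≈ {shelf_life_days:.0f} days")
```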
Ngo, L; Ho, H; Hunter, P; Quinn, K; Thomson, A; Pearson, G
2016-02-01
Post-mortem measurements (cold weight, grade and external carcass linear dimensions) as well as live animal data (age, breed, sex) were used to predict ovine primal and retail cut weights for 792 lamb carcases. Significant levels of variance could be explained using these predictors. The predictive power of those measurements on primal and retail cut weights was studied by using the results from principal component analysis and the absolute value of the t-statistics of the linear regression model. High prediction accuracy for primal cut weight was achieved (adjusted R(2) up to 0.95), as well as moderate accuracy for key retail cut weight: tenderloins (adj-R(2)=0.60), loin (adj-R(2)=0.62), French rack (adj-R(2)=0.76) and rump (adj-R(2)=0.75). The carcass cold weight had the best predictive power, with the accuracy increasing by around 10% after including the next three most significant variables. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
This paper develops efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, where the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, but where direct methods (e.g. finite difference) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which seamlessly integrate semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics; we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.
On the design of paleoenvironmental data networks for estimating large-scale patterns of climate
NASA Astrophysics Data System (ADS)
Kutzbach, J. E.; Guetter, P. J.
1980-09-01
Guidelines are determined for the spatial density and location of climatic variables (temperature and precipitation) that are appropriate for estimating the continental- to hemispheric-scale pattern of atmospheric circulation (sea-level pressure). Because instrumental records of temperature and precipitation simulate the climatic information that is contained in certain paleoenvironmental records (tree-ring, pollen, and written-documentary records, for example), these guidelines provide useful sampling strategies for reconstructing the pattern of atmospheric circulation from paleoenvironmental records. The statistical analysis uses a multiple linear regression model. The sampling strategies consist of changes in site density (from 0.5 to 2.5 sites per million square kilometers) and site location (from western North American sites only to sites in Japan, North America, and western Europe) of the climatic data. The results showed that the accuracy of specification of the pattern of sea-level pressure: (1) is improved if sites with climatic records are spread as uniformly as possible over the area of interest; (2) increases with increasing site density, at least up to the maximum site density used in this study; (3) is improved if sites cover an area that extends considerably beyond the limits of the area of interest. The accuracy of specification was lower for independent data than for the data that were used to develop the regression model; some skill was found for almost all sampling strategies.
Shahid, Mohammad; Shahzad Cheema, Muhammad; Klenner, Alexander; Younesi, Erfan; Hofmann-Apitius, Martin
2013-03-01
Systems pharmacological modeling of drug mode of action for the next generation of multitarget drugs may open new routes for drug design and discovery. Computational methods are widely used in this context, amongst which support vector machines (SVM) have proven successful in addressing the challenge of classifying drugs with similar features. We have applied an SVM-based approach, namely SVM-based recursive feature elimination (SVM-RFE), to predict the pharmacological properties of drugs widely used against complex neurodegenerative disorders (NDD) and to build an in-silico computational model for the binary classification of NDD drugs versus other drugs. Application of the SVM-RFE model to a set of drugs successfully classified NDD drugs from non-NDD drugs, with an overall accuracy of ~80% under 10-fold cross-validation using the 40 top-ranked molecular descriptors selected from a total of 314. Moreover, the SVM-RFE method outperformed linear discriminant analysis (LDA) based feature selection and classification. The model reduced the multidimensional descriptor space of the drugs dramatically and predicted NDD drugs with high accuracy while avoiding overfitting. Based on these results, NDD-specific focused libraries of drug-like compounds can be designed, and existing NDD-specific drugs can be characterized by a well-characterized set of molecular descriptors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
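The SVM-RFE workflow described above can be sketched with scikit-learn. The data here are synthetic; only the descriptor counts (314 total, 40 retained) and the 10-fold cross-validation are taken from the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: 314 descriptors, of which only a subset is informative.
X, y = make_classification(n_samples=300, n_features=314, n_informative=15,
                           n_redundant=10, random_state=0)

# RFE with a linear-kernel SVM ranks descriptors by |weight| and keeps the top 40
selector = RFE(SVC(kernel="linear"), n_features_to_select=40, step=20)
X_sel = selector.fit_transform(X, y)

acc = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=10).mean()
print(f"10-fold CV accuracy on 40 selected descriptors: {acc:.2f}")
```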
Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce
2009-01-01
We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. 
In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005
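The solver pattern described above, a preconditioned iterative method (GMRES) instead of forming and factoring a dense matrix, can be illustrated on a small sparse test system. The matrix here is a simple tridiagonal stand-in, not a BEM matrix, and the FFTSVD compression is not reproduced:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# small sparse test system A x = b (diagonally dominant tridiagonal)
n = 500
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# incomplete-LU preconditioner wrapped as a LinearOperator for GMRES
ilu = spilu(A)
precond = LinearOperator((n, n), ilu.solve)

x, info = gmres(A, b, M=precond)
res = np.linalg.norm(A @ x - b)
print(f"converged: {info == 0}, residual norm: {res:.2e}")
```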
Hamidi, Dachriyanus; Aulia, Hilyatul; Susanti, Meri
2017-01-01
Garcinia cowa is a medicinal plant widely grown in Southeast Asia and tropical countries. Various parts of this plant have been used in traditional folk medicine. The bark, latex, and root have been used as an antipyretic agent, while the fruit and leaves have been used as an expectorant, for indigestion, and for improvement of blood circulation. This study aims to determine the concentration of rubraxanthone found in ethyl acetate extract of the stem bark of G. cowa by high-performance thin-layer chromatography (HPTLC). The HPTLC method was performed on precoated silica gel G 60 F254 plates using an HPTLC system with a developed mobile-phase system of chloroform:ethyl acetate:methanol:formic acid (86:6:3:5). A volume of 5 μL of standard and sample solutions was applied to the chromatographic plates. The plates were developed in saturated mode in a twin-trough chamber at room temperature. The method was validated for linearity, accuracy, precision, limit of detection (LOD), limit of quantification (LOQ), and specificity. The spots were observed at ultraviolet 243 nm. The linearity of rubraxanthone was obtained between 52.5 and 157.5 ppm/spot. The LOD and LOQ were found to be 4.03 and 13.42 ppm/spot, respectively. The proposed method showed good linearity, precision, accuracy, and high sensitivity. Therefore, it may be applied for the quantification of rubraxanthone in ethyl acetate extract of the stem bark of G. cowa. The HPTLC method provides rapid qualitative and quantitative estimation of rubraxanthone as a marker compound in G. cowa extract used for commercial products. Rubraxanthone found in ethyl acetate extracts of G. cowa was successfully quantified using the HPTLC method. Abbreviations used: TLC: Thin-layer chromatography; HPTLC: High-performance thin-layer chromatography; LOD: Limit of detection; LOQ: Limit of quantification; ICH: International Conference on Harmonization.
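LOD and LOQ figures like those reported above are conventionally computed from the calibration line as 3.3σ/S and 10σ/S (ICH convention, where σ is the residual standard deviation and S the slope). A sketch with hypothetical calibration data: the concentrations mimic the reported linear range, but the peak areas are invented, so the resulting LOD/LOQ will not match the study's values:

```python
import numpy as np

# hypothetical calibration points (concentration in ppm vs. peak area)
conc = np.array([52.5, 78.75, 105.0, 131.25, 157.5])
area = np.array([1042.0, 1575.0, 2089.0, 2618.0, 3140.0])

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)          # residual s.d. of the regression

lod = 3.3 * sigma / slope          # ICH limit of detection
loq = 10.0 * sigma / slope         # ICH limit of quantification
print(f"LOD ≈ {lod:.2f} ppm, LOQ ≈ {loq:.2f} ppm")
```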
Limb Dominance Results from Asymmetries in Predictive and Impedance Control Mechanisms
Yadav, Vivek; Sainburg, Robert L.
2014-01-01
Handedness is a pronounced feature of human motor behavior, yet the underlying neural mechanisms remain unclear. We hypothesize that motor lateralization results from asymmetries in predictive control of task dynamics and in control of limb impedance. To test this hypothesis, we present an experiment with two different force field environments, a field with a predictable magnitude that varies with the square of velocity, and a field with a less predictable magnitude that varies linearly with velocity. These fields were designed to be compatible with controllers that are specialized in predicting limb and task dynamics, and modulating position and velocity dependent impedance, respectively. Because the velocity square field does not change the form of the equations of motion for the reaching arm, we reasoned that a forward dynamic-type controller should perform well in this field, while control of linear damping and stiffness terms should be less effective. In contrast, the unpredictable linear field should be most compatible with impedance control, but incompatible with predictive dynamics control. We measured steady state final position accuracy and 3 trajectory features during exposure to these fields: Mean squared jerk, Straightness, and Movement time. Our results confirmed that each arm made straighter, smoother, and quicker movements in its compatible field. Both arms showed similar final position accuracies, which were achieved using more extensive corrective sub-movements when either arm performed in its incompatible field. Finally, each arm showed limited adaptation to its incompatible field. Analysis of the dependence of trajectory errors on field magnitude suggested that dominant arm adaptation occurred by prediction of the mean field, thus exploiting predictive mechanisms for adaptation to the unpredictable field. 
Overall, our results support the hypothesis that motor lateralization reflects asymmetries in specific motor control mechanisms associated with predictive control of limb and task dynamics, and modulation of limb impedance. PMID:24695543
Gerona, Roy; Wen, Anita; Chin, Aaron T.; Koss, Catherine A.; Bacchetti, Peter; Metcalfe, John; Gandhi, Monica
2016-01-01
Background Tuberculosis (TB) is the leading cause of death from an infectious pathogen worldwide and the most prevalent opportunistic infection in people living with HIV. Isoniazid preventive therapy (IPT) reduces the incidence of active TB and reduces morbidity and mortality in HIV-infected patients independently of antiretroviral therapy. However, treatment of latent or active TB is lengthy, and inter-patient variability in pharmacokinetics and adherence is common. Current methods of assessing adherence to TB treatment using drug levels in plasma or urine assess short-term exposure and pose logistical challenges. Drug concentrations in hair assess long-term exposure and have demonstrated pharmacodynamic relevance in HIV. Methods A large hair sample from a patient with active TB was obtained for assay development. Methods to pulverize hair and extract isoniazid were optimized, and the drug was then detected by liquid chromatography/tandem mass spectrometry (LC-MS/MS). The method was validated for specificity, accuracy, precision, recovery, linearity and stability to establish the assay’s suitability for therapeutic drug monitoring (TDM). Hair samples from patients on directly observed isoniazid-based latent or active TB therapy from the San Francisco Department of Public Health TB clinic were then tested. Results Our LC-MS/MS-based assay detected isoniazid in quantities as low as 0.02 ng/mg using 10–25 strands of hair. Concentrations in spiked samples demonstrated linearity from 0.05–50 ng/mg. Assay precision and accuracy for spiked quality-control samples were high, with an overall recovery rate of 79.5%. In 18 patients with latent or active TB on treatment, isoniazid was detected across a wide linear dynamic range. Conclusions An LC-MS/MS-based assay to quantify isoniazid levels in hair with performance characteristics suitable for TDM was developed and validated. 
Hair concentrations of isoniazid assess long-term exposure and may be useful for monitoring adherence to latent or active TB treatment in the setting of HIV. PMID:27191185
NASA Technical Reports Server (NTRS)
Herskovits, E. H.; Itoh, R.; Melhem, E. R.
2001-01-01
OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.
Raji, Cyrus A; Willeumier, Kristen; Taylor, Derek; Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A; Amen, Daniel G
2015-09-01
PTSD and TBI are two common conditions in veteran populations that can be difficult to distinguish clinically. The default mode network (DMN) is abnormal in a multitude of neurological and psychiatric disorders. We hypothesize that brain perfusion SPECT can be applied to diagnostically separate PTSD from TBI reliably in a veteran cohort using DMN regions. A group of 196 veterans (36 with PTSD, 115 with TBI, 45 with PTSD/TBI) were selected from a large multi-site population cohort of individuals with psychiatric disease. Inclusion criteria were peacetime or wartime veterans regardless of branch of service and included those for whom the traumatic brain injury was not service related. SPECT imaging was performed on this group both at rest and during a concentration task. These measures, as well as the baseline-concentration difference, were then entered from DMN regions into separate binary logistic regression models controlling for age, gender, race, clinic site, co-morbid psychiatric diseases, TBI severity, whether or not the TBI was service related, and branch of armed service. Predicted probabilities were then entered into a receiver operating characteristic analysis to compute sensitivity, specificity, and accuracy. Compared to PTSD, persons with TBI were older, male, and had higher rates of bipolar and major depressive disorder (p < 0.05). Baseline quantitative regions with SPECT separated PTSD from TBI in the veterans with 92 % sensitivity, 85 % specificity, and 94 % accuracy. With concentration scans, there was 85 % sensitivity, 83 % specificity and 89 % accuracy. Baseline-concentration (the difference metric between the two scans) scans were 85 % sensitivity, 80 % specificity, and 87 % accuracy. In separating TBI from PTSD/TBI visual readings of baseline scans had 85 % sensitivity, 81 % specificity, and 83 % accuracy. Concentration scans had 80 % sensitivity, 65 % specificity, and 79 % accuracy. 
Baseline-concentration scans had 82 % sensitivity, 69 % specificity, and 81 % accuracy. For separating PTSD from PTSD/TBI baseline scans had 87 % sensitivity, 83 % specificity, and 92 % accuracy. Concentration scans had 91 % sensitivity, 76 % specificity, and 88 % accuracy. Baseline-concentration scans had 84 % sensitivity, 64 % specificity, and 85 % accuracy. This study demonstrates the ability to separate PTSD and TBI from each other in a veteran population using functional neuroimaging.
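The analysis pipeline described above (binary logistic regression on regional values, then ROC analysis for sensitivity, specificity and accuracy) can be sketched as follows. The data are synthetic and the evaluation is in-sample, purely for illustration, not a reproduction of the study:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# synthetic stand-in for regional perfusion values with a binary diagnosis
X, y = make_classification(n_samples=196, n_features=10, n_informative=6,
                           random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]           # predicted probabilities

fpr, tpr, thr = roc_curve(y, prob)
auc = roc_auc_score(y, prob)
# choose the operating point maximizing Youden's J = sensitivity + specificity - 1
best = (tpr - fpr).argmax()
print(f"AUC = {auc:.2f}, sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}")
```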
Accuracy Study of a 2-Component Point Doppler Velocimeter (PDV)
NASA Technical Reports Server (NTRS)
Kuhlman, John; Naylor, Steve; James, Kelly; Ramanath, Senthil
1997-01-01
A two-component Point Doppler Velocimeter (PDV) which has recently been developed is described, and a series of velocity measurements which have been obtained to quantify the accuracy of the PDV system are summarized. This PDV system uses molecular iodine vapor cells as frequency discriminating filters to determine the Doppler shift of laser light which is scattered off of seed particles in a flow. The majority of results which have been obtained to date are for the mean velocity of a rotating wheel, although preliminary data are described for fully-developed turbulent pipe flow. Accuracy of the present wheel velocity data is approximately +/- 1 % of full scale, while linearity of a single channel is on the order of +/- 0.5 % (i.e., +/- 0.6 m/sec and +/- 0.3 m/sec, out of 57 m/sec, respectively). The observed linearity of these results is on the order of the accuracy to which the speed of the rotating wheel has been set for individual data readings. The absolute accuracy of the rotating wheel data is shown to be consistent with the level of repeatability of the cell calibrations. The preliminary turbulent pipe flow data show consistent turbulence intensity values, and mean axial velocity profiles generally agree with pitot probe data. However, there is at present an offset error in the radial velocity which is on the order of 5-10 % of the mean axial velocity.
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
NASA Astrophysics Data System (ADS)
Yang, Jian; He, Yuhong
2017-02-01
Quantifying impervious surfaces in urban and suburban areas is a key step toward a sustainable urban planning and management strategy. With the availability of fine-scale remote sensing imagery, automated mapping of impervious surfaces has attracted growing attention. However, the vast majority of existing studies have selected pixel-based and object-based methods for impervious surface mapping, with few adopting sub-pixel analysis of high spatial resolution imagery. This research makes use of a vegetation-bright impervious-dark impervious linear spectral mixture model to characterize urban and suburban surface components. A WorldView-3 image acquired on May 9th, 2015 is analyzed for its potential in automated unmixing of meaningful surface materials for two urban subsets and one suburban subset in Toronto, ON, Canada. Given the wide distribution of shadows in urban areas, the linear spectral unmixing is implemented in non-shadowed and shadowed areas separately for the two urban subsets. The results indicate that the accuracy of impervious surface mapping in suburban areas reaches up to 86.99%, much higher than the accuracies in urban areas (80.03% and 79.67%). Despite its merits in mapping accuracy and automation, the application of our proposed vegetation-bright impervious-dark impervious model to map impervious surfaces is limited by the absence of a soil component. To further extend the operational transferability of our proposed method, especially in areas where plenty of bare soil exists during urbanization or reclamation, it is still necessary to mask out bare soil by automated classification prior to the implementation of linear spectral unmixing.
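The linear spectral mixture model described above treats each pixel as a non-negative, sum-to-one combination of endmember spectra. A minimal fully constrained unmixing sketch; the three endmember spectra, band count, and fractions are invented:

```python
import numpy as np
from scipy.optimize import nnls

# invented 8-band endmember reflectances (bands x endmembers)
E = np.array([
    [0.05, 0.04, 0.45, 0.50, 0.48, 0.30, 0.25, 0.20],  # vegetation
    [0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.62, 0.65],  # bright impervious
    [0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12],  # dark impervious
]).T

true_f = np.array([0.2, 0.5, 0.3])           # true fractions (sum to 1)
pixel = E @ true_f + 0.001 * np.random.default_rng(0).normal(size=8)

# fully constrained unmixing: non-negativity via NNLS, sum-to-one via a
# heavily weighted augmentation row
w = 100.0
A = np.vstack([E, w * np.ones((1, 3))])
b = np.concatenate([pixel, [w]])
f, _ = nnls(A, b)
print("estimated fractions:", np.round(f, 3))
```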
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, for uncorrelated observations, the law of variance propagation (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified only for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance with a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even for non-linear functions, provided that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
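The comparison at the heart of the study can be sketched for a simple non-linear function, the distance computed from two measured coordinate differences. The numbers below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

dx, dy = 300.0, 400.0          # measured coordinate differences [m]
sigma = 0.010                  # standard uncertainty of each difference [m]

# First-order (Gaussian) propagation for s = sqrt(dx^2 + dy^2):
s = np.hypot(dx, dy)
J = np.array([dx / s, dy / s])                 # Jacobian of s w.r.t. (dx, dy)
sigma_s_linear = np.sqrt(J @ (sigma**2 * np.eye(2)) @ J)

# Monte Carlo propagation: simulate noisy observations, evaluate s directly.
n = 200_000
samples = np.hypot(dx + sigma * rng.standard_normal(n),
                   dy + sigma * rng.standard_normal(n))
sigma_s_mc = samples.std(ddof=1)
```

For this mildly non-linear function the two uncertainties agree closely, which is exactly the behavior the paper verifies for basic geodetic computations.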
Kulikov, A U; Zinchenko, A A
2007-02-19
This paper describes the validation of an isocratic HPLC method for the assay of dexpanthenol in aerosol and gel. The method employs a Vydac Proteins C4 column with a mobile phase of an aqueous solution of trifluoroacetic acid and UV detection at 206 nm. A linear response (r>0.9999) was observed in the range of 13.0-130 microg mL(-1). The method shows good recoveries, and intra- and inter-day relative standard deviations were less than 1.0%. Validation parameters such as specificity, accuracy and robustness were also determined. The method can be used for the dexpanthenol assay of panthenol aerosol and gel, as it separates dexpanthenol from aerosol or gel excipients.
de Macedo, A. N.; Vicente, G. H. L.; Nogueira, A. R. A.
2010-01-01
A method for the determination of pesticide residues in water and sediment was developed using the QuEChERS method followed by gas chromatography – mass spectrometry. The method was validated in terms of accuracy, specificity, linearity, detection and quantification limits. The recovery percentages obtained for the pesticides in water at different concentrations ranged from 63 to 116%, with relative standard deviations below 12%. The corresponding results from the sediment ranged from 48 to 115% with relative standard deviations below 16%. The limits of detection for the pesticides in water and sediment were below 0.003 mg L−1 and 0.02 mg kg−1, respectively. PMID:21165598
Heddam, Salim
2014-11-01
The prediction of colored dissolved organic matter (CDOM) using artificial neural network approaches has received little attention in the past few decades. In this study, CDOM was modeled using generalized regression neural network (GRNN) and multiple linear regression (MLR) models as a function of water temperature (TE), pH, specific conductance (SC), and turbidity (TU). Evaluation of the prediction accuracy of the models is based on the root mean square error (RMSE), mean absolute error (MAE), coefficient of correlation (CC), and Willmott's index of agreement (d). The results indicated that GRNN can be applied successfully for the prediction of CDOM.
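A GRNN is, in essence, a Gaussian-kernel-weighted average of the training targets, so a minimal sketch against an MLR baseline is straightforward. The data, bandwidth and single predictor below are hypothetical stand-ins for the four water-quality inputs used in the study.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.05):
    # Generalized regression neural network: a Gaussian-kernel-weighted
    # average of training targets (the Nadaraya-Watson estimator).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0])          # nonlinear target, noise-free

# Multiple linear regression baseline via ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

Xq = np.array([[0.25], [0.75]])
y_grnn = grnn_predict(X, y, Xq, sigma=0.05)
y_mlr = np.column_stack([np.ones(2), Xq]) @ beta
```

On a strongly nonlinear target the kernel average tracks the signal while the linear fit cannot, which mirrors the GRNN-over-MLR result reported in the abstract.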
Stern, K I; Malkova, T L
The objective of the present study was the development and validation of a method for the determination of the demethylated derivatives of sibutramine, desmethyl sibutramine and didesmethyl sibutramine, in dietary supplements. Gas-liquid chromatography with flame ionization detection was used for the quantitative determination of the above substances. Conditions for the chromatographic determination of the analytes in the presence of the reference standard, methyl stearate, were proposed that achieve efficient separation. The method has the necessary sensitivity, specificity, linearity, accuracy, and precision (on an intra-day and inter-day basis), which attests to its good validation characteristics. The proposed method can be employed in analytical laboratories for the quantitative determination of sibutramine derivatives in biologically active dietary supplements.
Speaker normalization and adaptation using second-order connectionist networks.
Watrous, R L
1993-01-01
A method for speaker normalization and adaptation using connectionist networks is developed. A speaker-specific linear transformation of observations of the speech signal is computed using second-order network units. Classification is accomplished by a multilayer feedforward network that operates on the normalized speech data. The network is adapted for a new talker by modifying the transformation parameters while leaving the classifier fixed. This is accomplished by backpropagating classification error through the classifier to the second-order transformation units. This method was evaluated for the classification of ten vowels for 76 speakers using the first two formant values of the Peterson-Barney data. The results suggest that rapid speaker adaptation resulting in high classification accuracy can be accomplished by this method.
Benchmark solution of the dynamic response of a spherical shell at finite strain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Versino, Daniele; Brock, Jerry S.
2016-09-28
Our paper describes the development of high fidelity solutions for the study of homogeneous (elastic and inelastic) spherical shells subject to dynamic loading and undergoing finite deformations. The goal of the activity is to provide high accuracy results that can be used as benchmark solutions for the verification of computational physics codes. The equilibrium equations for the geometrically non-linear problem are solved through mode expansion of the displacement field, and the boundary conditions are enforced in a strong form. Time integration is performed through high-order implicit Runge–Kutta schemes. Finally, we evaluate the accuracy and convergence of the proposed method by means of numerical examples with finite deformations and material non-linearities and inelasticity.
LBP and SIFT based facial expression recognition
NASA Astrophysics Data System (ADS)
Sumer, Omer; Gunes, Ece O.
2015-02-01
This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem in which seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a sound partitioning strategy are followed.
Comparison of Classification Methods for P300 Brain-Computer Interface on Disabled Subjects
Manyakov, Nikolay V.; Chumerin, Nikolay; Combaz, Adrien; Van Hulle, Marc M.
2011-01-01
We report on tests with a mind typing paradigm based on a P300 brain-computer interface (BCI) on a group of amyotrophic lateral sclerosis (ALS), middle cerebral artery (MCA) stroke, and subarachnoid hemorrhage (SAH) patients, suffering from motor and speech disabilities. We investigate the achieved typing accuracy given the individual patient's disorder, and how it correlates with the type of classifier used. We considered 7 types of classifiers, linear as well as nonlinear ones, and found that, overall, one type of linear classifier yielded a higher classification accuracy. In addition to the selection of the classifier, we also suggest and discuss a number of recommendations to be considered when building a P300-based typing system for disabled subjects. PMID:21941530
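One common linear classifier in P300 work is Fisher's linear discriminant. The sketch below shows a minimal two-class LDA with a pooled covariance on synthetic two-dimensional features; the feature values are invented, and the paper's actual classifiers and EEG features are not reproduced.

```python
import numpy as np

def lda_fit(X, y):
    # Two-class linear discriminant analysis with a pooled covariance:
    # w = S^-1 (mu1 - mu0), threshold at the midpoint of the projected means.
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    X0c, X1c = X[y == 0] - mu0, X[y == 1] - mu1
    S = (X0c.T @ X0c + X1c.T @ X1c) / (len(X) - 2)
    w = np.linalg.solve(S, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2.0
    return w, c

def lda_predict(X, w, c):
    return (X @ w > c).astype(int)

rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 1.0, (200, 2))    # e.g. non-target epoch features
X1 = rng.normal([3, 3], 1.0, (200, 2))    # e.g. P300 target epoch features
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 200)

w, c = lda_fit(X, y)
acc = (lda_predict(X, w, c) == y).mean()
```

With Gaussian classes of equal covariance, as here, this linear rule is in fact optimal, which is one reason simple linear classifiers do well in P300 typing.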
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with high spatial resolution and wide coverage sets the benchmark for providing accurate geographical coordinates for the retrieval of land surface temperature. Using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth makes it possible to obtain thermal-infrared images of large swath width with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model and calibrates the temporal system parameters and whiskbroom angle parameters. With the help of the YG-14 (China's first satellite equipped with thermal-infrared cameras of high spatial resolution), China's Anyang Imaging and Taiyuan Imaging are used to conduct a geometric calibration experiment and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
Modified linear predictive coding approach for moving target tracking by Doppler radar
NASA Astrophysics Data System (ADS)
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on a time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length adaptively. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
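The "extend the available data with a linear predictive model" step can be sketched as fitting autoregressive coefficients by least squares and then iterating the predictor. The noise-free sinusoid below is an idealized stand-in for a radar echo (an AR(2) process reproduces it exactly); the paper's error-array correction is not reproduced.

```python
import numpy as np

def lpc_fit(x, order):
    # Least-squares fit of an autoregressive (linear predictive) model:
    # x[n] ~= a1*x[n-1] + ... + ap*x[n-p].
    rows = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    targets = x[order:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return a

def lpc_extend(x, a, n_extra):
    # Extend the data record by iterating the fitted one-step predictor.
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-1:-len(a) - 1:-1]))
    return np.array(out)

# Noise-free sinusoid: exactly predictable by an AR(2) model with
# coefficients [2*cos(w), -1].
n = np.arange(64)
x = np.sin(2 * np.pi * 0.05 * n)
a = lpc_fit(x, order=2)
ext = lpc_extend(x, a, n_extra=16)
```

Extending the record this way effectively lengthens the observation window, which is what improves the localization resolution in the Doppler processing.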
NASA Astrophysics Data System (ADS)
Sisay, Z. G.; Besha, T.; Gessesse, B.
2017-05-01
This study used in-situ GPS data to validate the accuracy of the horizontal coordinates and the orientation of linear features of an orthophoto and line map for Bahir Dar city. GPS data were processed using GAMIT/GLOBK and Leica Geo Office (LGO) in a least-squares sense, with ties to local and regional GPS reference stations, to predict horizontal coordinates at five checkpoints. A Real-Time Kinematic GPS measurement technique was used to collect the coordinates of a road centerline to test the accuracy of the orientation of the photogrammetric line map. The accuracy of the orthophoto was evaluated by comparison with in-situ GPS coordinates. With GPS coordinates from GAMIT/GLOBK, the agreement is good, with a root mean square error (RMSE) of 12.45 cm in the x-coordinates and 13.97 cm in the y-coordinates, and 6.06 cm at the 95% confidence level. With GPS data processed by LGO and tied to the local GPS network, the horizontal coordinates of the orthophoto agree with the in-situ GPS coordinates at an accuracy of 16.71 cm and 18.98 cm in the x and y directions respectively, and 11.07 cm at the 95% confidence level. Similarly, the accuracy of the linear features fits the in-situ GPS measurements well: the GPS coordinates of the road centerline deviate from the corresponding coordinates of the line map by a mean value of 9.18 cm in the x-direction and -14.96 cm in the y-direction. Therefore, it can be concluded that the accuracy of the orthophoto and line map is within the national standard of the error budget (25 cm).
Peraman, Ramalingam; Mallikarjuna, Sasikala; Ammineni, Pravalika; Kondreddy, Vinod kumar
2014-10-01
A simple, selective, rapid, precise and economical reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for the simultaneous estimation of atorvastatin calcium (ATV) and pioglitazone hydrochloride (PIO) in a pharmaceutical formulation. The method is carried out on a C8 (25 cm × 4.6 mm i.d., 5 μm) column with a mobile phase consisting of acetonitrile (ACN):water (pH adjusted to 6.2 using o-phosphoric acid) in the ratio of 45:55 (v/v). The retention times of ATV and PIO are 4.1 and 8.1 min, respectively, at a flow rate of 1 mL/min with diode array detection at 232 nm. The linear regression analysis of the linearity plot showed a good linear relationship, with correlation coefficient (R(2)) values for ATV and PIO of 0.9998 and 0.9997 in the concentration range of 10-80 µg mL(-1), respectively. The relative standard deviation for intraday precision was found to be <2.0%. The method is validated according to the ICH guidelines in terms of specificity, selectivity, accuracy, precision, linearity, limit of detection, limit of quantitation and solution stability. The proposed method can be used for the simultaneous estimation of these drugs in marketed dosage forms.
Predicting birth weight with conditionally linear transformation models.
Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten
2016-12-01
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs.
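The CLTM machinery of the paper is not reproduced here, but the idea of upgrading point predictions to prediction intervals can be sketched with a much simpler construction: ordinary least squares for the point prediction, plus empirical residual quantiles for the interval. The synthetic data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 2000)                 # e.g. an ultrasound parameter
y = 2.0 * x + rng.normal(0, 0.2, 2000)      # synthetic outcome proxy

# Point predictions from ordinary least squares (the conditional mean).
A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

# Turn point predictions into a 90% prediction interval by shifting them
# by empirical residual quantiles (a split-conformal-style construction,
# much cruder than the CLTM approach in the paper).
lo_q, hi_q = np.quantile(resid, [0.05, 0.95])
coverage = np.mean((y >= A @ beta + lo_q) & (y <= A @ beta + hi_q))
```

Unlike CLTMs, this construction produces intervals of constant width, so it cannot reflect heteroscedasticity or skewness; that limitation is precisely the paper's motivation for modeling the whole conditional distribution.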
Sexual dimorphism of the mandible in a contemporary Chinese Han population.
Dong, Hongmei; Deng, Mohong; Wang, WenPeng; Zhang, Ji; Mu, Jiao; Zhu, Guanghui
2015-10-01
A present limitation of forensic anthropology practice in China is the lack of population-specific criteria for contemporary human skeletons. In this study, a sample of 203 maxillofacial cone beam computed tomography (CBCT) images, comprising 96 male and 107 female cases (20-65 years old), was analyzed to explore sexual dimorphism of the mandible in a population of contemporary adult Han Chinese and to investigate the potential use of the mandible as a sex indicator. A three-dimensional image was reconstructed from the mandible CBCT scans using the SimPlant Pro 11.40 software. Nine linear and two angular parameters were measured. Discriminant function analysis (DFA) and logistic regression analysis (LRA) were used to develop mathematical models for sex determination. All of the linear measurements studied and one angular measurement were found to be sexually dimorphic, with the maximum mandibular length and bi-condylar breadth being the most dimorphic by univariate DFA and LRA respectively. The cross-validated sex allocation accuracies of the multivariate models were 84.2% (direct DFA), 83.5% (direct LRA), 83.3% (stepwise DFA) and 80.5% (stepwise LRA). In general, multivariate DFA yielded higher accuracy and LRA a lower sex bias, and therefore both DFA and LRA have their own advantages for sex determination by the mandible in this sample. These results suggest that the mandible expresses sexual dimorphism in the contemporary adult Han Chinese population, indicating an excellent sexual discriminatory ability. Cone beam computed tomography scanning can be used as an alternative source for contemporary osteometric techniques.
Bourjaily, Mark A.
2012-01-01
Animals must often make opposing responses to similar complex stimuli. Multiple sensory inputs from such stimuli combine to produce stimulus-specific patterns of neural activity. It is the differences between these activity patterns, even when small, that provide the basis for any differences in behavioral response. In the present study, we investigate three tasks with differing degrees of overlap in the inputs, each with just two response possibilities. We simulate behavioral output via winner-takes-all activity in one of two pools of neurons forming a biologically based decision-making layer. The decision-making layer receives inputs either in a direct stimulus-dependent manner or via an intervening recurrent network of neurons that form the associative layer, whose activity helps distinguish the stimuli of each task. We show that synaptic facilitation of synapses to the decision-making layer improves performance in these tasks, robustly increasing accuracy and speed of responses across multiple configurations of network inputs. Conversely, we find that synaptic depression worsens performance. In a linearly nonseparable task with exclusive-or logic, the benefit of synaptic facilitation lies in its superlinear transmission: effective synaptic strength increases with presynaptic firing rate, which enhances the already present superlinearity of presynaptic firing rate as a function of stimulus-dependent input. In linearly separable single-stimulus discrimination tasks, we find that facilitating synapses are always beneficial because synaptic facilitation always enhances any differences between inputs. Thus we predict that for optimal decision-making accuracy and speed, synapses from sensory or associative areas to decision-making or premotor areas should be facilitating. PMID:22457467
Reference measurement procedure for total glycerides by isotope dilution GC-MS.
Edwards, Selvin H; Stribling, Shelton L; Pyatt, Susan D; Kimberly, Mary M
2012-04-01
The CDC's Lipid Standardization Program established the chromotropic acid (CA) reference measurement procedure (RMP) as the accuracy base for standardization and metrological traceability for triglyceride testing. The CA RMP has several disadvantages, including lack of ruggedness. It uses obsolete instrumentation and hazardous reagents. To overcome these problems the CDC developed an isotope dilution GC-MS (ID-GC-MS) RMP for total glycerides in serum. We diluted serum samples with Tris-HCl buffer solution and spiked 200-μL aliquots with [(13)C(3)]-glycerol. These samples were incubated and hydrolyzed under basic conditions. The samples were dried, derivatized with acetic anhydride and pyridine, extracted with ethyl acetate, and analyzed by ID-GC-MS. Linearity, imprecision, and accuracy were evaluated by analyzing calibrator solutions, 10 serum pools, and a standard reference material (SRM 1951b). The calibration response was linear for the range of calibrator concentrations examined (0-1.24 mmol/L) with a slope and intercept of 0.717 (95% CI, 0.7123-0.7225) and 0.3122 (95% CI, 0.3096-0.3140), respectively. The limit of detection was 14.8 μmol/L. The mean %CV for the sample set (serum pools and SRM) was 1.2%. The mean %bias from NIST isotope dilution MS values for SRM 1951b was 0.7%. This ID-GC-MS RMP has the specificity and ruggedness to accurately quantify total glycerides in the serum pools used in the CDC's Lipid Standardization Program and demonstrates sufficiently acceptable agreement with the NIST primary RMP for total glyceride measurement.
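The calibration step of such an RMP can be sketched as fitting the calibration line and inverting it to quantify an unknown. The calibrator responses below are synthesized from the reported slope (0.717) and intercept (0.3122), so only the arithmetic, not the laboratory data, is illustrated.

```python
import numpy as np

# Hypothetical calibrator concentrations [mmol/L] spanning the reported
# range (0-1.24 mmol/L), with responses generated from the reported
# calibration slope and intercept (noise-free for illustration).
conc = np.array([0.0, 0.25, 0.50, 0.75, 1.00, 1.24])
resp = 0.717 * conc + 0.3122

# Fit the calibration line by least squares.
slope, intercept = np.polyfit(conc, resp, 1)

# Invert the calibration line to quantify an unknown serum pool.
unknown_resp = 0.95
unknown_conc = (unknown_resp - intercept) / slope
```

In practice the isotope-dilution response ratios carry measurement noise, and the confidence intervals quoted in the abstract come from replicate calibration runs.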
Suh, Joon Hyuk; Han, Sang Beom; Wang, Yu
2018-02-02
Despite their importance in pivotal signaling pathways, the analysis of plant hormones is a challenge because of their trace quantities and complex matrices. Here, to address this issue, we present an electromembrane extraction technology combined with liquid chromatography-tandem mass spectrometry for the determination of acidic plant hormones, including jasmonic acid, abscisic acid, salicylic acid, benzoic acid, gibberellic acid and gibberellin A4, in plant tissues. Factors influencing extraction efficiency, such as voltage, extraction time and stirring rate, were optimized using a design of experiments. Analytical performance was evaluated in terms of specificity, linearity, limit of quantification, precision, accuracy, recovery and repeatability. The results showed good linearity (r2 > 0.995), precision and acceptable accuracy. The limit of quantification ranged from 0.1 to 10 ng mL-1, and the recoveries were 34.6-50.3%. The developed method was applied to citrus leaf samples, showing better clean-up efficiency as well as higher sensitivity compared to a previous method using liquid-liquid extraction. Organic solvent consumption was minimized during the process, making it an appealing method. Notably, electromembrane extraction has scarcely been applied to plant tissues, and this is the first time that major plant hormones have been extracted using this technology, with high sensitivity and selectivity. Taken together, this work provides not only a novel sample preparation platform using an electric field for plant hormones, but also a good example of extracting complex plant tissues in a simple and effective way.
Dhole, Seema M.; Khedekar, Pramod B.; Amnerkar, Nikhil D.
2012-01-01
Background: Repaglinide is a meglitinide class antidiabetic drug used for the treatment of type 2 diabetes mellitus. A fast and reliable method for the determination of repaglinide was highly desirable to support formulation screening and quality control. Objective: UV spectrophotometric and reversed-phase high performance liquid chromatography (RP-HPLC) methods were developed for the determination of repaglinide in the tablet dosage form. Materials and Methods: The UV spectrum was recorded between 200 and 400 nm using methanol as solvent, and the wavelength 241 nm was selected for the determination of repaglinide. RP-HPLC analysis was carried out using an Agilent TC-C18 (2) column and a mobile phase composed of methanol and water (80:20 v/v, pH adjusted to 3.5 with orthophosphoric acid) at a flow rate of 1.0 ml/min. Parameters such as linearity, precision, accuracy, recovery, specificity and ruggedness were studied as reported in the International Conference on Harmonization (ICH) guidelines. Results: The developed methods illustrated excellent linearity (r2 > 0.999) in the concentration ranges of 5-30 μg/ml and 5-50 μg/ml for the UV spectrophotometric and HPLC methods, respectively. Precision (%R.S.D. < 1.50) and mean recoveries in the ranges of 99.63-100.45% for the UV spectrophotometric method and 99.71-100.25% for the HPLC method demonstrate the accuracy of the methods. Conclusion: The developed methods were found to be reliable, simple, fast and accurate, and were successfully used for the quality control of repaglinide as a bulk drug and in pharmaceutical formulations. PMID:23781481
Battistella, Giovanni; Fuertinger, Stefan; Fleysher, Lazar; Ozelius, Laurie J.; Simonyan, Kristina
2017-01-01
Background Spasmodic dysphonia (SD), or laryngeal dystonia, is a task-specific isolated focal dystonia of unknown causes and pathophysiology. Although functional and structural abnormalities have been described in this disorder, the influence of its different clinical phenotypes and genotypes remains scant, making it difficult to explain SD pathophysiology and to identify potential biomarkers. Methods We used a combination of independent component analysis and linear discriminant analysis of resting-state functional MRI data to investigate brain organization in different SD phenotypes (abductor vs. adductor type) and putative genotypes (familial vs. sporadic cases) and to characterize neural markers for genotype/phenotype categorization. Results We found abnormal functional connectivity within sensorimotor and frontoparietal networks in SD patients compared to healthy individuals as well as phenotype- and genotype-distinct alterations of these networks, involving primary somatosensory, premotor and parietal cortices. The linear discriminant analysis achieved 71% accuracy classifying SD and healthy individuals using connectivity measures in the left inferior parietal and sensorimotor cortex. When categorizing between different forms of SD, the combination of measures from left inferior parietal, premotor and right sensorimotor cortices achieved 81% discriminatory power between familial and sporadic SD cases, whereas the combination of measures from the right superior parietal, primary somatosensory and premotor cortices led to 71% accuracy in the classification of adductor and abductor SD forms. Conclusions Our findings present the first effort to identify and categorize isolated focal dystonia based on its brain functional connectivity profile, which may have a potential impact on the future development of biomarkers for this rare disorder. PMID:27346568
Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H
2016-02-15
This work presents the development, validation and application of four simple and direct spectrophotometric methods for the determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide, forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform, resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. The stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, and detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method, and no significant differences were observed between the proposed methods and the reference method.
Accuracy of patient-specific guided glenoid baseplate positioning for reverse shoulder arthroplasty.
Levy, Jonathan C; Everding, Nathan G; Frankle, Mark A; Keppler, Louis J
2014-10-01
The accuracy of reproducing a surgical plan during shoulder arthroplasty is improved by computer assistance. Intraoperative navigation, however, is challenged by increased surgical time and additional technically difficult steps. Patient-matched instrumentation has the potential to reproduce a similar degree of accuracy without the need for additional surgical steps. The purpose of this study was to examine the accuracy of patient-specific planning and a patient-specific drill guide for glenoid baseplate placement in reverse shoulder arthroplasty. A patient-specific glenoid baseplate drill guide for reverse shoulder arthroplasty was produced for 14 cadaveric shoulders based on a plan developed by a virtual preoperative 3-dimensional planning system using thin-cut computed tomography images. Using this patient-specific guide, high-volume shoulder surgeons exposed the glenoid through a deltopectoral approach and drilled the bicortical pathway defined by the guide. The trajectory of the drill path was compared with the virtual preoperative planned position using similar thin-cut computed tomography images to define accuracy. The drill pathway defined by the patient-matched guide was found to be highly accurate when compared with the preoperative surgical plan. The translational accuracy was 1.2 ± 0.7 mm. The accuracy of inferior tilt was 1.2° ± 1.2°. The accuracy of glenoid version was 2.6° ± 1.7°. The use of patient-specific glenoid baseplate guides is highly accurate in reproducing a virtual 3-dimensional preoperative plan. This technique delivers the accuracy observed using computerized navigation without any additional surgical steps or technical challenges. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one-bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and an area under the receiver operating characteristic curve of 87%. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and an area under the receiver operating characteristic curve of 98%. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and were built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
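A toy sketch of the "time series trend analysis" idea: reduce each vital-sign series to summary features (here, mean and least-squares slope) and train a support vector machine on them. The synthetic series, the two-feature reduction, and the scikit-learn usage are illustrative assumptions, not the study's 1,025-variable dataset:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def trend_slope(series):
    """Least-squares slope of a vital-sign time series (the 'trend' feature)."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

# Synthetic heart-rate-like series: arrest cases drift downward, controls stay flat
arrest = [100 - 0.8 * np.arange(60) + rng.normal(0, 3, 60) for _ in range(30)]
control = [100 + rng.normal(0, 3, 60) for _ in range(30)]

# Feature matrix: one row per patient, [mean level, trend slope]
X = np.array([[np.mean(s), trend_slope(s)] for s in arrest + control])
y = np.array([1] * 30 + [0] * 30)          # 1 = arrest, 0 = control

clf = SVC(kernel="linear").fit(X, y)
```

The point mirrors the abstract's finding: a trend feature carries predictive signal that a snapshot of the raw values alone would miss.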
Racette, Lyne; Chiou, Christine Y.; Hao, Jiucang; Bowd, Christopher; Goldbaum, Michael H.; Zangwill, Linda M.; Lee, Te-Won; Weinreb, Robert N.; Sample, Pamela A.
2009-01-01
Purpose To investigate whether combining optic disc topography and short-wavelength automated perimetry (SWAP) data improves the diagnostic accuracy of relevance vector machine (RVM) classifiers for detecting glaucomatous eyes compared to using each test alone. Methods One eye of 144 glaucoma patients and 68 healthy controls from the Diagnostic Innovations in Glaucoma Study were included. RVM were trained and tested with cross-validation on optimized (backward elimination) SWAP features (thresholds plus age; pattern deviation (PD); total deviation (TD)) and on Heidelberg Retina Tomograph II (HRT) optic disc topography features, independently and in combination. RVM performance was also compared to two HRT linear discriminant functions (LDF) and to SWAP mean deviation (MD) and pattern standard deviation (PSD). Classifier performance was measured by the area under the receiver operating characteristic curves (AUROCs) generated for each feature set and by the sensitivities at set specificities of 75%, 90% and 96%. Results RVM trained on combined HRT and SWAP thresholds plus age had significantly higher AUROC (0.93) than RVM trained on HRT (0.88) and SWAP (0.76) alone. AUROCs for the SWAP global indices (MD: 0.68; PSD: 0.72) offered no advantage over SWAP thresholds plus age, while the LDF AUROCs were significantly lower than RVM trained on the combined SWAP and HRT feature set and on HRT alone feature set. Conclusions Training RVM on combined optimized HRT and SWAP data improved diagnostic accuracy compared to training on SWAP and HRT parameters alone. Future research may identify other combinations of tests and classifiers that can also improve diagnostic accuracy. PMID:19528827
Shinde, P B; Aragade, P D; Agrawal, M R; Deokate, U A; Khadabadi, S S
2011-01-01
The objective of this work was to develop and validate a simple, rapid, precise, and accurate high performance thin layer chromatography method for simultaneous determination of withanolide A and bacoside A in combined dosage form. The stationary phase used was silica gel G60F254. The mobile phase used was a mixture of ethyl acetate: methanol: toluene: water (4:1:1:0.5 v/v/v/v). The detection of spots was carried out at 320 nm using absorbance reflectance mode. The method was validated in terms of linearity, accuracy, precision and specificity. The calibration curve was found to be linear from 200 to 800 ng/spot for withanolide A and from 50 to 350 ng/spot for bacoside A. The limit of detection and limit of quantification for withanolide A were found to be 3.05 and 10.06 ng/spot, respectively, and for bacoside A 8.3 and 27.39 ng/spot, respectively. The proposed method can be successfully used to determine the drug content of marketed formulation. PMID:22303073
Metrics for linear kinematic features in sea ice
NASA Astrophysics Data System (ADS)
Levy, G.; Coon, M.; Sulsky, D.
2006-12-01
The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skill are by and large an adaptation of a least-squares "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eulerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against leads interpreted from RGPS observations.
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián
2016-08-01
S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of the statistically processed indices. The main conclusion of the study is that the proposed method accurately calculates sample size in the described situations under the stipulated power criteria.
Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S
2011-07-01
To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO with correlation coefficients r(2) = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7-101.2 %. The proposed method was optimized and validated as per the ICH guidelines.
Monthly monsoon rainfall forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Ganti, Ravikumar
2014-10-01
The Indian agriculture sector depends heavily on monsoon rainfall for successful harvesting. In the past, prediction of rainfall was mainly performed using regression models, which provide reasonable accuracy in the modelling and forecasting of complex physical systems. Recently, Artificial Neural Networks (ANNs) have been proposed as efficient tools for modelling and forecasting. A feed-forward multi-layer perceptron type of ANN architecture trained using the popular back-propagation algorithm was employed in this study. Other techniques investigated for modelling monthly monsoon rainfall include linear and non-linear regression models, for comparison purposes. The data employed in this study include monthly rainfall and the monthly average of the daily maximum temperature in the North Central region of India. Specifically, four regression models and two ANN models were developed. The performance of the various models was evaluated using a wide variety of standard statistical parameters and scatter plots. The results obtained in this study for forecasting monsoon rainfall using ANNs are encouraging. India's economy and agricultural activities can be managed effectively with the help of accurate monsoon rainfall forecasts.
UV Spectrophotometric Determination and Validation of Hydroquinone in Liposome.
Khoshneviszadeh, Rabea; Fazly Bazzaz, Bibi Sedigheh; Housaindokht, Mohammad Reza; Ebrahim-Habibi, Azadeh; Rajabi, Omid
2015-01-01
A method has been developed and validated for the determination of hydroquinone in a liposomal formulation. The samples were dissolved in methanol and evaluated at 293 nm. Validation parameters such as linearity, accuracy, precision, specificity, limit of detection (LOD) and limit of quantitation (LOQ) were determined. The calibration curve was linear in the 1-50 µg/mL range of the hydroquinone analyte with a regression coefficient of 0.9998. This study showed that liposomal hydroquinone composed of phospholipid (7.8%), cholesterol (1.5%), alpha-tocopherol (0.17%) and hydroquinone (0.5%) did not absorb at 293 nm when diluted 500-fold with methanol; the concentration of hydroquinone reached 10 µg/mL after this dilution. Furthermore, various validation parameters as per the ICH Q2B guideline were tested and met. The recovery percentages of liposomal hydroquinone were found to be 102 ± 0.8, 99 ± 0.2 and 98 ± 0.4 for the 80%, 100% and 120% levels, respectively. The relative standard deviation values of inter- and intra-day precisions were <2%. LOD and LOQ were 0.24 and 0.72 µg/mL, respectively.
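LOD and LOQ figures of this kind are commonly estimated with the ICH Q2 formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope. A small sketch (the input numbers are hypothetical, not the paper's calibration statistics):

```python
def lod_loq(residual_sd, slope):
    """ICH Q2 estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
    where sigma is the residual SD of the calibration line and S its slope."""
    return 3.3 * residual_sd / slope, 10.0 * residual_sd / slope

# Hypothetical calibration statistics for illustration
lod, loq = lod_loq(residual_sd=0.012, slope=0.05)
```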
Zhang, Meng-Qi; Jia, Jing-Ying; Lu, Chuan; Liu, Gang-Yi; Yu, Cheng-Yin; Gui, Yu-Zhou; Liu, Yun; Liu, Yan-Mei; Wang, Wei; Li, Shui-Jun; Yu, Chen
2010-06-01
A simple, reliable and sensitive liquid chromatography-isotope dilution mass spectrometry (LC-ID/MS) method was developed and validated for quantification of olanzapine in human plasma. Plasma samples (50 microL) were extracted with tert-butyl methyl ether and an isotope-labeled internal standard (olanzapine-D3) was used. The chromatographic separation was performed on an XBridge Shield RP 18 column (100 mm x 2.1 mm, 3.5 microm, Waters). An isocratic program was used at a flow rate of 0.4 mL x min(-1) with a mobile phase consisting of acetonitrile and ammonium buffer (pH 8). The protonated ions of the analytes were detected in positive ionization mode by multiple reaction monitoring (MRM). The plasma method, with a lower limit of quantification (LLOQ) of 0.1 ng x mL(-1), demonstrated good linearity over a range of 0.1 - 30 ng x mL(-1) of olanzapine. Specificity, linearity, accuracy, precision, recovery, matrix effect and stability were evaluated during method validation. The validated method was successfully applied to analyzing human plasma samples in a bioavailability study.
High Precision Piezoelectric Linear Motors for Operations at Cryogenic Temperatures and Vacuum
NASA Technical Reports Server (NTRS)
Wong, D.; Carman, G.; Stam, M.; Bar-Cohen, Y.; Sen, A.; Henry, P.; Bearman, G.; Moacanin, J.
1995-01-01
The use of an electromechanical device for optically positioning a mirror system during the pre-project phase of the Pluto Fast Flyby mission was evaluated at JPL. The device under consideration was a piezoelectrically driven linear motor, actuated by a time-varying electric field, which induces displacements ranging from submicrons to millimeters with positioning accuracy within nanometers.
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
How genome complexity can explain the difficulty of aligning reads to genomes.
Phan, Vinhthuy; Gao, Shanshan; Tran, Quang; Vo, Nam S
2015-01-01
Although it is frequently observed that aligning short reads to genomes becomes harder if they contain complex repeat patterns, there has not been much effort to quantify the relationship between complexity of genomes and difficulty of short-read alignment. Existing measures of sequence complexity seem unsuitable for the understanding and quantification of this relationship. We investigated several measures of complexity and found that length-sensitive measures of complexity had the highest correlation to accuracy of alignment. In particular, the rate of distinct substrings of length k, where k is similar to the read length, correlated very highly to alignment performance in terms of precision and recall. We showed how to compute this measure efficiently in linear time, making it useful in practice to estimate quickly the difficulty of alignment for new genomes without having to align reads to them first. We showed how the length-sensitive measures could provide additional information for choosing aligners that would align consistently accurately on new genomes. We formally established a connection between genome complexity and the accuracy of short-read aligners. The relationship between genome complexity and alignment accuracy provides additional useful information for selecting suitable aligners for new genomes. Further, this work suggests that the complexity of genomes sometimes should be thought of in terms of specific computational problems, such as the alignment of short reads to genomes.
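The "rate of distinct substrings of length k" can be sketched directly with a set of k-mers. Note this naive version runs in O(nk) time and space; the linear-time computation the authors describe would use a suffix data structure instead:

```python
def distinct_kmer_rate(genome: str, k: int) -> float:
    """Fraction of length-k windows in `genome` that are distinct.

    Naive set-based sketch; a suffix automaton/array would give linear time.
    """
    windows = len(genome) - k + 1
    if windows <= 0:
        return 0.0
    return len({genome[i:i + k] for i in range(windows)}) / windows
```

A repetitive sequence scores low (e.g. "AAAA" with k = 2 gives 1/3), while a sequence whose k-mers are all distinct scores 1.0, matching the intuition that repetitive genomes are harder to align short reads to.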
NASA Astrophysics Data System (ADS)
Liu, Haijian; Wu, Changshan
2018-06-01
Crown-level tree species classification is a challenging task due to the spectral similarity among different tree species. Shadow, underlying objects, and other materials within a crown may decrease the purity of extracted crown spectra and further reduce classification accuracy. To address this problem, an innovative pixel-weighting approach was developed for tree species classification at the crown level. The method utilized high density discrete LiDAR data for individual tree delineation and Airborne Imaging Spectrometer for Applications (AISA) hyperspectral imagery for pure crown-scale spectra extraction. Specifically, three steps were included: 1) individual tree identification using LiDAR data, 2) pixel-weighted representative crown spectra calculation using hyperspectral imagery, in which pixel-based illuminated-leaf fractions estimated using linear spectral mixture analysis (LSMA) were employed as weighting factors, and 3) representative-spectra-based tree species classification performed by applying a support vector machine (SVM) approach. Analysis of results suggests that the developed pixel-weighting approach (OA = 82.12%, Kc = 0.74) performed better than treetop-based (OA = 70.86%, Kc = 0.58) and pixel-majority methods (OA = 72.26%, Kc = 0.62) in terms of classification accuracy. McNemar tests indicated the differences in accuracy between pixel-weighting and treetop-based approaches as well as that between pixel-weighting and pixel-majority approaches were statistically significant.
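Step 2 of the method, the pixel-weighted representative crown spectrum, amounts to a weighted average of the pixel spectra with the illuminated-leaf fractions as weights. A minimal NumPy sketch with invented numbers (four pixels, five bands; not the study's data):

```python
import numpy as np

# Hypothetical crown: 4 pixels x 5 spectral bands
pixel_spectra = np.array([
    [0.10, 0.12, 0.30, 0.45, 0.50],   # partly shaded pixel
    [0.12, 0.15, 0.35, 0.55, 0.60],
    [0.13, 0.16, 0.38, 0.60, 0.66],
    [0.05, 0.06, 0.10, 0.15, 0.18],   # mostly shadow / underlying material
])
# Per-pixel illuminated-leaf fractions (as would come from LSMA) used as weights
leaf_fraction = np.array([0.2, 0.8, 0.9, 0.1])

# Pixel-weighted representative crown spectrum
crown_spectrum = np.average(pixel_spectra, axis=0, weights=leaf_fraction)
```

Down-weighting shadow-dominated pixels in this way is what keeps the representative spectrum closer to pure illuminated foliage than a simple mean would be.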
NASA Astrophysics Data System (ADS)
Naik, Deepak kumar; Maity, K. P.
2018-03-01
Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials which are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in manufacturing industries. Linear dimensions were measured following Taguchi’s L16 orthogonal array design approach. Three levels were selected for each process parameter. In all experiments, a clockwise cut direction was followed. The measurement results were further analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the results give the optimal settings of the process parameter values for this dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.
Assessment of the UV camera sulfur dioxide retrieval for point source plumes
Dalton, M.P.; Watson, I.M.; Nadeau, P.A.; Werner, C.; Morrow, W.; Shannon, J.M.
2009-01-01
Digital cameras, sensitive to specific regions of the ultra-violet (UV) spectrum, have been employed for quantifying sulfur dioxide (SO2) emissions in recent years. The instruments make use of the selective absorption of UV light by SO2 molecules to determine pathlength concentration. Many monitoring advantages are gained by using this technique, but the accuracy and limitations have not been thoroughly investigated. The effect of some user-controlled parameters, including image exposure duration, the diameter of the lens aperture, the frequency of calibration cell imaging, and the use of single or paired bandpass filters, has not yet been addressed. In order to clarify methodological consequences and quantify accuracy, laboratory and field experiments were conducted. Images were collected of calibration cells under varying observational conditions, and our conclusions provide guidance for enhanced image collection. Results indicate that the calibration cell response is reliably linear below 1500 ppm m, but that the response is significantly affected by changing light conditions. Exposure durations that produced maximum image digital numbers above 32 500 counts can reduce noise in plume images. Sulfur dioxide retrieval results from a coal-fired power plant plume were compared to direct sampling measurements and the results indicate that the accuracy of the UV camera retrieval method is within the range of current spectrometric methods. © 2009 Elsevier B.V.
NASA Astrophysics Data System (ADS)
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. 
Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
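The comparison protocol above, mean cross-validated accuracy per algorithm on a labeled multi-class dataset, can be sketched with scikit-learn. The six-class synthetic data below merely stands in for the MSI ground-truth database, and only three of the eight algorithms are shown:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Synthetic 6-class "tissue" data: 60 samples per class, 8 spectral features
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(60, 8)) for c in range(6)])
y = np.repeat(np.arange(6), 60)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
}
# Mean 10-fold cross-validated accuracy per algorithm
mean_acc = {name: cross_val_score(m, X, y, cv=10).mean() for name, m in models.items()}
```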
Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei
2017-12-21
In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes indicates that systems biology has started to fundamentally impact the way drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be of great help in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance and relapse in cancer therapies. In this study, we developed a Ternary-status-based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phosphoproteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways with a set of phosphoproteomic data collected for ten representative small-molecule chemical compounds (most of them studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which are rarely used in the clinic for breast cancer. In the simulation, the proposed approach allowed us to identify a compound's treatment efficacy qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms by which a compound acts on its targets.
NASA Astrophysics Data System (ADS)
Yihaa Roodhiyah, Lisa’; Tjong, Tiffany; Nurhasan; Sutarno, D.
2018-04-01
In recent research, the linear systems of the vector finite element method for two-dimensional (2-D) magnetotelluric (MT) response modeling were solved with a non-sparse direct solver in TE mode. Nevertheless, there are weaknesses to be improved: accuracy at low frequencies (10-3 Hz-10-5 Hz) had not yet been achieved, and computation on dense meshes was costly. In this work, a sparse direct solver is used instead of a non-sparse direct solver to overcome these weaknesses. A sparse direct solver is advantageous for the linear systems of the vector finite element method because the matrices are symmetric and sparse. The sparse direct solver was validated for a homogeneous half-space model and a vertical contact model against analytical solutions. The validation results show that the sparse direct solver is more stable than the non-sparse direct solver in computing linear problems of the vector finite element method, especially at low frequency. In the end, accurate 2-D MT response modeling at low frequencies (10-3 Hz-10-5 Hz) has been achieved with efficient array memory allocation and less computation time.
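The core contrast the abstract draws, sparse versus non-sparse direct solution of a symmetric sparse FEM system, can be sketched with SciPy. The 1-D Laplacian below is just a stand-in for the 2-D MT vector-FEM matrix, chosen because it has the same symmetric, sparse band structure:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Symmetric sparse system K u = b, as arises from a finite-element
# discretization (here a 1-D Laplacian stencil as a small stand-in)
n = 200
K = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

u_sparse = spsolve(K, b)                    # sparse direct solve
u_dense = np.linalg.solve(K.toarray(), b)   # non-sparse reference solve
```

Both solvers return the same solution; the sparse factorization simply avoids storing and factorizing the O(n²) zero entries, which is where the memory and time savings on dense meshes come from.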
DEMNUni: ISW, Rees-Sciama, and weak-lensing in the presence of massive neutrinos
NASA Astrophysics Data System (ADS)
Carbone, Carmelita; Petkova, Margarita; Dolag, Klaus
2016-07-01
We present, for the first time in the literature, a full reconstruction of the total (linear and non-linear) ISW/Rees-Sciama effect in the presence of massive neutrinos, together with its cross-correlations with CMB-lensing and weak-lensing signals. The present analyses make use of all-sky maps extracted via ray-tracing across the gravitational potential distribution provided by the ``Dark Energy and Massive Neutrino Universe'' (DEMNUni) project, a set of large-volume, high-resolution cosmological N-body simulations, where neutrinos are treated as separate collisionless particles. We correctly recover, at 1-2% accuracy, the linear predictions from CAMB. Concerning the CMB-lensing and weak-lensing signals, we also recover, with similar accuracy, the signal predicted by Boltzmann codes, once non-linear neutrino corrections to HALOFIT are accounted for. Interestingly, in the ISW/Rees-Sciama signal, and in its cross-correlation with lensing, we find an excess of power with respect to the massless case, due to neutrino free streaming, roughly at the transition scale between the linear and non-linear regimes. The excess is ~ 5 - 10% at l ~ 100 for the ISW/Rees-Sciama auto power spectrum, depending on the total neutrino mass Mν, and becomes a factor of ~ 4 for Mν = 0.3 eV, at l ~ 600, for the ISW/Rees-Sciama cross power with CMB-lensing. This effect should be taken into account for the correct estimation of the CMB temperature bispectrum in the presence of massive neutrinos.
Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui
2016-01-01
Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) is particularly important in dentistry, as it affects the effectiveness of diagnosis, treatment planning, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups: NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by a 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurements by paired t-test. Geometric deviations were assessed by the root mean square value of the superimposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for the NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for the NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for the VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774).
No statistically significant difference was found in geometric measurement between the NewTom VG 0.15 mm and NewTom VG 0.30 mm groups (P = 0.999), while a significant difference was found between the VATECH DCTPRO 0.30 mm and NewTom VG 0.30 mm groups (P = 0.006). Conclusions: 3D reconstruction from CBCT data can achieve high linear, volumetric, and geometric accuracy. Increasing voxel resolution from 0.30 to 0.15 mm does not result in increased accuracy of 3D tooth reconstruction, while different systems can affect the accuracy. PMID:27270544
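The geometric deviation reported above is a root-mean-square (RMS) value over surface distances between the superimposed models; for a set of closest-point distances it reduces to the following (hypothetical values, not the study's data):

```python
import math

# Hypothetical closest-point distances (mm) between a CBCT reconstruction
# and the optical reference scan, sampled at surface points.
distances = [0.10, 0.12, 0.09, 0.15, 0.11, 0.13]

def rms(values):
    """Root mean square of a list of values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

deviation = rms(distances)
print(round(deviation, 4))  # -> 0.1183
```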
Vathsangam, Harshvardhan; Emken, Adar; Schroeder, E. Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S.
2011-01-01
This paper describes an experimental study in estimating energy expenditure from treadmill walking using a single hip-mounted triaxial inertial sensor comprised of a triaxial accelerometer and a triaxial gyroscope. Typical physical activity characterization using accelerometer-generated counts suffers from two drawbacks - imprecision (due to proprietary counts) and incompleteness (due to incomplete movement description). We address these problems in the context of steady-state walking by directly estimating energy expenditure with data from a hip-mounted inertial sensor. We represent the cyclic nature of walking with a Fourier transform of the sensor streams and show how one can map this representation to energy expenditure (as measured by VO2 consumption, mL/min) using three regression techniques - Least Squares Regression (LSR), Bayesian Linear Regression (BLR) and Gaussian Process Regression (GPR). We perform a comparative analysis of the accuracy of sensor streams in predicting energy expenditure (measured by RMS prediction accuracy). Triaxial information is more accurate than uniaxial information. LSR-based approaches are prone to outlier sensitivity and overfitting. Gyroscopic information showed equivalent if not better prediction accuracy compared to accelerometers. Combining accelerometer and gyroscopic information provided better accuracy than using either sensor alone. We also analyze the best algorithmic approach among linear and nonlinear methods as measured by RMS prediction accuracy and run time. Nonlinear regression methods showed better prediction accuracy but required an order of magnitude more run time. This paper emphasizes the role of probabilistic techniques in conjunction with joint modeling of triaxial accelerations and rotational rates to improve energy expenditure prediction for steady-state treadmill walking. PMID:21690001
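A miniature version of the described pipeline — Fourier magnitudes of a cyclic sensor stream fed into a least-squares regression onto energy expenditure — can be sketched as follows (synthetic signals and made-up VO2 targets; the paper's actual feature set and regression models are not reproduced here):

```python
import cmath
import math

def dft_magnitudes(signal, n_harmonics):
    """Magnitudes of the first few Fourier harmonics of one cycle window."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(1, n_harmonics + 1)]

# Synthetic 'vertical acceleration' for two walking intensities: the
# faster walk has a larger fundamental amplitude.
slow = [math.sin(2 * math.pi * t / 32) for t in range(32)]
fast = [2.0 * math.sin(2 * math.pi * t / 32) for t in range(32)]

f_slow = dft_magnitudes(slow, 3)
f_fast = dft_magnitudes(fast, 3)

# One-feature least-squares fit of (made-up) VO2 values on the
# fundamental magnitude, through the origin for brevity.
x = [f_slow[0], f_fast[0]]
y = [600.0, 1200.0]  # mL/min, hypothetical targets
slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
predict = lambda mag: slope * mag
print(round(predict(f_fast[0]), 1))  # -> 1200.0
```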
Accuracy of Carotid Duplex Criteria in Diagnosis of Significant Carotid Stenosis in Asian Patients.
Dharmasaroja, Pornpatr A; Uransilp, Nattaphol; Watcharakorn, Arvemas; Piyabhan, Pritsana
2018-03-01
Extracranial carotid stenosis can be diagnosed by velocity criteria of carotid duplex. Whether these criteria accurately define the severity of internal carotid artery (ICA) stenosis in Asian patients remains to be verified. The purpose of this study was to evaluate the accuracy of 2 carotid duplex velocity criteria in defining significant carotid stenosis. Carotid duplex studies and magnetic resonance angiography were reviewed. Criteria 1 were those recommended by the Society of Radiologists in Ultrasound; moderate stenosis (50%-69%): peak systolic velocity (PSV) 125-230 cm/s, diastolic velocity (DV) 40-100 cm/s; severe stenosis (>70%): PSV greater than 230 cm/s, DV greater than 100 cm/s. Criteria 2 used PSV greater than 140 cm/s, DV less than 110 cm/s to define moderate stenosis (50%-75%) and PSV greater than 140 cm/s, DV greater than 110 cm/s for severe stenosis (76%-95%). A total of 854 ICA segments were reviewed. There was moderate stenosis in 72 ICAs, severe stenosis in 50 ICAs, and occlusion in 78 ICAs. Criteria 2 had slightly lower sensitivity but higher specificity and accuracy than criteria 1 in detecting moderate stenosis (criteria 1: sensitivity 95%, specificity 83%, accuracy 84%; criteria 2: sensitivity 92%, specificity 92%, and accuracy 92%). However, in detection of severe ICA stenosis, no significant difference in sensitivity, specificity, and accuracy was found (criteria 1: sensitivity 82%, specificity 99.57%, accuracy 98%; criteria 2: sensitivity 86%, specificity 99.68%, and accuracy 99%). In the subgroup of moderate stenosis, the criteria using ICA PSV greater than 140 cm/s had higher specificity and accuracy than the criteria using ICA PSV 125-230 cm/s. However, there was no significant difference in detection of severe stenosis or occlusion of ICA. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
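The sensitivity, specificity and accuracy figures quoted are the standard contingency-table quantities obtained by applying a velocity cut-off; with toy labelled segments (not the study's cases) they are computed as:

```python
# Hypothetical ICA segments: (peak systolic velocity in cm/s, truly severe?)
segments = [(250, True), (310, True), (180, False), (120, False),
            (95, False), (260, True), (200, True), (110, False)]

def evaluate(cutoff, data):
    """Sensitivity, specificity and accuracy of a PSV > cutoff rule."""
    tp = sum(1 for v, d in data if v > cutoff and d)
    fn = sum(1 for v, d in data if v <= cutoff and d)
    tn = sum(1 for v, d in data if v <= cutoff and not d)
    fp = sum(1 for v, d in data if v > cutoff and not d)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(data)}

# SRU-style threshold for severe stenosis: PSV > 230 cm/s.
print(evaluate(230, segments))
# -> {'sensitivity': 0.75, 'specificity': 1.0, 'accuracy': 0.875}
```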
Integrated control design for driver assistance systems based on LPV methods
NASA Astrophysics Data System (ADS)
Gáspár, Péter; Németh, Balázs
2016-12-01
The paper proposes a control design method for a driver assistance system. In operation, the system follows a predefined trajectory requested by the driver through a steering command. During manoeuvres the control system generates a differential brake moment and the auxiliary front-wheel steering angle, and changes the camber angles of the wheels in order to improve the tracking of the road trajectory. The performance specifications are guaranteed by the local controllers, i.e. the brake, the steering, and the suspension systems, while the coordination of these components is provided by the supervisor. The advantage of this architecture is that the local controllers are designed independently, which is ensured by the fact that the monitoring signals are taken into consideration in the formalisation of their performance specifications. Fault-tolerant control can be achieved by incorporating the detected fault signals in these performance specifications. The control system also uses a driver model, with which the reference signal can be generated. In the control design, the parameter-dependent linear parameter-varying (LPV) method, which meets the performance specifications, is used. The operation of the control system is illustrated through different normal and emergency vehicle manoeuvres with high-accuracy simulation software.
Validity of Bioelectrical Impedance Analysis to Estimate Fat-Free Mass in Army Cadets.
Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M
2016-03-11
Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations and a newly developed specific equation for FFM estimation, with dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The published predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
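The Bland-Altman method used in the validation reduces to the mean difference (bias) and the bias ± 1.96 SD limits of agreement; a minimal sketch on made-up BIA/DXA fat-free mass pairs:

```python
import statistics

# Hypothetical fat-free mass (kg): BIA prediction vs DXA reference.
bia = [55.2, 60.1, 58.4, 62.0, 57.3, 59.8]
dxa = [54.0, 61.5, 57.0, 63.2, 56.1, 60.4]

diffs = [b - d for b, d in zip(bia, dxa)]
bias = statistics.mean(diffs)          # systematic over/under-estimation
sd = statistics.stdev(diffs)           # sample SD of the differences
limits = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
print(round(bias, 3), [round(l, 3) for l in limits])
```

Wide limits of agreement (relative to a clinically acceptable error) are what flag a predictive equation as poorly matched to the reference method, even when the bias itself is small.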
Mishra, Shikha; Aeri, Vidhu
2017-12-01
Saraca asoca Linn. (Caesalpiniaceae) is an important traditional remedy for gynaecological disorders and contains lyoniside, an aryl tetralin lignan glycoside. The aglycone of lyoniside, lyoniresinol, possesses structural similarity to enterolignan precursors, which are established phytoestrogens. This work illustrates the biotransformation of lyoniside to lyoniresinol using Woodfordia fruticosa Kurz. (Lythraceae) flowers and the simultaneous quantification of lyoniside and lyoniresinol using a validated HPTLC method. The aqueous extract prepared from S. asoca bark was fermented using W. fruticosa flowers. The substrate and the fermented product were analyzed simultaneously using the solvent system toluene:ethyl acetate:formic acid (4:3:0.4) at 254 nm. The method was validated for specificity, accuracy, precision, linearity, sensitivity and robustness as per ICH guidelines. The substrate showed the presence of lyoniside; however, it decreased as the fermentation proceeded. On the 3rd day, lyoniresinol started appearing in the medium. Within 8 days most of the lyoniside was converted to lyoniresinol. The developed method was specific for lyoniside and lyoniresinol. Lyoniside and lyoniresinol showed linearity in the ranges of 250-3000 and 500-2500 ng, respectively. The method was accurate, giving 99.84% and 99.83% recovery for lyoniside and lyoniresinol, respectively. The aryl tetralin lignan glycoside lyoniside was successfully transformed into lyoniresinol using W. fruticosa flowers, and their contents were simultaneously analyzed using the developed and validated HPTLC method.
Nazare, P; Massaroti, P; Duarte, L F; Campos, D R; Marchioretto, M A M; Bernasconi, G; Calafatti, S; Barros, F A P; Meurer, E C; Pedrazzoli, J; Moraes, L A B
2005-09-01
A simple, sensitive and specific liquid chromatography-tandem mass spectrometry method for the quantification of bromopride I in human plasma is presented. Sample preparation consisted of the addition of procainamide II as the internal standard (IS), liquid-liquid extraction under alkaline conditions using hexane-ethyl acetate (1 : 1, v/v) as the extracting solvent, followed by centrifugation, evaporation of the solvent and sample reconstitution in acetonitrile. Both I and the IS were analyzed using a C18 column with a mobile phase of acetonitrile-water (0.1% formic acid). The eluted compounds were monitored using electrospray tandem mass spectrometry. The analyses were carried out by multiple reaction monitoring (MRM) using the parent-to-daughter combinations of m/z 344.20 > 271.00 and m/z 236.30 > 163.10. The peak areas of the analyte and IS were used for quantification of I. The achieved limit of quantification was 1.0 ng/ml, and the assay exhibited a linear dynamic range of 1-100.0 ng/ml with a correlation coefficient (r) of 0.995 or better. Validation results on linearity, specificity, accuracy, precision and stability, as well as application to the analysis of samples taken up to 24 h after oral administration of 10 mg of I in healthy volunteers, demonstrated the applicability of the method to bioequivalence studies.
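The linear dynamic range and correlation coefficient come from an ordinary least-squares calibration line of response versus concentration; a minimal sketch with made-up peak-area ratios (not the study's data):

```python
import math

# Hypothetical calibration: concentration (ng/mL) vs analyte/IS area ratio.
conc  = [1, 5, 10, 25, 50, 100]
ratio = [0.021, 0.100, 0.198, 0.510, 0.995, 2.010]

n = len(conc)
mx, my = sum(conc) / n, sum(ratio) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, ratio))
sxx = sum((x - mx) ** 2 for x in conc)
syy = sum((y - my) ** 2 for y in ratio)

slope = sxy / sxx                      # response per ng/mL
intercept = my - slope * mx
r = sxy / math.sqrt(sxx * syy)         # Pearson correlation coefficient
print(round(slope, 5), round(intercept, 5), round(r, 4))
```

An r of 0.995 or better over the claimed range is the acceptance criterion quoted in the abstract; unknowns are then quantified as (ratio − intercept) / slope.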
Resolution performance of a 0.60-NA, 364-nm laser direct writer
NASA Astrophysics Data System (ADS)
Allen, Paul C.; Buck, Peter D.
1990-06-01
ATEQ has developed a high-resolution laser scanning printing engine based on the 8-beam architecture of the CORE-2000. This printing engine has been incorporated into two systems: the CORE-2500 for the production of advanced masks and reticles, and a prototype system for direct write on wafers. The laser direct writer incorporates a through-the-lens alignment system and a rotary chuck for theta alignment. Its resolution performance is delivered by a 0.60-NA laser scan lens and a novel air-jet focus system. The short-focal-length, high-resolution lens also reduces beam position errors, thereby improving overall pattern accuracy. In order to take advantage of the high-NA optics, a high-performance focus servo was developed, capable of dynamic focus with a maximum error of 0.15 µm. The focus system uses a hot-wire anemometer to measure air flow through an orifice abutting the wafer, providing a direct measurement to the top surface of resist independent of substrate properties. Lens specifications are presented and compared with the previous design. Bench data of spot size vs. entrance pupil filling show spot size performance down to 0.35 µm FWHM. The lens has a linearity specification of 0.05 µm; system measurements of lens linearity indicate performance substantially below this. The aerial image of the scanned beams is measured using resist as a threshold detector. An effective spot size is
RP-HPLC ANALYSIS OF ACIDIC AND BASIC DRUGS IN SYSTEMS WITH DIETHYLAMINE AS ELUENT ADDITIVE.
Petruczynik, Anna; Wroblewski, Karol; Strozek, Szymon; Waksmundzka-Hajnos, Monika
2016-11-01
The chromatographic behavior of some basic and acidic drugs was studied on C18, Phenyl-Hexyl and Polar RP columns with methanol or acetonitrile as organic modifiers of aqueous mobile phases containing diethylamine. Diethylamine plays a double role: silanol-blocking reagent in the analysis of basic drugs and ion-pair reagent in the analysis of acidic drugs. The most symmetrical peaks and the highest system efficiency were obtained on the Phenyl-Hexyl and Polar RP columns in the tested mobile phase systems, compared to the results obtained on the C18 column. A new rapid, simple, specific and accurate reversed-phase liquid chromatographic method was developed for the simultaneous determination of atorvastatin (an antihyperlipidemic drug) and amlodipine (a calcium channel blocker) in one pharmaceutical formulation. Atorvastatin is an acidic compound while amlodipine is a basic substance. The chromatographic separation was carried out on a Phenyl-Hexyl column in gradient elution mode with acetonitrile as organic modifier, acetate buffer at pH 3.5, and 0.025 mol/L diethylamine. The proposed method was validated for specificity, precision, accuracy, linearity, and robustness. Linearity was obtained for atorvastatin and amlodipine in the range 5-100 μg/mL, with limits of detection (LOD) of 3.2750 μg/mL and 3.2102 μg/mL, respectively. The proposed method made use of DAD as a tool for peak identity and purity confirmation.
Pezo, Davinson; Navascués, Beatriz; Salafranca, Jesús; Nerín, Cristina
2012-10-01
Ethyl Lauroyl Arginate (LAE) is a cationic tensoactive compound, soluble in water, with a wide activity spectrum against moulds and bacteria. LAE has been incorporated as an antimicrobial agent into packaging materials for food contact, and these materials are required to comply with specific migration criteria. In this paper, an analytical procedure has been developed and optimized for the analysis of LAE in food simulants after migration tests. It consists of the formation of an ion pair between LAE and the inorganic complex Co(SCN)(4)(2-) in aqueous solution, followed by liquid-liquid extraction into a suitable organic solvent and subsequent UV-Vis absorbance measurement. In order to evaluate possible interferences, the ion pair was also analyzed by high performance liquid chromatography with UV-Vis detection. Both procedures provided similar analytical characteristics, with linear ranges from 1.10 to 25.00 mg kg(-1), linearity higher than 0.9886, limits of detection and quantification of 0.33 and 1.10 mg kg(-1), respectively, accuracy better than 1% as relative error, and precision better than 3.6% expressed as RSD. Optimization of the analytical techniques, thermal and chemical stability of LAE, as well as migration kinetics of LAE from experimental active packaging are reported and discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
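The quoted limits of detection and quantification are consistent with the common 3σ/S and 10σ/S definitions; a minimal sketch with a hypothetical blank standard deviation and calibration slope chosen to reproduce the 0.33/1.10 mg kg(-1) pair:

```python
# Detection limits from the standard deviation of blank responses (sigma)
# and the calibration slope (S). Both numbers below are made up, chosen
# only to reproduce the LOD/LOQ pair reported in the abstract.
sigma = 0.0011   # SD of blank absorbance responses (hypothetical)
slope = 0.0100   # absorbance units per mg kg^-1 (hypothetical)

lod = 3 * sigma / slope      # limit of detection
loq = 10 * sigma / slope     # limit of quantification
print(round(lod, 2), round(loq, 2))  # -> 0.33 1.1
```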
Pujeri, Sudhakar S.; Khader, Addagadde M. A.; Seetharamappa, Jaldappagari
2012-01-01
A simple, rapid and stability-indicating reversed-phase liquid chromatographic method was developed for the assay of varenicline tartrate (VRT) in the presence of its degradation products generated from forced decomposition studies. The HPLC separation was achieved on a C18 Inertsil column (250 mm × 4.6 mm i.d., 5 μm particle size) employing a mobile phase consisting of ammonium acetate buffer containing trifluoroacetic acid (0.02 M; pH 4) and acetonitrile in gradient mode with a flow rate of 1.0 mL min−1. The UV detector was operated at 237 nm while the column temperature was maintained at 40 °C. The developed method was validated as per ICH guidelines with respect to specificity, linearity, precision, accuracy, robustness and limit of quantification. The method was found to be simple, specific, precise and accurate. Selectivity of the proposed method was validated by subjecting the stock solution of VRT to acidic, basic, photolytic, oxidative and thermal degradation. The calibration curve was found to be linear in the concentration range of 0.1-192 μg mL−1 (R2 = 0.9994). The peaks of the degradation products did not interfere with that of pure VRT. The utility of the developed method was examined by analyzing tablets containing VRT. The results of the analysis were subjected to statistical evaluation. PMID:22396908