Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
Abstract We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
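The core of the approach — learning an estimator that is a polynomial function of feature values by minimizing its error against true accuracy — can be sketched as follows. This is an illustrative stand-in, not Facet's actual feature set or regression formulation: the hypothetical `fit_accuracy_estimator` below fits a degree-one (linear) estimator by plain gradient descent on mean squared error over benchmarks whose true accuracy is known.

```python
def fit_accuracy_estimator(features, accuracies, lr=0.3, epochs=1000):
    """Fit weights w so that the estimator E(f) = sum_j w[j]*f[j]
    tracks true accuracy, by gradient descent on the mean squared error.
    (A stand-in for the regression formulations the abstract describes.)"""
    k, n = len(features[0]), len(features)
    w = [0.0] * k
    for _ in range(epochs):
        grad = [0.0] * k
        for f, a in zip(features, accuracies):
            err = sum(wj * fj for wj, fj in zip(w, f)) - a
            for j in range(k):
                grad[j] += 2.0 * err * f[j] / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w
```

In practice the training pairs would come from benchmark alignments with known reference alignments, so the true accuracy of each example is available.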
Chai, Rui; Xu, Li-Sheng; Yao, Yang; Hao, Li-Ling; Qi, Lin
2017-01-01
This study analyzed the ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO), and peripheral resistance (RS) of central pulse waves measured invasively and non-invasively. The invasively measured parameters were compared with parameters estimated from brachial pulse waves by a regression model and by a transfer function model, and the accuracy of the parameters estimated by the two models was also compared. The findings showed that the k value and the other parameters of the invasively measured central and brachial pulse waves were positively correlated. The regression model parameters, including A_slope, DBP, and SEVR, and the transfer function model parameters both showed good consistency with the invasively measured parameters, and to the same degree. SBP, PP, SV, and CO could be calculated with the regression model, but less accurately than with the transfer function model.
Doubly stochastic radial basis function methods
NASA Astrophysics Data System (ADS)
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant-shape-parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
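A minimal, deterministic sketch of LOOCV-based shape-parameter selection for a Gaussian RBF interpolant. The paper's method makes the LOOCV estimate stochastic and treats the shape as a random variable; this plain brute-force version only illustrates the underlying selection criterion:

```python
import numpy as np

def rbf_fit(x, y, eps):
    """Coefficients of a Gaussian-RBF interpolant for 1-D nodes x, values y."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(A, y)

def loocv_error(x, y, eps):
    """Brute-force leave-one-out cross-validation error for shape eps."""
    err = 0.0
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        c = rbf_fit(x[keep], y[keep], eps)
        pred = np.exp(-(eps * (x[i] - x[keep])) ** 2) @ c
        err += (pred - y[i]) ** 2
    return err

def select_shape(x, y, candidates):
    """Pick the candidate shape parameter with the smallest LOOCV error."""
    return min(candidates, key=lambda e: loocv_error(x, y, e))
```

Fast implementations replace the inner loop with Rippa's closed-form LOOCV formula; the O(n^2) overhead quoted in the abstract refers to the authors' setup, not this naive sketch.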
Analysis of energy-based algorithms for RNA secondary structure prediction
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-01-01
Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. 
Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
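The bootstrap percentile method used above to judge the reliability of average F-measure estimates can be sketched as follows (illustrative only; the values and sample sizes are made up):

```python
import random

def bootstrap_percentile_ci(values, n_boot=2000, alpha=0.05, seed=1):
    """Bootstrap percentile confidence interval for the mean of a set of
    per-RNA F-measures: resample with replacement, compute the mean of
    each resample, and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]
```

On a large dataset the interval is narrow (the "within a 2% range" claim above); on a small class such as the 89 Group I introns it widens enough that rankings of algorithms become unreliable.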
Chai Rui; Li Si-Man; Xu Li-Sheng; Yao Yang; Hao Li-Ling
2017-07-01
This study mainly analyzed parameters such as the ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO), and peripheral resistance (RS) of central pulse waves measured invasively and non-invasively. The parameters extracted from the invasively measured central pulse wave were compared with the parameters estimated from brachial pulse waves by a regression model and by a transfer function model. The accuracy of the parameters estimated by the regression model and the transfer function model was also compared. Our findings showed that the k value and the other parameters of the invasively measured central and brachial pulse waves were positively correlated. Both the regression model parameters, including A_slope, DBP, and SEVR, and the transfer function model parameters had good consistency with the invasively measured parameters, and to the same degree. The regression equations of the three parameters were expressed as Y'=a+bx. The SBP, PP, SV, and CO of the central pulse wave could be calculated through the regression model, but less accurately than with the transfer function model.
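The quoted regression form Y' = a + bx is ordinary least squares; a minimal sketch (with made-up numbers, not the study's calibration data):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for a calibration line Y' = a + b*x,
    the form of regression equation quoted in the abstract."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept a, slope b
```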
Relaxed Fidelity CFD Methods Applied to Store Separation Problems
2004-06-01
accuracy-productivity characteristics of influence function methods and time-accurate CFD methods. Two methods are presented in this paper, both of...which provide significant accuracy improvements over influence function methods while providing rapid enough turn around times to support parameter and
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities, serving as a learning algorithm for both classification and linear or nonlinear kernel regression. SVM computes the best linear separator in the input feature space according to the training data; to classify data that are not linearly separable, it uses the kernel trick to transform the data into a higher-dimensional feature space where they become linearly separable. The kernel trick uses various kernel functions, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels, and each function has parameters that affect the classification accuracy of the SVM. A weakness of SVM, however, is that the optimal parameter values are difficult to determine. To solve this problem, genetic algorithms are proposed as the search procedure for optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy: the genetic algorithm systematically finds optimal kernel parameters instead of relying on randomly selected ones. The best accuracies were improved from the baselines of 85.12% (linear kernel), 81.76% (polynomial), 77.22% (RBF), and 78.70% (sigmoid). However, for bigger data sizes this method is not practical because it takes a lot of time.
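A minimal sketch of a genetic-algorithm search over kernel parameters. In the paper's setting the fitness function would be cross-validated SVM accuracy as a function of, e.g., (log10 C, log10 gamma); since that depends on the dataset, the demo below substitutes a synthetic surrogate with a known peak. Everything here is illustrative, not the paper's implementation:

```python
import random

def ga_optimize(fitness, bounds, pop_size=30, gens=50, sigma=0.2, seed=7):
    """Minimal real-valued genetic algorithm: truncation selection,
    arithmetic crossover, Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, d):
        lo, hi = bounds[d]
        return min(max(v, lo), hi)

    pop = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [pop[0]]                      # elitism: keep the best
        while len(children) < pop_size:
            p, q = rng.sample(parents, 2)
            children.append([clip((pi + qi) / 2 + rng.gauss(0, sigma), d)
                             for d, (pi, qi) in enumerate(zip(p, q))])
        pop = children
    return max(pop, key=fitness)
```

For real use, `fitness` would train and cross-validate an SVM at each candidate (C, gamma), which is exactly the cost that makes the method impractical for large datasets, as the abstract notes.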
Identification of optimal soil hydraulic functions and parameters for predicting soil moisture
We examined the accuracy of several commonly used soil hydraulic functions and associated parameters for predicting observed soil moisture data. We used six combined methods formed by three commonly used soil hydraulic functions – i.e., Brooks and Corey (1964) (BC), Campbell (19...
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
PMID:25612692
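The curvature-based error estimate described above can be illustrated in the drastically simplified case of a static (single-state) molecule, where the likelihood is binomial in the photon colors; the two-state kinetic model of the paper is much richer:

```python
import math

def fret_mle(n_acceptor, n_total):
    """ML estimate of the apparent FRET efficiency from photon colors for a
    *static* molecule, with the standard deviation obtained, as in the
    paper, from the curvature of the log-likelihood at its maximum:
      d2/dE2 log L = -n_a/E^2 - (n - n_a)/(1 - E)^2
    which at E = n_a/n gives var(E) = E*(1 - E)/n."""
    E = n_acceptor / n_total
    return E, math.sqrt(E * (1.0 - E) / n_total)
```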
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the measured parameters to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean square deviation from the true value was 7.7% for the whole-kidney time-to-peak measurement and 4.5% for the relative function measurement. These results showed a measure of consistency in the relative function and time-to-peak similar to the results reported in a previous renogram audit by our group. Analysis of the audit data suggests a reasonable degree of accuracy in the quantification of renographic function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
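The audit's error measure, root mean square deviation from the phantom's known true value, is simply (numbers below are made up, not audit results):

```python
import math

def rms_deviation(measured, true_value):
    """Root mean square deviation of reported values from the true value."""
    return math.sqrt(sum((m - true_value) ** 2 for m in measured) / len(measured))
```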
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm converges even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
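The trust-region accept/reject and radius-update logic underlying this analysis can be sketched as follows. This is a bare-bones version using a linear model and its Cauchy-point step; the report's algorithms use quadratic models and study tolerance to inexact gradients:

```python
import math

def trust_region_min(f, grad, x0, delta=1.0, iters=60):
    """Bare-bones trust-region method: step to the Cauchy point of a
    linear model, accept when the ratio of actual to predicted reduction
    is good, and adapt the trust-region radius accordingly."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        gnorm = math.sqrt(sum(gi * gi for gi in g)) or 1e-12
        trial = [xi - delta * gi / gnorm for xi, gi in zip(x, g)]
        predicted = delta * gnorm              # model decrease along -g
        rho = (f(x) - f(trial)) / predicted
        if rho > 0.1:
            x = trial                          # accept the step
        if rho > 0.75:
            delta *= 2.0                       # model trusted: expand radius
        elif rho < 0.25:
            delta *= 0.5                       # model poor: shrink radius
    return x
```

The point of the report is that `grad` may be cheap and noisy: the acceptance test keeps the iteration convergent even with sizable relative gradient errors.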
Automatic classification of protein structures using physicochemical parameters.
Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam
2014-09-01
Protein classification is the first step to functional annotation; SCOP and Pfam are currently the most relevant protein classification schemes. However, the disproportion between the number of three-dimensional (3D) protein structures generated and their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting the function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest, and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family, using sequence-derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure, were used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both the physicochemical-parameter-based and spectrophore-based machine learning algorithms. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracy ranging from 90% to 96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
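One example of a sequence-derived physicochemical parameter is average hydropathy (the GRAVY score, using the Kyte-Doolittle index). The paper combines many such features; this particular one is shown only as an illustration of the feature-extraction step:

```python
# Kyte-Doolittle hydropathy index, one example of a sequence-derived
# physicochemical parameter.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def mean_hydropathy(seq):
    """Average hydropathy (GRAVY score) of an amino-acid sequence."""
    return sum(KD[aa] for aa in seq.upper()) / len(seq)
```

Vectors of such per-sequence features are what the Naive Bayes, tree, and SVM classifiers above are trained on.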
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into a three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g., diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide development of CTP imaging technology for better quantification accuracy and lower radiation dose.
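A common concrete instance of the deconvolution step is truncated-SVD inversion of the convolution between the arterial input function (AIF) and the flow-scaled residue function, with the truncation threshold playing the role of the regularization strength discussed above. The sketch below is illustrative, not the paper's formulation:

```python
import numpy as np

def deconvolve_tsvd(aif, tissue, dt, rel_tol=1e-3):
    """Recover the flow-scaled residue function CBF*R(t) from a tissue
    concentration curve by truncated-SVD deconvolution.  `rel_tol` is the
    regularization knob: singular values below rel_tol*s_max are discarded."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1] * dt   # lower-triangular convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_tol * s[0], 1.0 / s, 0.0)  # truncate small modes
    return Vt.T @ (s_inv * (U.T @ tissue))
```

Stronger truncation suppresses noise but biases the recovered perfusion parameters, which is exactly the accuracy trade-off the cascaded systems analysis quantifies.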
Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G
2015-11-01
Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ) as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for differences between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. 
In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J
2016-10-01
To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.
Representing Functions in n Dimensions to Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2007-01-01
A method of approximating a scalar function of n independent variables (where n is a positive integer) to arbitrary accuracy has been developed. This method is expected to be attractive for use in engineering computations in which it is necessary to link global models with local ones or in which it is necessary to interpolate noiseless tabular data that have been computed from analytic functions or numerical models in n-dimensional spaces of design parameters.
NASA Astrophysics Data System (ADS)
Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue
2017-08-01
On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods for estimating the MTF, such as the pinhole method and the slit method. Among them, the knife-edge method is efficient and easy to use, and is recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the accuracy of the Edge Spread Function (ESF) fitting, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit accurate parameters adaptively, with insensitivity to the initial parameters. Numerical simulation results demonstrate the accuracy and robustness of the optimized algorithm under different SNRs, edge directions, and leaning angles. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard ISO 12233 knife-edge method for MTF estimation.
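A sketch of the knife-edge pipeline: fit the Fermi ESF model, differentiate to get the line spread function (LSF), and Fourier-transform to the MTF. The exhaustive grid fit below is a crude stand-in for the Powell refinement the paper proposes, and all sample values are synthetic:

```python
import numpy as np

def fermi_esf(x, a, b, c, d):
    """Fermi-function ESF model: a/(1 + exp((x - b)/c)) + d."""
    return a / (1.0 + np.exp((x - b) / c)) + d

def fit_edge_and_mtf(x, esf_samples, b_grid, c_grid, a=1.0, d=0.0):
    """Fit the edge position b and width c by exhaustive search (a crude
    stand-in for Powell refinement), then derive the LSF and MTF."""
    b, c = min(((bb, cc) for bb in b_grid for cc in c_grid),
               key=lambda bc: np.sum((fermi_esf(x, a, *bc, d) - esf_samples) ** 2))
    lsf = -np.gradient(fermi_esf(x, a, b, c, d), x)  # LSF = -d(ESF)/dx
    mtf = np.abs(np.fft.rfft(lsf))
    return b, c, mtf / mtf[0]                        # MTF normalized at DC
```

In the paper's setting, Powell's derivative-free minimizer replaces the grid search, which is what makes the fit robust to poor initial parameter values.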
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
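As an illustration of Tikhonov regularization applied to a first-kind convolution equation, the sketch below uses a single regularization parameter with an identity (zeroth-order) stabilizer in the Fourier domain; the paper's two-parameter stabilizing functions are not reproduced here:

```python
import numpy as np

def tikhonov_deconvolve(g, h, lam):
    # Tikhonov-regularized solution of the convolution equation g = h * f
    # in the Fourier domain, with an identity stabilizer:
    #   F = conj(H) G / (|H|^2 + lam)
    G = np.fft.fft(g)
    H = np.fft.fft(h, n=g.size)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(F))

# Blur a boxcar signal with a 5-tap moving average, then invert
n = 128
f = np.zeros(n); f[40:60] = 1.0
h = np.zeros(n); h[:5] = 0.2
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
f_hat = tikhonov_deconvolve(g, h, lam=1e-4)
```

In practice the regularization parameter trades noise amplification (too small) against oversmoothing (too large), which is why its choice is the central question addressed in the paper.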
Pirich, Christian; Keinrath, Peter; Barth, Gabriele; Rendl, Gundula; Rettenbacher, Lukas; Rodrigues, Margarida
2017-03-01
IQ SPECT consists of a new pinhole-like collimator, cardio-centric acquisition, and advanced 3D iterative SPECT reconstruction. The aim of this paper was to compare the diagnostic accuracy and functional parameters obtained with IQ SPECT versus conventional SPECT in patients undergoing myocardial perfusion scintigraphy with adenosine stress and at rest. Eight patients with known or suspected coronary artery disease underwent [99mTc] tetrofosmin gated SPECT. Acquisition was performed on a Symbia T6 equipped with IQ SPECT and on a conventional gamma camera system. Gated SPECT data were used to calculate functional parameters. Score analysis was performed on a 17-segment model. Coronary angiography and clinical follow-up were considered as the diagnostic reference standard. Mean acquisition time was 4 minutes with IQ SPECT and 21 minutes with conventional SPECT. The degree of agreement on diagnostic accuracy between the two systems was 0.97 for stress studies, 0.91 for rest studies, and 0.96 for both studies. Perfusion abnormality scores obtained using IQ SPECT and conventional SPECT were not significantly different: SSS, 9.7±8.8 and 10.1±6.4; SRS, 7.1±6.1 and 7.5±7.3; SDS, 4.0±6.1 and 3.9±4.3, respectively. However, a significant difference was found in functional parameters derived from IQ SPECT and conventional SPECT both after stress and at rest. Mean LVEF was 8% lower using IQ SPECT. Differences in LVEF were found in patients with normal LVEF and patients with reduced LVEF. Functional parameters using accelerated cardiac acquisition with IQ SPECT differ significantly from those obtained with conventional SPECT, while agreement for clinical interpretation of myocardial perfusion scintigraphy with both techniques is high.
Mannarini, Stefania; Boffo, Marilisa
2014-01-01
The present study aimed at the definition of a latent measurement dimension underlying an implicit measure of automatic associations between the concept of mental illness and the psychosocial and biogenetic causal explanatory attributes. To this end, an Implicit Association Test (IAT) assessing the association between the Mental Illness and Physical Illness target categories to the Psychological and Biologic attribute categories, representative of the causal explanation domains, was developed. The IAT presented 22 stimuli (words and pictures) to be categorized into the four categories. After 360 university students completed the IAT, a Many-Facet Rasch Measurement (MFRM) modelling approach was applied. The model specified a person latency parameter and a stimulus latency parameter. Two additional parameters were introduced to denote the order of presentation of the task associative conditions and the general response accuracy. Beyond the overall definition of the latent measurement dimension, the MFRM was also applied to disentangle the effect of the task block order and the general response accuracy on the stimuli response latency. Further, the MFRM allowed detecting any differential functioning of each stimulus in relation to both block ordering and accuracy. The results evidenced: a) the existence of a latency measurement dimension underlying the Mental Illness versus Physical Illness - Implicit Association Test; b) significant effects of block order and accuracy on the overall latency; c) a differential functioning of specific stimuli. The results of the present study can contribute to a better understanding of the functioning of an implicit measure of semantic associations with mental illness and give a first blueprint for the examination of relevant issues in the development of an IAT. PMID:25000406
Delay functions in trip assignment for transport planning process
NASA Astrophysics Data System (ADS)
Leong, Lee Vien
2017-10-01
In the transportation planning process, volume-delay and turn-penalty functions are needed in traffic assignment to determine travel times on road network links. The volume-delay function describes the speed-flow relationship, while the turn-penalty function describes the delay associated with making a turn at an intersection. The volume-delay function used in this study is the revised Bureau of Public Roads (BPR) function with constant parameters α and β of 0.8298 and 3.361, while the turn-penalty functions for signalized intersections were developed based on uniform, random, and overflow delay models. Parameters such as green time, cycle time, and saturation flow were used in the development of the turn-penalty functions. To assess the accuracy of the delay functions, the road network in the areas of Nibong Tebal, Penang and Parit Buntar, Perak was developed and modelled using transportation demand forecasting software. To calibrate the models, phase times and traffic volumes at fourteen signalised intersections within the study area were collected during morning and evening peak hours. The assigned volumes predicted using the revised BPR function and the developed turn-penalty functions show close agreement with the actual recorded traffic volumes, with accuracies ranging from 80.08% to 93.04% for the morning peak model and from 75.59% to 95.33% for the evening peak model. For the yield left-turn lanes, the lowest accuracies obtained for the morning and evening peak models were 60.94% and 69.74% respectively, while the highest accuracy obtained for both models was 100%. Therefore, it can be concluded that the development and utilisation of delay functions based on local road conditions are important, as localised delay functions can produce better estimates of link travel times and hence better planning for future scenarios.
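The revised BPR volume-delay function with the calibrated constants quoted above can be written directly; the function and argument names here are illustrative:

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.8298, beta=3.361):
    # Revised BPR volume-delay function: t = t0 * (1 + alpha * (v/c)^beta),
    # with the study's calibrated alpha and beta as defaults
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

# A link with a 10-minute free-flow time, loaded exactly to capacity,
# incurs an extra alpha * t0 of delay (about 8.3 minutes here)
t_congested = bpr_travel_time(t0=10.0, volume=1800.0, capacity=1800.0)
```

Because β is large (3.361), delay stays close to the free-flow time until the volume-to-capacity ratio approaches 1, then grows steeply, which is the intended congestion behavior.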
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. 
Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
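The kind of kernel-function/parameter sweep described above can be sketched as follows, using synthetic two-class data in place of the EEG feature vectors; the radius grid and dataset are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def gaussian_kernel(X, Y, r):
    # Gaussian RBF: k(x, y) = exp(-||x - y||^2 / (2 r^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * r ** 2))

def exponential_kernel(X, Y, r):
    # Exponential RBF: k(x, y) = exp(-||x - y|| / (2 r^2))
    d = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1))
    return np.exp(-d / (2.0 * r ** 2))

# Stand-in two-class data in place of the EEG feature vectors
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

scores = {}
for name, kern in (("gaussian", gaussian_kernel), ("exponential", exponential_kernel)):
    for r in (0.5, 1.0, 2.0, 4.0):
        clf = SVC(kernel=lambda A, B, k=kern, r=r: k(A, B, r))
        scores[(name, r)] = cross_val_score(clf, X, y, cv=5).mean()

best_config = max(scores, key=scores.get)
```

Plotting the cross-validation score against the kernel radius for each kernel family gives exactly the kind of sensitivity profile the study inspects.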
Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong
2017-11-20
A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured BRDF of three kinds of samples and simulated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.
Meshless Solution of the Problem on the Static Behavior of Thin and Thick Laminated Composite Beams
NASA Astrophysics Data System (ADS)
Xiang, S.; Kang, G. W.
2018-03-01
For the first time, the static behavior of laminated composite beams is analyzed using the meshless collocation method based on a thin-plate-spline radial basis function. In the approximation of a partial differential equation by using a radial basis function, the shape parameter has an important role in ensuring the numerical accuracy. The choice of a shape parameter in the thin plate spline radial basis function is easier than in other radial basis functions. The governing differential equations are derived based on Reddy's third-order shear deformation theory. Numerical results are obtained for symmetric cross-ply laminated composite beams with simple-simple and cantilever boundary conditions under a uniform load. The results found are compared with available published ones and demonstrate the accuracy of the present method.
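A minimal illustration of thin-plate-spline RBF approximation (interpolation of a smooth field along the beam axis, rather than the full collocation solution of the governing beam equations) can be sketched with SciPy:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Collocation-style nodes along the beam axis and a smooth test
# deflection field standing in for the beam solution
x_nodes = np.linspace(0.0, 1.0, 15)[:, None]
f_nodes = np.sin(np.pi * x_nodes[:, 0])

# Thin-plate-spline RBF interpolant; SciPy appends the low-order
# polynomial required for conditional positive definiteness
interp = RBFInterpolator(x_nodes, f_nodes, kernel="thin_plate_spline")

x_eval = np.linspace(0.0, 1.0, 101)[:, None]
max_err = np.max(np.abs(interp(x_eval) - np.sin(np.pi * x_eval[:, 0])))
```

Unlike a Gaussian RBF, whose width must be tuned carefully, the thin plate spline has no free width parameter in this form, which reflects the paper's point that its shape parameter is easier to choose.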
Left atrial strain predicts hemodynamic parameters in cardiovascular patients.
Hewing, Bernd; Theres, Lena; Spethmann, Sebastian; Stangl, Karl; Dreger, Henryk; Knebel, Fabian
2017-08-01
We aimed to evaluate the predictive value of left atrial (LA) reservoir, conduit, and contractile function parameters as assessed by speckle tracking echocardiography (STE) for invasively measured hemodynamic parameters in a patient cohort with myocardial and valvular diseases. Sixty-nine patients undergoing invasive hemodynamic assessment were enrolled into the study. Invasive hemodynamic parameters were obtained by left and right heart catheterization. Transthoracic echocardiography assessment of LA reservoir, conduit, and contractile function was performed by STE. Forty-nine patients had sinus rhythm (SR) and 20 patients had permanent atrial fibrillation (AF). AF patients had significantly reduced LA reservoir function compared to SR patients. In patients with SR, LA reservoir, conduit, and contractile function inversely correlated with pulmonary capillary wedge pressure (PCWP), left ventricular end-diastolic pressure, and mean pulmonary artery pressure (PAP), and showed a moderate association with cardiac index. In AF patients, there were no significant correlations between LA reservoir function and invasively obtained hemodynamic parameters. In SR patients, LA contractile function with a cutoff value of 16.0% had the highest diagnostic accuracy (area under the curve, AUC: 0.895) to predict PCWP ≥18 mm Hg compared to the weaker diagnostic accuracy of average E/E' ratio with an AUC of 0.786 at a cutoff value of 14.3. In multivariate analysis, LA contractile function remained significantly associated with PCWP ≥18 mm Hg. In a cohort of patients with a broad spectrum of cardiovascular diseases LA strain shows a valuable prediction of hemodynamic parameters, specifically LV filling pressures, in the presence of SR. © 2017, Wiley Periodicals, Inc.
Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki
2014-05-21
In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noises and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating the accuracy, parameter dependencies, and stability in applications to liquid systems. To conduct this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis of the discussions on the numerical investigations. It also gave a new viewpoint between the excess energy error and the damping effect by the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chlorine ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for a larger value of multipole moment l currently induced until l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is wholly effective in the total accuracy if the theoretical moment l is smaller than or equal to a system intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed less accuracy in l = 4. We demonstrated the origins of parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. 
On raising the moment l, we observed that smaller values of the damping parameter provided more accurate results and smoother behavior with respect to the cutoff length. These features can be explained, on the basis of the theoretical error analyses, by the facts that the excess energy accuracy is improved with increasing l and that the total accuracy improvement within l ⩽ L is facilitated by a small damping parameter. Although the accuracy was fundamentally similar to the ion system, the bulk water system exhibited distinguishable quantitative behaviors. A smaller damping parameter was effective at all practical cutoff distances, and this fact can be interpreted by the reduction of the excess subset. A lower moment was advantageous for the energy accuracy, where l = 1 was slightly superior to l = 2 in this system. However, the method with l = 2 (viz., the zero-quadrupole sum) gave accurate results for the radial distribution function. We confirmed the stability of the numerical integration for MD simulations employing the ZM scheme. This result is supported by the sufficient smoothness of the energy function. Along with the smoothness, the pairwise feature and the allowance of the atom-based cutoff mode in the energy formula lead to an exactly zero total force, ensuring total-momentum conservation for typical MD equations of motion.
Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.
Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto
2010-05-01
A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar separable LG-CH functions span the same space as the 2-D Cartesian separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two sets of expansion coefficients, maximum likelihood (ML) estimates of the orientation and cross-section parameters of 1-D patterns are obtained by projecting them into a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross-section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross-section parameters. The accuracy of the conditional ML estimator is compared to that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and 2-D HG expansions is also provided.
NASA Technical Reports Server (NTRS)
Rizk, Magdi H.
1988-01-01
A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
On the accuracy of the Padé-resummed master equation approach to dissipative quantum dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-Ta; Reichman, David R.; Berkelbach, Timothy C.
2016-04-21
Well-defined criteria are proposed for assessing the accuracy of quantum master equations whose memory functions are approximated by Padé resummation of the first two moments in the electronic coupling. These criteria partition the parameter space into distinct levels of expected accuracy, ranging from quantitatively accurate regimes to regions of parameter space where the approach is not expected to be applicable. Extensive comparison of Padé-resummed master equations with numerically exact results in the context of the spin-boson model demonstrates that the proposed criteria correctly demarcate the regions of parameter space where the Padé approximation is reliable. The applicability analysis we present is not confined to the specifics of the Hamiltonian under consideration and should provide guidelines for other classes of resummation techniques.
SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.
Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru
2014-01-01
Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances, and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
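A hedged sketch of the SVM-RFE plus parameter-tuning pipeline follows, using scikit-learn's Wine dataset as a stand-in for the Dermatology/Zoo data and a plain exhaustive grid in place of the Taguchi orthogonal array:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)  # stand-in for the Dermatology/Zoo data

# SVM-RFE: rank features with a linear SVM and keep the top half
selector = RFE(SVC(kernel="linear"), n_features_to_select=X.shape[1] // 2)
mask = selector.fit(StandardScaler().fit_transform(X), y).support_

# Plain grid search over (C, gamma) in place of the Taguchi array
best_acc, best_params = 0.0, None
for C in (0.1, 1.0, 10.0):
    for gamma in (0.01, 0.1, 1.0):
        clf = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
        acc = cross_val_score(clf, X[:, mask], y, cv=5).mean()
        if acc > best_acc:
            best_acc, best_params = acc, (C, gamma)
```

A Taguchi orthogonal array would sample the same (C, γ) space with far fewer trials; the exhaustive grid above is the simplest substitute for illustration.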
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2013-01-01
We performed comparative studies to establish favorable spectral regions and measurement wavelength combinations in alternative bands of CO2 and O2 for the sensing of CO2 mixing ratios (XCO2) in missions such as ASCENDS. The analysis employed several simulation approaches, including separate-layer calculations based on pre-analyzed atmospheric data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), and the line-by-line radiative transfer model (LBLRTM), to obtain achievable accuracy estimates as a function of altitude and for the total path over an annual span of variations in atmospheric parameters. Separate-layer error estimates also allowed investigation of the uncertainties in the weighting functions at varying altitudes and atmospheric conditions. The parameters influencing the measurement accuracy were analyzed independently and included temperature sensitivity, water vapor interference, selection of favorable weighting functions, excitation wavelength stability, and other factors. The results were used to identify favorable spectral regions and combinations of on/off-line wavelengths leading to reduced interference and improved total accuracy.
Assessment of craniometric traits in South Indian dry skulls for sex determination.
Ramamoorthy, Balakrishnan; Pai, Mangala M; Prabhu, Latha V; Muralimanju, B V; Rai, Rajalakshmi
2016-01-01
The skeleton plays an important role in sex determination in forensic anthropology. The skull is considered second best after the pelvic bone for sex determination due to its better retention of morphological features. Different populations have varying skeletal characteristics, making population-specific analysis essential for sex determination. Hence, the objective of this investigation was to maximize the accuracy of sex determination using cranial parameters of adult skulls in a South Indian population and to provide baseline data for sex determination in South India. Seventy adult preserved human skulls were taken and, based on morphological traits, were classified into 43 male skulls and 27 female skulls. A total of 26 craniometric parameters were studied. The data were analyzed using the SPSS discriminant function. Stepwise, multivariate, and univariate discriminant function analyses gave accuracies of 77.1%, 85.7%, and 72.9%, respectively. Multivariate direct discriminant function analysis classified skulls into male and female with the highest level of accuracy. Using stepwise discriminant function analysis, the most dimorphic variable for determining the sex of the skull was biauricular breadth, followed by weight. Subjecting the best dimorphic variables to univariate discriminant analysis yielded high levels of accuracy in assessing sexual dimorphism. The high classification accuracies obtained in this study indicate a high level of sexual dimorphism in the crania and establish specific discriminant equations for sex determination in the South Indian population. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
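A discriminant-function analysis of this kind can be sketched as follows; the craniometric group means and spreads are invented for illustration and are not the study's measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical craniometric variables (biauricular breadth in mm and
# skull weight in g) for 43 male and 27 female skulls; the group means
# and spreads are illustrative assumptions
males = rng.normal(loc=[125.0, 700.0], scale=[5.0, 60.0], size=(43, 2))
females = rng.normal(loc=[118.0, 600.0], scale=[5.0, 60.0], size=(27, 2))
X = np.vstack([males, females])
y = np.array([1] * 43 + [0] * 27)

# Direct (all-variables) discriminant function, analogous to SPSS DFA
lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)  # resubstitution classification accuracy
```

The classification accuracy reported here, like the study's percentages, is the fraction of skulls whose predicted sex matches the morphological classification.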
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, the underlying gravity database contains many large data gaps. These gaps are filled initially using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature parameter of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
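A minimal sketch of least-squares prediction with a generalized Hirvonen covariance model follows; the distances, variances, and parameter values are invented for illustration:

```python
import numpy as np

def hirvonen_cov(s, C0, d, p):
    # Generalized Hirvonen covariance model C(s) = C0 / (1 + (s/d)^2)^p;
    # the exponent p is tied to the curvature parameter at the origin
    return C0 / (1.0 + (s / d) ** 2) ** p

def ls_predict(s_po, s_oo, obs, noise_var, C0, d, p):
    # Unequal-weight least-squares prediction (collocation):
    #   x_hat = C_po (C_oo + D)^{-1} l
    C_oo = hirvonen_cov(s_oo, C0, d, p) + np.diag(noise_var)
    C_po = hirvonen_cov(s_po, C0, d, p)
    return C_po @ np.linalg.solve(C_oo, obs)

# Predict a gravity anomaly inside a gap from three nearby observations
s_oo = np.array([[0.0, 30.0, 50.0],
                 [30.0, 0.0, 20.0],
                 [50.0, 20.0, 0.0]])   # inter-observation distances (km)
s_po = np.array([[10.0, 25.0, 40.0]])  # prediction-to-observation distances
obs = np.array([12.0, 8.0, 5.0])       # anomalies (mGal)
pred = ls_predict(s_po, s_oo, obs, noise_var=np.array([1.0, 1.0, 1.0]),
                  C0=100.0, d=40.0, p=1.0)
```

Varying p changes the curvature of C(s) at the origin and hence how strongly distant observations influence points inside a gap, which is the sensitivity the paper investigates.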
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are proposed in this paper. An Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing image with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to test the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM), Minimum Distance Classification (MDC), and the SVM classifier using a Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space is capable of markedly improving classification accuracy.
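A wavelet kernel for SVM can be sketched as below, using a Morlet-type mother wavelet (a common choice in the wavelet-kernel literature) rather than the Coiflet kernel of the study, and synthetic data in place of the hyperspectral imagery:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def morlet_wavelet_kernel(X, Y, a=1.0):
    # Translation-invariant wavelet kernel built from a Morlet-type
    # mother wavelet h(u) = cos(1.75 u) exp(-u^2 / 2):
    #   k(x, y) = prod_i h((x_i - y_i) / a)
    U = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-(U ** 2) / 2.0), axis=-1)

# Stand-in feature vectors in place of the hyperspectral band values
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
clf = SVC(kernel=lambda A, B: morlet_wavelet_kernel(A, B, a=4.0))
mean_acc = cross_val_score(clf, X, y, cv=5).mean()
```

The dilation parameter a plays the role of the kernel parameter discussed in the abstract; sweeping it (and swapping in other admissible mother wavelets) reproduces the kind of kernel selection the paper performs.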
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
USDA-ARS?s Scientific Manuscript database
Runoff travel time, which is a function of watershed and storm characteristics, is an important parameter affecting the prediction accuracy of hydrologic models. Although time of concentration (tc) is the most widely used time parameter, it has multiple conceptual and computational definitions. Most ...
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by sheer numerical computation, the so-called 'Brute force', resulting in high-speed simultaneous estimation of a large number of parameter values. However, these advances have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated with two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
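The elimination idea, rewriting a system so that some equations involve fewer unknowns, can be illustrated with an ordinary (non-differential) Gröbner basis computation in SymPy; the polynomials below are a generic toy system, not one of the paper's models:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Lexicographic order with x > y eliminates x: the last basis element
# depends on y alone, mirroring the reduction used in differential elimination.
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
eliminated = G.exprs[-1]   # a polynomial in y only (here proportional to 2*y**2 - 1)
```

Solving the eliminated equation first and back-substituting recovers the full solution set, which is exactly how elimination shrinks the estimation problem.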
A method for cone fitting based on certain sampling strategy in CMM metrology
NASA Astrophysics Data System (ADS)
Zhang, Li; Guo, Chaopeng
2018-04-01
A method for cone fitting in engineering is explored and implemented to overcome the shortcomings of the current fitting method, in which the calculation of the initial geometric parameters is imprecise, causing poor accuracy in surface fitting. A geometric distance function of the cone is constructed first, then a certain sampling strategy is defined to calculate the initial geometric parameters, and afterwards the nonlinear least-squares method is used to fit the surface. An experiment is designed to verify the accuracy of the method. The experimental data show that the proposed method can obtain the initial geometric parameters simply and efficiently, fit the surface precisely, and provide a new, accurate approach to cone fitting in coordinate measurement.
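The geometric-distance formulation can be sketched as follows (a minimal parameterization with apex position, axis direction angles, and half-angle; the paper's sampling strategy for the initial guess is not reproduced here, so a perturbed guess stands in for it):

```python
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts):
    # params: apex (ax, ay, az), axis direction angles (theta, phi), half-angle alpha
    ax, ay, az, theta, phi, alpha = params
    apex = np.array([ax, ay, az])
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    v = pts - apex
    h = v @ d                                            # height along the axis
    r = np.sqrt(np.maximum(np.sum(v * v, axis=1) - h * h, 0.0))  # radial distance
    # signed orthogonal distance from each point to the cone surface
    return r * np.cos(alpha) - h * np.sin(alpha)

def fit_cone(pts, p0):
    # Nonlinear least-squares refinement starting from initial parameters p0
    return least_squares(cone_residuals, p0, args=(pts,)).x
```

Points lying exactly on the cone give zero residual, so the sum of squared residuals is a natural geometric fitting objective.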
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu
In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
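The core of RBF-FD, solving a small local system for stencil weights that apply a differential operator at one node, can be sketched in 1-D (a generic illustration with a cubic polyharmonic spline and polynomial augmentation; it does not reproduce the paper's interface treatment):

```python
import numpy as np

def rbf_fd_weights_d2(nodes, x0):
    # RBF-FD weights for d^2/dx^2 at x0, using the polyharmonic spline
    # phi(r) = r^3 augmented with polynomials up to degree 2 (standard practice)
    n = len(nodes)
    r = np.abs(nodes[:, None] - nodes[None, :])
    A = r ** 3                                  # RBF interpolation matrix
    P = np.vander(nodes, 3, increasing=True)    # polynomial block [1, x, x^2]
    M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    b = 6.0 * np.abs(nodes - x0)                # d2/dx2 of |x - xi|^3 at x0
    c = np.array([0.0, 0.0, 2.0])               # d2/dx2 of [1, x, x^2] at x0
    return np.linalg.solve(M, np.concatenate([b, c]))[:n]
```

On a three-node stencil the polynomial block already fixes the classical finite-difference weights; in practice RBF-FD stencils carry many more nodes than polynomial terms, and the RBF part then determines the remaining degrees of freedom on scattered (mesh-free) nodes.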
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
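A correction function of the kind such a calibration produces can be fitted by linear least squares. The sketch below assumes a simple cyclic-error model with a hypothetical unit length U; a full calibration would address further systematic components as well:

```python
import numpy as np

def fit_cyclic_correction(d, err, U=10.0):
    # Fit err(d) ~ a*sin(2*pi*d/U) + b*cos(2*pi*d/U) + c, the classic cyclic
    # EDM error model; the unit length U is instrument-specific (assumed here)
    w = 2.0 * np.pi * d / U
    B = np.column_stack([np.sin(w), np.cos(w), np.ones_like(d)])
    coef, *_ = np.linalg.lstsq(B, err, rcond=None)
    return coef, B @ coef          # model coefficients and fitted corrections
```

Subtracting the fitted correction from subsequent measurements removes the systematic part of the error, which is why repetition alone cannot achieve the same reduction.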
Nonparametric tests for equality of psychometric functions.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2017-12-07
Many empirical studies measure psychometric functions (curves describing how observers' performance varies with stimulus magnitude) because these functions capture the effects of experimental conditions. To assess these effects, parametric curves are often fitted to the data and comparisons are carried out by testing for equality of mean parameter estimates across conditions. This approach is parametric and, thus, vulnerable to violations of the implied assumptions. Furthermore, testing for equality of means of parameters may be misleading: Psychometric functions may vary meaningfully across conditions on an observer-by-observer basis with no effect on the mean values of the estimated parameters. Alternative approaches to assess equality of psychometric functions per se are thus needed. This paper compares three nonparametric tests that are applicable in all situations of interest: The existing generalized Mantel-Haenszel test, a generalization of the Berry-Mielke test that was developed here, and a split variant of the generalized Mantel-Haenszel test also developed here. Their statistical properties (accuracy and power) are studied via simulation and the results show that all tests are indistinguishable as to accuracy but they differ non-uniformly as to power. Empirical use of the tests is illustrated via analyses of published data sets and practical recommendations are given. The computer code in MATLAB and R to conduct these tests is available as Electronic Supplemental Material.
New, more accurate calculations of the ground state potential energy surface of H3+.
Pavanello, Michele; Tung, Wei-Cheng; Leonarski, Filip; Adamowicz, Ludwik
2009-02-21
Explicitly correlated Gaussian functions with floating centers have been employed to recalculate the ground state potential energy surface (PES) of the H3+ ion with much higher accuracy than previously achieved. The nonlinear parameters of the Gaussians (i.e., the exponents and the centers) have been variationally optimized with a procedure employing the analytical gradient of the energy with respect to these parameters. The basis sets for calculating new PES points were guessed from the points already calculated. This allowed us to considerably speed up the calculations and achieve very high accuracy in the results.
NASA Technical Reports Server (NTRS)
Thelen, Brian J.; Paxman, Richard G.
1994-01-01
The method of phase diversity has been used in the context of incoherent imaging to jointly estimate an object being imaged and the phase aberrations induced by atmospheric turbulence. The method requires a parametric model for the phase-aberration function. Typically, the parameters are coefficients of a finite set of basis functions. Care must be taken in selecting a parameterization that properly balances accuracy in representing the phase-aberration function with stability of the estimates. It is well known that overparameterization can result in unstable estimates; thus a certain amount of model mismatch is often desirable. We derive expressions that quantify the bias and variance in object and aberration estimates as a function of parameter dimension.
Torres, Edmanuel; DiLabio, Gino A
2013-08-13
Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters that accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate between CCSD(T) and classical molecular mechanics, which can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies produced by these Lennard-Jones parameters is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
A GSA-SVM Hybrid System for Classification of Binary Problems
NASA Astrophysics Data System (ADS)
Sarafrazi, Soroor; Nezamabadi-pour, Hossein; Barahman, Mojgan
2011-06-01
This paper hybridizes the gravitational search algorithm (GSA) with the support vector machine (SVM) to create a novel GSA-SVM hybrid system that improves classification accuracy on binary problems. GSA is an optimization heuristic used to optimize the value of the SVM kernel parameter (in this paper, the radial basis function (RBF) is chosen as the kernel function). The experimental results show that this new approach can achieve high classification accuracy and is comparable to or better than particle swarm optimization (PSO)-SVM and genetic algorithm (GA)-SVM, two other hybrid classification systems.
NASA Technical Reports Server (NTRS)
Slater, P. N. (Principal Investigator)
1980-01-01
The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that have to be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included the consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two wavelength and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) for the determination of the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bi-directional reflectance distribution function data and polarization effects; and (2) for the determination of the spectral reflectances of ground features.
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fits an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the mere possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data with the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
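The two-parameter logistic IRF, and the conventional curve fit that the equal-area algorithm is shown to be equivalent to, can be sketched as follows (synthetic proportions-correct data; the equal-area estimator itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def irf_2pl(theta, a, b):
    # Two-parameter logistic IRF: a = discrimination, b = difficulty
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Fit (a, b) to empirical proportions correct at sampled ability levels
theta = np.linspace(-3.0, 3.0, 13)
p_obs = irf_2pl(theta, 1.5, 0.5)               # synthetic, noise-free data
(a_hat, b_hat), _ = curve_fit(irf_2pl, theta, p_obs, p0=(1.0, 0.0))
```

With noise-free data the fit recovers the generating parameters; with real response data the two approaches differ in how they weight discrepancies between the empirical and model IRFs.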
Identifying aMCI with Functional Connectivity Network Characteristics based on Subtle AAL Atlas.
Zhuo, Zhizheng; Mo, Xiao; Ma, Xiangyu; Han, Ying; Li, Haiyun
2018-05-02
To investigate the subtle functional connectivity alterations of aMCI based on an AAL atlas with 1024 regions (AAL_1024 atlas). Functional MRI images of 32 aMCI patients (male/female: 15/17, ages: 66.8±8.36 y) and 35 normal controls (male/female: 13/22, ages: 62.4±8.14 y) were obtained in this study. First, functional connectivity networks were constructed by Pearson's correlation based on the subtle AAL_1024 atlas. Then, local and global network parameters were calculated from the thresholded functional connectivity matrices. Finally, multiple-comparison analysis was performed on these parameters to find the functional network alterations of aMCI, and several classifiers were adopted to identify aMCI using the network parameters. More subtle local brain functional alterations were detected by using the AAL_1024 atlas, and predominant nodes, including the hippocampus, inferior temporal gyrus, and inferior parietal gyrus, were identified that were not detected with the AAL_90 atlas. The identification of aMCI from normal controls was significantly improved, with the highest accuracy (98.51%), sensitivity (100%), and specificity (97.14%), compared to those obtained using the AAL_90 atlas (88.06%, 84.38%, and 91.43%, respectively). More subtle functional connectivity alterations of aMCI could be found based on the AAL_1024 atlas than based on the AAL_90 atlas, and the identification of aMCI could also be improved.
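The network-construction pipeline (correlation matrix, threshold, then local and global graph parameters) can be sketched as follows; the region count and threshold below are placeholders, not the study's settings:

```python
import numpy as np

def network_parameters(ts, threshold=0.3):
    # ts: (n_regions, n_timepoints) array of regional BOLD time series.
    # Build a functional connectivity network by thresholding the Pearson
    # correlation matrix, then compute simple local/global graph parameters.
    C = np.corrcoef(ts)
    A = (np.abs(C) >= threshold).astype(int)
    np.fill_diagonal(A, 0)
    degree = A.sum(axis=1)                       # local parameter: node degree
    n = A.shape[0]
    density = A.sum() / (n * (n - 1))            # global parameter: edge density
    return A, degree, density
```

Group comparison and classification then operate on vectors of such parameters rather than on the raw time series.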
NASA Astrophysics Data System (ADS)
Post, Anouk L.; Zhang, Xu; Bosschaart, Nienke; Van Leeuwen, Ton G.; Sterenborg, Henricus J. C. M.; Faber, Dirk J.
2016-03-01
Both Optical Coherence Tomography (OCT) and Single Fiber Reflectance Spectroscopy (SFR) are used to determine various optical properties of tissue. We developed a method combining these two techniques to measure the scattering anisotropy (g1) and γ (= (1 - g2)/(1 - g1)), related to the 1st and 2nd order moments of the phase function. The phase function is intimately associated with the cellular organization and ultrastructure of tissue, physical parameters that may change during disease onset and progression. Quantification of these parameters may therefore allow for improved non-invasive, in vivo discrimination between healthy and diseased tissue. With SFR the reduced scattering coefficient and γ can be extracted from the reflectance spectrum (Kanick et al., Biomedical Optics Express 2(6), 2011). With OCT the scattering coefficient can be extracted from the signal as a function of depth (Faber et al., Optics Express 12(19), 2004). Consequently, by combining SFR and OCT measurements at the same wavelengths, the scattering anisotropy (g) can be resolved using µs' = µs(1 - g). We performed measurements on a suspension of silica spheres as a proof of principle. The SFR model for the reflectance as a function of the reduced scattering coefficient and γ is based on semi-empirical modelling. These models feature Monte-Carlo (MC) based model constants. The validity of these constants - and thus the accuracy of the estimated parameters - depends on the phase function employed in the MC simulations. Since the phase function is not known when measuring in tissue, we will investigate the influence of assuming an incorrect phase function on the accuracy of the derived parameters.
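The combination step itself is simple arithmetic once both instruments report at the same wavelength (symbols follow the abstract):

```python
def anisotropy_from_scattering(mu_s, mu_s_reduced):
    # Scattering anisotropy from mu_s (OCT) and mu_s' (SFR): mu_s' = mu_s (1 - g)
    return 1.0 - mu_s_reduced / mu_s

def gamma_parameter(g1, g2):
    # gamma = (1 - g2) / (1 - g1), built from the first two Legendre
    # moments of the scattering phase function
    return (1.0 - g2) / (1.0 - g1)
```

For example, mu_s = 10 mm^-1 from OCT together with mu_s' = 1 mm^-1 from SFR implies g = 0.9, a typical value for tissue-like forward scattering.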
Wang, Miaomiao; Li, Bofeng
2016-01-01
An empirical tropospheric delay model, together with a mapping function, is commonly used to correct tropospheric errors in global navigation satellite system (GNSS) processing. As is well known, the accuracy of tropospheric delay models relies mainly on the correction efficiency for tropospheric wet delays. In this paper, we evaluate the accuracy of three tropospheric delay models combined with five mapping functions for wet delay calculation. The evaluations are conducted by comparing their slant wet delays with those measured by a water vapor radiometer using its satellite-tracking function (data collected with a large liquid water path are removed). For all 15 combinations of the three tropospheric models and five mapping functions, their accuracies as a function of elevation are statistically analyzed using nine days of data in two scenarios, with and without meteorological data. The results show that (1) with or without meteorological data, there is no practical difference between the mapping functions, i.e., Chao, Ifadis, Vienna Mapping Function 1 (VMF1), Niell Mapping Function (NMF), and MTT Mapping Function (MTT); (2) without meteorological data, UNB3 is much better than the Saastamoinen and Hopfield models, while the Saastamoinen model performs slightly better than the Hopfield model; (3) with meteorological data, the accuracies of all three tropospheric delay models improve to comparable levels, especially at lower elevations. In addition, kinematic precise point positioning, in which no parameter is set up for tropospheric delay modification, is conducted to further evaluate the performance of the tropospheric delay models in positioning accuracy. It is shown that the UNB3 model is best and can achieve about 10 cm accuracy for the N and E coordinate components and 20 cm accuracy for the U component, whether or not meteorological data are available. This accuracy can be obtained by the Saastamoinen model only when meteorological data are available; without them, the U-component accuracy degrades to 46 cm. PMID:26848662
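As a concrete example of the models compared, the Saastamoinen zenith hydrostatic delay can be computed directly from surface meteorological data (the standard formula; the input values in the test are illustrative):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_km):
    # Saastamoinen zenith hydrostatic delay in meters, from surface pressure
    # (hPa), station latitude (rad), and station height (km)
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * height_km)
```

A mapping function then scales this zenith delay to the elevation angle of each satellite, which is why the paper evaluates model and mapping function combinations jointly.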
Cognitive-motivational deficits in ADHD: development of a classification system.
Gupta, Rashmi; Kar, Bhoomika R; Srinivasan, Narayanan
2011-01-01
The classification systems developed so far to detect attention deficit/hyperactivity disorder (ADHD) do not have high sensitivity and specificity. We have developed a classification system based on several neuropsychological tests that measure cognitive-motivational functions that are specifically impaired in ADHD children. A total of 240 children (120 with ADHD and 120 healthy controls) in the age range of 6-9 years and 32 children with Oppositional Defiant Disorder (ODD) (aged 9 years) participated in the study. Stop-Signal, Task-Switching, Attentional Network, and Choice Delay tests were administered to all participants. Receiver operating characteristic (ROC) analysis indicated that the percentage choice of long-delay reward best discriminated ADHD children from healthy controls. Single parameters were not helpful in differentially classifying ADHD and ODD. Multinomial logistic regression (MLR) was performed with multiple parameters (data fusion), which produced improved overall classification accuracy. A combination of stop-signal reaction time, post-error slowing, mean delay, switch cost, and percentage choice of long-delay reward produced an overall classification accuracy of 97.8%; with internal validation, the overall accuracy was 92.2%. Combining parameters from different tests of control functions not only enabled us to accurately classify ADHD children versus healthy controls but also to differentiate them from children with ODD. These results have implications for theories of ADHD.
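The single-parameter ROC analysis rests on ranking cases by one test score; the area under the ROC curve can be computed from its probabilistic interpretation (a generic sketch, not the authors' exact analysis):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    # Area under the ROC curve via its probabilistic interpretation:
    # P(score of a random positive case > score of a random negative case),
    # with ties counted as 1/2
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    n_pairs = sp.shape[0] * sn.shape[1]
    return ((sp > sn).sum() + 0.5 * (sp == sn).sum()) / n_pairs
```

An AUC of 1.0 means the parameter separates the groups perfectly, while 0.5 means it carries no discriminative information, which is the scale on which candidate parameters are compared before fusing them in a regression model.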
A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems
NASA Astrophysics Data System (ADS)
Liu, Zuolin; Xu, Jian
2018-04-01
In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function of integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. Then, for linear or nonlinear systems, the loss function can be minimised with the traditional least-squares algorithm or its iterative variant, respectively. Such a method can effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, this method still guarantees high accuracy in identifying linear and nonlinear parameters.
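The idea of trading the second derivative for integrals against piecewise-linear (hat) test functions can be sketched for a single-degree-of-freedom system x'' + c x' + k x = f (an illustrative reconstruction on a uniform grid; the paper's formulation is more general):

```python
import numpy as np

def identify_ck(t, x, f):
    # Identify damping c and stiffness k in x'' + c x' + k x = f from sampled
    # data. Testing against interior hat functions and integrating by parts
    # avoids differentiating x twice, in the spirit of the Galerkin scheme.
    h = t[1] - t[0]
    xm, x0, xp = x[:-2], x[1:-1], x[2:]
    fm, f0, fp = f[:-2], f[1:-1], f[2:]
    a2 = (xp - 2.0 * x0 + xm) / h                  # -int x' phi'  (weak x'')
    a1 = (xp - xm) / 2.0                           #  int x' phi
    a0 = h * (xm / 6 + 2 * x0 / 3 + xp / 6)        #  int x  phi
    b = h * (fm / 6 + 2 * f0 / 3 + fp / 6)         #  int f  phi
    A = np.column_stack([a1, a0])
    ck, *_ = np.linalg.lstsq(A, b - a2, rcond=None)
    return ck                                      # array [c, k]
```

Each interior node contributes one integrated equation, so the least-squares system uses the whole time history rather than pointwise derivative estimates, which is what makes the approach robust to sparse or coarsely sampled data.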
Hyperspectral recognition of processing tomato early blight based on GA and SVM
NASA Astrophysics Data System (ADS)
Yin, Xiaojun; Zhao, SiFeng
2013-03-01
Processing tomato early blight seriously affects the yield and quality of the crop. We measured the leaf spectra of processing tomato at different severity levels of early blight and used the sensitive bands as the input vector of a support vector machine (SVM). By optimizing the SVM parameters with a genetic algorithm (GA), we could recognize the different disease severity levels. The results show that the sensitive bands for the different severity levels of processing tomato early blight are 628-643 nm and 689-692 nm. With these sensitive bands as the GA-SVM input vector, the best penalty parameter is 0.129 and the best kernel function parameter is 3.479. Classification training and testing were performed with polynomial, radial basis function, and sigmoid kernels; the best classification model is the SVM with the radial basis function kernel, with a training accuracy of 84.615% and a testing accuracy of 80.681%. Combining GA and SVM achieves multi-class recognition of processing tomato early blight and provides technical support for predicting its occurrence, development, and diffusion over large areas.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve system measurement accuracy. The reconstruction result of stereo deflectometry is obtained by integrating the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and incomplete distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen via reflection in a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror is reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.
SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments.
Youngblut, Nicholas D; Barnett, Samuel E; Buckley, Daniel H
2018-01-01
DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments.
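The balanced-accuracy definition used in the evaluation (the product of sensitivity and specificity) is a one-liner:

```python
def balanced_accuracy(tp, fn, tn, fp):
    # SIPSim's balanced accuracy: sensitivity * specificity, computed from
    # the confusion-matrix counts of labeled-taxon detection
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity * specificity
```

Taking the product (rather than, say, the mean) penalizes analyses that trade one error type for the other, so a method scores well only when both false negatives and false positives are rare.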
Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun
2018-05-01
To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) method for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage high spatial-resolution liver dynamic contrast-enhanced images using golden angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide results closer to the true values and lower root mean square error of estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans provided fair image quality for both the 2D images used to extract arterial and portal venous input functions and the 3D whole-liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA provides improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment
NASA Technical Reports Server (NTRS)
Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A genetic algorithm procedure is developed and implemented for fitting parameters of many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces than even the published S-W potential.
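The fitting loop described above can be sketched as a toy genetic algorithm. The model, data, and GA settings below are illustrative stand-ins, not the JavaGenes implementation or the Stillinger-Weber functional form:

```python
import random

def fitness(params, xs, targets, model):
    # Negative sum of squared errors against target energies: larger is better.
    return -sum((model(x, params) - t) ** 2 for x, t in zip(xs, targets))

def evolve(xs, targets, model, n_params, pop_size=40, generations=60,
           mutation=0.2, seed=0):
    """Toy generational GA: elitism, blend crossover between two elite
    parents, Gaussian mutation. A sketch of the fitting loop only."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, xs, targets, model), reverse=True)
        elite = pop[: pop_size // 4]          # survivors kept unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2.0 + rng.gauss(0.0, mutation)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, xs, targets, model))
```

Fitting the illustrative two-parameter model p0*x + p1*x^2 to data generated with parameters (2, -1) drives the squared-error fitness close to zero.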
Optimization of multilayer neural network parameters for speaker recognition
NASA Astrophysics Data System (ADS)
Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka
2016-05-01
This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person within a known set of speakers, i.e., to determine whether the voice of an unknown (wanted) speaker belongs to a group of reference speakers in the voice database. One requirement was to develop a text-independent system, meaning that the wanted person is classified regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCC), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing, and validation sets in a 70/15/15% ratio. The result of the research described in this article is a distinct parameter setting for the multilayer neural network for four speakers.
Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard
2013-05-01
Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html
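A minimal sketch of the idea of a smoothed annotation prior used to screen negative examples. The Beta-Bernoulli form and the threshold rule here are illustrative assumptions, not the paper's actual prior or the GeneMANIA integration:

```python
def function_prior(n_annotated, n_proteins, alpha=1.0, beta=1.0):
    """Beta-Bernoulli posterior mean: smoothed fraction of proteins
    observed to carry a given function. A hypothetical stand-in for the
    paper's parameterizable Bayesian prior."""
    return (n_annotated + alpha) / (n_proteins + alpha + beta)

def select_negatives(proteins, positives, prior, threshold):
    # Treat unannotated proteins as negative examples only when the
    # function is rare enough that missing annotations are unlikely.
    if prior >= threshold:
        return []
    return [p for p in proteins if p not in positives]
```

With 5 of 1000 proteins annotated, the smoothed prior is 6/1002 ≈ 0.006, so unannotated proteins can be taken as negatives at a 1% threshold; for a common function the same rule abstains.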
NASA Astrophysics Data System (ADS)
Khachaturian, A. B.; Nekrasov, A. V.; Bogachev, M. I.
2018-05-01
The authors report computer simulation results on the performance and accuracy of sea wind speed and direction retrieval. The analyzed measurements over the sea surface are made by an airborne microwave Doppler navigation system (DNS) with three Y-configured beams operated as a scatterometer, enhancing its functionality. Single- and double-stage wind measurement procedures are proposed and recommendations for their implementation are described.
Xu, Xin; Goddard, William A
2004-03-02
We derive the form for an exact exchange energy density for a density decaying with Gaussian-like behavior at long range. Based on this, we develop the X3LYP (extended hybrid functional combined with Lee-Yang-Parr correlation functional) extended functional for density functional theory to significantly improve the accuracy for hydrogen-bonded and van der Waals complexes while also improving the accuracy in heats of formation, ionization potentials, electron affinities, and total atomic energies [over the most popular and accurate method, B3LYP (Becke three-parameter hybrid functional combined with Lee-Yang-Parr correlation functional)]. X3LYP also leads to a good description of dipole moments, polarizabilities, and accurate excitation energies from s to d orbitals for transition metal atoms and ions. We suggest that X3LYP will be useful for predicting ligand binding in proteins and DNA.
Deng, Zhimin; Tian, Tianhai
2014-07-29
Advances in systems biology have produced a large number of sophisticated mathematical models for describing the dynamic properties of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observed data points. The imbalance between the number of experimental data points and the number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of the continuous functions. The expanded dataset is the basis for inferring unknown model parameters using various continuous optimization criteria, including the error of simulation only, the error of both simulation and the first derivative, or the error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and higher order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria.
We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
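The continuous criterion can be sketched for a one-parameter decay model x' = -k*x. Linear interpolation stands in for the paper's spline, and a grid search stands in for its optimizer; both are simplifying assumptions:

```python
import math

def interp(ts, ys, t):
    # Piecewise-linear interpolation of the measured time course.
    if t <= ts[0]:
        return ys[0]
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return (1 - w) * ys[i] + w * ys[i + 1]
    return ys[-1]

def continuous_error(k, ts, ys, dense=100, h=1e-4):
    """Continuous absolute-error criterion for the toy model x' = -k*x:
    simulation error plus first-derivative error, both evaluated on an
    interpolated dense grid (the paper's idea, not its exact form)."""
    t_lo, t_hi = ts[0], ts[-1]
    err = 0.0
    for i in range(1, dense):               # interior grid points only
        t = t_lo + (t_hi - t_lo) * i / dense
        data = interp(ts, ys, t)
        d_data = (interp(ts, ys, t + h) - interp(ts, ys, t - h)) / (2 * h)
        sim = ys[0] * math.exp(-k * t)
        d_sim = -k * sim
        err += abs(sim - data) + abs(d_sim - d_data)
    return err

def fit_rate(ts, ys):
    # Grid search over the decay rate using the continuous criterion.
    grid = [i / 100.0 for i in range(1, 301)]
    return min(grid, key=lambda k: continuous_error(k, ts, ys))
```

On six samples of exp(-0.7*t) the grid search recovers a rate close to 0.7, the small residual coming from the linear interpolant.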
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. 
Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and whose mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
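The analytic cancellation of exponential elements can be illustrated on the Fisher normalization constant k/(4*pi*sinh k), whose naive evaluation overflows for large precision parameters (this is a generic stability sketch, not the authors' full marginal likelihood):

```python
import math

def log_sinh(k):
    """log(sinh k) for k > 0 with the growing exponential cancelled
    analytically: sinh k = e^k * (1 - e^(-2k)) / 2, so the logarithm
    never overflows even for very large precision parameters."""
    return k + math.log1p(-math.exp(-2.0 * k)) - math.log(2.0)

def fisher_log_norm(k):
    # log of the Fisher normalization constant k / (4*pi*sinh k),
    # the kind of term that destabilizes a naive likelihood evaluation.
    return math.log(k) - math.log(4.0 * math.pi) - log_sinh(k)
```

A direct math.log(math.sinh(800)) overflows, while log_sinh(800) returns the exact value 800 - log 2 (the correction term underflows harmlessly to zero).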
A LiDAR data-based camera self-calibration method
NASA Astrophysics Data System (ADS)
Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun
2018-07-01
To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters are estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images, establishment of the cost function based on Kruppa equations, and optimization by PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To prevent the optimization from being pushed to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
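A generic PSO sketch of the optimization step. The cost function, bounds, and hyperparameters below are placeholders, not the paper's Kruppa-equation cost or its LiDAR-seeded initialization:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: inertia plus cognitive and
    social pulls toward personal and global bests."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = cost(xs[i])
            if v < pval[i]:                 # update personal best
                pbest[i], pval[i] = list(xs[i]), v
                if v < gval:                # and global best
                    gbest, gval = list(xs[i]), v
    return gbest, gval
```

On a shifted sphere cost the swarm converges tightly to the known minimizer; in the paper's setting, narrowing `bounds` from LiDAR data plays the role of keeping the swarm out of local optima.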
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function
Kayser, Bastian; Mansow-Model, Sebastian; Verrel, Julius; Paul, Friedemann; Brandt, Alexander U.; Schmitz-Hübsch, Tanja
2016-01-01
Background The introduction of low cost optical 3D motion tracking sensors provides new options for effective quantification of motor dysfunction. Objective The present study aimed to evaluate the Kinect V2 sensor against a gold standard motion capture system with respect to accuracy of tracked landmark movements and accuracy and repeatability of derived clinical parameters. Methods Nineteen healthy subjects were concurrently recorded with a Kinect V2 sensor and an optical motion tracking system (Vicon). Six different movement tasks were recorded with 3D full-body kinematics from both systems. Tasks included walking in different conditions, balance and adaptive postural control. After temporal and spatial alignment, agreement of movement signals was described by Pearson’s correlation coefficient and signal to noise ratios per dimension. From these movement signals, 45 clinical parameters were calculated, including ranges of motion, torso sway, movement velocities and cadence. Accuracy of parameters was described as absolute agreement, consistency agreement and limits of agreement. Intra-session reliability of 3 to 5 measurement repetitions was described as repeatability coefficient and standard error of measurement for each system. Results Accuracy of Kinect V2 landmark movements was moderate to excellent and depended on movement dimension, landmark location and performed task. Signal to noise ratio provided information about Kinect V2 landmark stability and indicated larger noise in feet and ankles. Most of the derived clinical parameters showed good to excellent absolute agreement (30 parameters showed ICC(3,1) > 0.7) and consistency (38 parameters showed r > 0.7) between both systems. Conclusion Given that this system is low-cost, portable, and does not require any sensors to be attached to the body, it could provide numerous advantages compared to established marker-based or wearable-sensor-based systems. 
The Kinect V2 has the potential to be used as a reliable and valid clinical measurement tool. PMID:27861541
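The agreement statistics named above can be computed in a few lines. A minimal sketch of Pearson correlation and Bland-Altman 95% limits of agreement between two measurement systems (e.g., Kinect V2 vs. Vicon), not the study's full analysis pipeline:

```python
import math

def pearson_r(a, b):
    # Pearson's correlation coefficient between two equal-length signals.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement: mean paired difference
    +/- 1.96 times its sample standard deviation."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```

For paired parameter estimates [1, 2, 3, 4] vs. [1.1, 2.1, 2.9, 4.1] the limits come out at roughly (-0.246, 0.146), i.e., the two systems agree to within about a quarter unit.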
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
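STAPLE weights each segmentation by its estimated sensitivity and specificity via EM; a plain per-pixel majority vote, sketched below, is the unweighted special case of that fusion idea (illustration only, not the STAPLE algorithm itself):

```python
def majority_vote(masks):
    """Combine binary segmentation masks (equal-sized 2D lists of 0/1)
    by per-pixel majority vote. STAPLE generalizes this by weighting
    each rater with EM-estimated performance parameters."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) * 2 > n else 0
             for c in range(cols)]
            for r in range(rows)]
```

Three 2x2 masks that disagree on single pixels fuse into the consensus mask; with an even number of raters the strict inequality breaks ties toward background.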
Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction
Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang
2016-01-01
Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only suppresses the influence of large peaks but also solves the problem of low correction accuracy when the peak number is high. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has higher accuracy and computational efficiency, and is hardly affected by large peaks, peak number, and wavenumber. PMID:26037638
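The iterative polynomial-fitting scheme that Goldindec builds on can be sketched as fit-then-clip refinement: refit after clipping points above the current fit, so peaks stop pulling the baseline up. This is the classic loop with a plain least-squares cost, not Goldindec's own cost function or automatic parameter generation:

```python
def polyfit(xs, ys, degree):
    # Least-squares polynomial fit via normal equations and Gaussian
    # elimination with partial pivoting; returns coefficients low-to-high.
    m = degree + 1
    A = [[float(sum(x ** (i + j) for x in xs)) for j in range(m)] for i in range(m)]
    b = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def baseline(xs, ys, degree=2, iters=20):
    """Iterative baseline: fit a polynomial, clip the spectrum to the
    fit wherever it exceeds it, and refit until the peaks are excluded."""
    work = list(ys)
    fit = work
    for _ in range(iters):
        coef = polyfit(xs, work, degree)
        fit = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
        work = [min(w, f) for w, f in zip(work, fit)]
    return fit
```

On a linear ramp with one sharp peak, the clipped refits converge geometrically back to the ramp, leaving the peak sitting on a flat residual after baseline subtraction.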
Metal Standards for Waveguide Characterization of Materials
NASA Technical Reports Server (NTRS)
Lambert, Kevin M.; Kory, Carol L.
2009-01-01
Rectangular-waveguide inserts that are made of non-ferromagnetic metals and are sized and shaped to function as notch filters have been conceived as reference standards for use in the rectangular- waveguide method of characterizing materials with respect to such constitutive electromagnetic properties as permittivity and permeability. Such standards are needed for determining the accuracy of measurements used in the method, as described below. In this method, a specimen of a material to be characterized is cut to a prescribed size and shape and inserted in a rectangular- waveguide test fixture, wherein the specimen is irradiated with a known source signal and detectors are used to measure the signals reflected by, and transmitted through, the specimen. Scattering parameters [also known as "S" parameters (S11, S12, S21, and S22)] are computed from ratios between the transmitted and reflected signals and the source signal. Then the permeability and permittivity of the specimen material are derived from the scattering parameters. Theoretically, the technique for calculating the permeability and permittivity from the scattering parameters is exact, but the accuracy of the results depends on the accuracy of the measurements from which the scattering parameters are obtained. To determine whether the measurements are accurate, it is necessary to perform comparable measurements on reference standards, which are essentially specimens that have known scattering parameters. To be most useful, reference standards should provide the full range of scattering-parameter values that can be obtained from material specimens. Specifically, measurements of the backscattering parameter (S11) from no reflection to total reflection and of the forward-transmission parameter (S21) from no transmission to total transmission are needed. 
A reference standard that functions as a notch (band-stop) filter can satisfy this need because as the signal frequency is varied across the frequency range for which the filter is designed, the scattering parameters vary over the ranges of values between the extremes of total reflection and total transmission. A notch-filter reference standard in the form of a rectangular-waveguide insert that has a size and shape similar to that of a material specimen is advantageous because the measurement configuration used for the reference standard can be the same as that for a material specimen. Typically a specimen is a block of material that fills a waveguide cross-section but occupies only a small fraction of the length of the waveguide. A reference standard of the present type (see figure) is a metal block that fills part of a waveguide cross section and contains a slot, the long dimension of which can be chosen to tailor the notch frequency to a desired value. The scattering parameters and notch frequency can be estimated with high accuracy by use of commercially available electromagnetic-field-simulating software. The block can be fabricated to the requisite precision by wire electrical-discharge machining. In use, the accuracy of measurements is determined by comparison of (1) the scattering parameters calculated from the measurements with (2) the scattering parameters calculated by the aforementioned software.
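The scattering-parameter ratios described above, plus the passivity bound a physical two-port must satisfy, fit in a few lines of complex arithmetic (illustrative only; real measurements use calibrated vector quantities):

```python
def s_parameters(incident, reflected, transmitted):
    """S11 and S21 as complex ratios of the reflected and transmitted
    waves to the source wave, as in the waveguide measurement above."""
    s11 = reflected / incident
    s21 = transmitted / incident
    return s11, s21

def is_passive(s11, s21, tol=1e-9):
    # A passive two-port cannot scatter more power than it receives:
    # |S11|^2 + |S21|^2 <= 1 (equality for a lossless specimen).
    return abs(s11) ** 2 + abs(s21) ** 2 <= 1.0 + tol
```

Total reflection gives (S11, S21) = (1, 0) and total transmission (0, 1); a notch-filter standard sweeps between these extremes as the frequency crosses the notch.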
A detector interferometric calibration experiment for high precision astrometry
NASA Astrophysics Data System (ADS)
Crouzier, A.; Malbet, F.; Henault, F.; Léger, A.; Cara, C.; LeDuigou, J. M.; Preis, O.; Kern, P.; Delboulbe, A.; Martin, G.; Feautrier, P.; Stadler, E.; Lafrasse, S.; Rochat, S.; Ketchazo, C.; Donati, M.; Doumayrou, E.; Lagage, P. O.; Shao, M.; Goullioud, R.; Nemati, B.; Zhai, C.; Behar, E.; Potin, S.; Saint-Pe, M.; Dupont, J.
2016-11-01
Context. Exoplanet science has made staggering progress in the last two decades, due to the relentless exploration of new detection methods and refinement of existing ones. Yet astrometry offers a unique and untapped potential of discovery of habitable-zone low-mass planets around all the solar-like stars of the solar neighborhood. To fulfill this goal, astrometry must be paired with high precision calibration of the detector. Aims: We present a way to calibrate a detector for high accuracy astrometry. An experimental testbed combining an astrometric simulator and an interferometric calibration system is used to validate both the hardware needed for the calibration and the signal processing methods. The objective is an accuracy of 5 × 10-6 pixel on the location of a Nyquist sampled polychromatic point spread function. Methods: The interferometric calibration system produced modulated Young fringes on the detector. The Young fringes were parametrized as products of time and space dependent functions, based on various pixel parameters. The minimization of function parameters was done iteratively, until convergence was obtained, revealing the pixel information needed for the calibration of astrometric measurements. Results: The calibration system yielded the pixel positions to an accuracy estimated at 4 × 10-4 pixel. After including the pixel position information, an astrometric accuracy of 6 × 10-5 pixel was obtained, for a PSF motion over more than five pixels. In the static mode (small jitter motion of less than 1 × 10-3 pixel), a photon noise limited precision of 3 × 10-5 pixel was reached.
A study on rational function model generation for TerraSAR-X imagery.
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-09-09
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of better than 10^-3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
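The affine RPC refinement from GCPs amounts to two small least-squares fits. A sketch assuming simple (x, y) point pairs (image-derived coordinate to ground truth), rather than the paper's full RPC machinery:

```python
def solve_square(A, b):
    # Gauss-Jordan elimination for a small square linear system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_affine(src, dst):
    """Least-squares 2D affine correction from GCP pairs: solves
    x' = a*x + b*y + c and y' = d*x + e*y + f via normal equations.
    Needs at least three non-collinear points."""
    G = [[x, y, 1.0] for x, y in src]
    AtA = [[sum(g[i] * g[j] for g in G) for j in range(3)] for i in range(3)]
    params = []
    for k in range(2):
        Atb = [sum(g[i] * d[k] for g, d in zip(G, dst)) for i in range(3)]
        params.append(solve_square(AtA, Atb))
    return params  # [[a, b, c], [d, e, f]]
```

With four corner GCPs generated from a known affine distortion, the fit recovers the six parameters essentially exactly, mirroring the four-GCP refinement used above.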
NASA Astrophysics Data System (ADS)
Wösten, J. H. M.; Pachepsky, Ya. A.; Rawls, W. J.
2001-10-01
Water retention and hydraulic conductivity are crucial input parameters in any modelling study on water flow and solute transport in soils. Due to inherent temporal and spatial variability in these hydraulic characteristics, large numbers of samples are required to properly characterise areas of land. Hydraulic characteristics can be obtained from direct laboratory and field measurements. However, these measurements are time consuming, which makes it costly to characterise an area of land. As an alternative, analysis of existing databases of measured soil hydraulic data may result in pedotransfer functions. In practice, these functions often prove to be good predictors for missing soil hydraulic characteristics. Examples are presented of different equations describing hydraulic characteristics and of pedotransfer functions used to predict parameters in these equations. Grouping of data prior to pedotransfer function development is discussed, as well as the use of different soil properties as predictors. In addition to regression analysis, new techniques such as artificial neural networks, group methods of data handling, and classification and regression trees are increasingly being used for pedotransfer function development. Actual development of pedotransfer functions is demonstrated by describing a practical case study. Examples are also presented of pedotransfer functions for predicting characteristics other than hydraulic ones. Accuracy and reliability of pedotransfer functions are demonstrated and discussed. In this respect, functional evaluation of pedotransfer functions proves to be a good tool to assess the desired accuracy of a pedotransfer function for a specific application.
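As a minimal sketch of the regression-based pedotransfer idea described above, the following fits a linear function from basic soil properties to a hydraulic parameter. The predictors (sand, clay, organic matter) are typical choices, but the data values and the target quantity here are invented for illustration:

```python
import numpy as np

# Invented training data: sand %, clay %, organic matter % -> water content
X = np.array([[60.0, 10.0, 1.5],
              [30.0, 35.0, 2.0],
              [45.0, 20.0, 3.5],
              [70.0,  5.0, 0.8]])
y = np.array([0.10, 0.28, 0.19, 0.07])

# Fit the linear pedotransfer function y ≈ a0 + a1*sand + a2*clay + a3*om
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def pedotransfer(sand, clay, om):
    """Predict the hydraulic parameter from basic soil properties."""
    return float(coef @ [1.0, sand, clay, om])

print(round(pedotransfer(50.0, 15.0, 2.0), 3))
```

The neural-network and regression-tree variants mentioned in the abstract replace the linear form with a more flexible function of the same predictors.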
Investigation of parameters affecting treatment time in MRI-guided transurethral ultrasound therapy
NASA Astrophysics Data System (ADS)
N'Djin, W. A.; Burtnyk, M.; Chopra, R.; Bronskill, M. J.
2010-03-01
MRI-guided transurethral ultrasound therapy shows promise for minimally invasive treatment of localized prostate cancer. Real-time MR temperature feedback enables the 3D control of thermal therapy to define an accurate region within the prostate. Previous in-vivo canine studies showed the feasibility of this method using transurethral planar transducers. The aim of this simulation study was to reduce the procedure time, while maintaining treatment accuracy by investigating new combinations of treatment parameters. A numerical model was used to simulate a multi-element heating applicator rotating inside the urethra in 10 human prostates. Acoustic power and rotation rate were varied based on the feedback of the temperature in the prostate. Several parameters were investigated for improving the treatment time. Maximum acoustic power and rotation rate were optimized interdependently as a function of prostate radius and transducer operating frequency, while avoiding temperatures >90 °C in the prostate. Other trials were performed on each parameter separately, with the other parameter fixed. The concept of using dual-frequency transducers was studied, using the fundamental frequency or the 3rd harmonic component depending on the prostate radius. The maximum acoustic power which could be used decreased as a function of the prostate radius and the frequency. Decreasing the frequency (9.7-3.0 MHz) or increasing the power (10-20 W cm-2) led to treatment times shorter by up to 50% under appropriate conditions. Dual-frequency configurations, while helpful, tended to have less impact on treatment times. Treatment accuracy was maintained and critical adjacent tissues like the rectal wall remained protected. The interdependence between power and frequency may require integrating multi-parametric functions inside the controller for future optimizations. As a first approach, however, even slight modifications of key parameters can be sufficient to reduce treatment time.
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
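The numerical issue described above, exponential terms overflowing as the precision parameter grows, can be removed by cancelling the exponentials analytically in log space. A minimal sketch for the full Fisher density (not the marginal inclination-only likelihood of the paper) illustrates the trick:

```python
import math

def fisher_logpdf(theta, kappa):
    """Stable log-density of the Fisher distribution on the sphere;
    theta = angle from the mean direction, kappa = precision parameter.
    Uses log(sinh k) = k + log(1 - exp(-2k)) - log 2 to avoid overflow."""
    log_sinh = kappa + math.log1p(-math.exp(-2.0 * kappa)) - math.log(2.0)
    return (math.log(kappa) - math.log(4.0 * math.pi) - log_sinh
            + kappa * math.cos(theta))

# Naive sinh(kappa) overflows near kappa ≈ 710; the log form stays finite.
print(fisher_logpdf(0.1, 1000.0))
```

The same cancellation applied term by term to the marginal log-likelihood is what allows evaluation anywhere in the parameter space.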
Standard Reference Specimens in Quality Control of Engineering Surfaces
Song, J. F.; Vorburger, T. V.
1991-01-01
In the quality control of engineering surfaces, we aim to understand and maintain a good relationship between the manufacturing process and surface function. This is achieved by controlling the surface texture. The control process involves: 1) learning the functional parameters and their control values through controlled experiments or through a long history of production and use; 2) maintaining high accuracy and reproducibility with measurements not only of roughness calibration specimens but also of real engineering parts. In this paper, the characteristics, utilizations, and limitations of different classes of precision roughness calibration specimens are described. A procedure for measuring engineering surfaces, based on the NIST calibration procedure for roughness specimens, is proposed. This procedure involves utilization of check specimens with waveform, wavelength, and other roughness parameters similar to functioning engineering surfaces. These check specimens would be certified under standardized reference measuring conditions, or by a reference instrument, and could be used for overall checking of the measuring procedure and for maintaining accuracy and agreement in engineering surface measurement. The concept of “surface texture design” is also suggested, which involves designing the engineering surface texture, the manufacturing process, and the quality control procedure to meet the optimal functional needs. PMID:28184115
Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.
Yamazaki, Keisuke; Kaji, Daisuke
2013-08-01
Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method, so prediction, which requires constructing the posterior, is not tractable, even though the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it has a tractable posterior based on the variational free energy function. The asymptotic behavior has been studied in many hierarchical models, and a phase transition is observed. The exact form of the asymptotic variational Bayes energy has been derived for Bernoulli mixture models, and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not yet been clarified. The present paper precisely analyzes the Bayes free energy function of Bernoulli mixtures. By comparing the free energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results show that the Bayes free energy has the same learning types, while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.
Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.
Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan
2013-02-01
A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of SOP on geometric parameters, which will converge to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to the noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors of the latter two parameters brought by our method will hardly degrade the quality of reconstructed images. The small animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that we have achieved comparable image quality in the reconstructions as some offline methods.
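The calibration-by-minimization idea above can be sketched generically: treat a geometric parameter (here a single detector skew angle) as unknown, define an objective that attains its minimum at the true value, and locate that minimum numerically. The quadratic objective below is a stand-in for the paper's SOP-symmetry measure, and the coarse-to-fine search is one simple minimizer among many:

```python
def refine_minimum(f, lo, hi, rounds=6, samples=50):
    """Locate the minimizer of f on [lo, hi] by coarse-to-fine grid search."""
    best = lo
    for _ in range(rounds):
        xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
        best = min(xs, key=f)
        span = (hi - lo) / samples
        lo, hi = best - span, best + span   # zoom in around the current best
    return best

true_skew = 0.7                             # "unknown" geometric parameter
objective = lambda s: (s - true_skew) ** 2  # stand-in for the SOP symmetry objective
est = refine_minimum(objective, -5.0, 5.0)
print(round(est, 4))  # → 0.7
```

In the actual method the objective is evaluated from the projections themselves, so no calibration phantom is needed.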
Constrained Analysis of Fluorescence Anisotropy Decay:Application to Experimental Protein Dynamics
Feinstein, Efraim; Deikus, Gintaras; Rusinova, Elena; Rachofsky, Edward L.; Ross, J. B. Alexander; Laws, William R.
2003-01-01
Hydrodynamic properties as well as structural dynamics of proteins can be investigated by the well-established experimental method of fluorescence anisotropy decay. Successful use of this method depends on determination of the correct kinetic model, the extent of cross-correlation between parameters in the fitting function, and differences between the timescales of the depolarizing motions and the fluorophore's fluorescence lifetime. We have tested the utility of an independently measured steady-state anisotropy value as a constraint during data analysis to reduce parameter cross correlation and to increase the timescales over which anisotropy decay parameters can be recovered accurately for two calcium-binding proteins. Mutant rat F102W parvalbumin was used as a model system because its single tryptophan residue exhibits monoexponential fluorescence intensity and anisotropy decay kinetics. Cod parvalbumin, a protein with a single tryptophan residue that exhibits multiexponential fluorescence decay kinetics, was also examined as a more complex model. Anisotropy decays were measured for both proteins as a function of solution viscosity to vary hydrodynamic parameters. The use of the steady-state anisotropy as a constraint significantly improved the precision and accuracy of recovered parameters for both proteins, particularly for viscosities at which the protein's rotational correlation time was much longer than the fluorescence lifetime. Thus, basic hydrodynamic properties of larger biomolecules can now be determined with more precision and accuracy by fluorescence anisotropy decay. PMID:12524313
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of the left ventricular signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis, using both models, of single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers between single and dual bolus analysis of distributed parameter-derived myocardial blood flow.
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
Predicting Earth orientation changes from global forecasts of atmosphere-hydrosphere dynamics
NASA Astrophysics Data System (ADS)
Dobslaw, Henryk; Dill, Robert
2018-02-01
Effective Angular Momentum (EAM) functions obtained from global numerical simulations of atmosphere, ocean, and land surface dynamics are routinely processed by the Earth System Modelling group at Deutsches GeoForschungsZentrum. EAM functions have been available since January 1976 with up to 3 h temporal resolution. Additionally, 6-day EAM forecasts are routinely published every day. Based on hindcast experiments with 305 individual predictions distributed over 15 months, we demonstrate that EAM forecasts improve the prediction accuracy of the Earth Orientation Parameters at all forecast horizons between 1 and 6 days. At day 6, prediction accuracy improves down to 1.76 mas for the terrestrial pole offset and 2.6 mas for ΔUT1, which corresponds to an accuracy increase of about 41% over predictions published in Bulletin A by the International Earth Rotation and Reference System Service.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques to parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error-function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar for both methods; however, there are important differences in the final set of parameters.
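A generic simulated-annealing sketch of the kind of error-function minimization described above (the CATIVIC parameterization itself is not reproduced; the objective here is a toy quadratic error against invented "experimental" targets):

```python
import math, random

def anneal(error, x0, t0=1.0, cooling=0.995, steps=5000, seed=42):
    """Minimize error(x) by simulated annealing with Gaussian moves."""
    rng = random.Random(seed)
    x = list(x0)
    best = list(x0)
    t = t0
    for _ in range(steps):
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, 0.1)  # perturb one parameter
        d = error(cand) - error(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand                    # accept downhill, or uphill with prob e^(-d/t)
            if error(x) < error(best):
                best = list(x)
        t *= cooling                    # geometric cooling schedule
    return best

# Toy error function: squared distance to "experimental" targets (1, -2).
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
print(anneal(err, [0.0, 0.0]))  # should land close to [1.0, -2.0]
```

The simplified GSA variant compared in the abstract differs mainly in its visiting distribution and cooling law, not in this overall accept/reject structure.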
NASA Astrophysics Data System (ADS)
Cannella, Marco; Sciuto, Salvatore Andrea
2001-04-01
An evaluation of errors for a method for determination of trajectories and velocities of supersonic objects is conducted. The analytical study of a cluster, composed of three pressure transducers and generally used as an apparatus for kinematic determination of parameters of supersonic objects, is developed. Furthermore, a detailed investigation into the accuracy of this cluster in determining the slope of an incoming shock wave is carried out for optimization of the device. In particular, a specific non-dimensional parameter is proposed in order to evaluate accuracies for various values of the parameters, and reference graphs are provided in order to properly design the sensor cluster. Finally, on the basis of the error analysis conducted, a discussion of the best estimation of the relative distance of the sensors as a function of the temporal resolution of the measuring system is presented.
Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne
2016-01-01
Research into the effects of transcranial direct current stimulation of the dorsolateral prefrontal cortex on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in technical stimulation parameters that might influence tDCS effects. Our aim was to systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects. We performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first data available to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after single-session a-tDCS, but not c-tDCS, participants responded faster and more accurately on cognitive tasks. Sub-analyses specified that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters affected a-tDCS effects, but not c-tDCS effects, on accuracy: increased current density and charge density resulted in improved accuracy in healthy samples, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates compared to task performance following a-tDCS. Healthy participants respond faster, but not more accurately, on cognitive tasks after a-tDCS. However, increasing the current density and/or charge might be able to enhance response accuracy, particularly in females. In contrast, online task performance leads to greater increases in response accuracy than offline task performance in neuropsychiatric patients. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
A computational model for biosonar echoes from foliage
Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf
2017-01-01
Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals’ sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats. PMID:28817631
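The leaf-parameter sampling described above (uniform positions, Gaussian sizes and orientations, all independent and identically distributed) can be sketched directly. The numeric ranges below are illustrative defaults, not the paper's values:

```python
import random

def sample_leaves(n, size_mean=0.05, size_sd=0.01,
                  orient_mean=0.0, orient_sd=0.3, extent=10.0, seed=1):
    """Draw n independent leaves as (x, y, z, diameter, orientation)."""
    rng = random.Random(seed)
    leaves = []
    for _ in range(n):
        pos = [rng.uniform(0.0, extent) for _ in range(3)]  # uniform positions
        diameter = abs(rng.gauss(size_mean, size_sd))       # Gaussian size, kept positive
        orientation = rng.gauss(orient_mean, orient_sd)     # Gaussian orientation (radians)
        leaves.append((pos[0], pos[1], pos[2], diameter, orientation))
    return leaves

foliage = sample_leaves(1000)
print(len(foliage), foliage[0])
```

An echo is then synthesized by superposing the reflection of each disk-shaped leaf, with leaf density, mean size, and mean orientation as the three foliage parameters.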
Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J
2014-01-01
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. 
The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications for understanding and predicting the effects of temperature on the dynamics of populations. PMID:25558365
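The simulation setup described above can be sketched minimally: a stochastic logistic model whose growth rate follows the Arrhenius equation, so an activation energy E controls the temperature dependence. All parameter values below are illustrative, not the study's:

```python
import math, random

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(r_ref, E, T, T_ref=293.15):
    """Rate at temperature T (K), given rate r_ref at T_ref and activation energy E (eV)."""
    return r_ref * math.exp(-(E / K_B) * (1.0 / T - 1.0 / T_ref))

def simulate(T, E=0.65, r_ref=0.5, K=1000.0, n0=10.0,
             dt=0.1, steps=500, noise=0.02, seed=0):
    """Euler simulation of a stochastic logistic model at temperature T."""
    rng = random.Random(seed)
    r = arrhenius(r_ref, E, T)
    n = n0
    series = [n]
    for _ in range(steps):
        drift = r * n * (1.0 - n / K) * dt
        n = max(0.0, n + drift + rng.gauss(0.0, noise) * n * math.sqrt(dt))
        series.append(n)
    return series

warm = simulate(303.15)  # population grows faster at the warmer temperature
cold = simulate(283.15)
print(round(warm[-1]), round(cold[-1]))
```

A "direct" inference method fits E, r_ref, and K to all such time series at once; an "indirect" method fits r and K per temperature and then regresses them against 1/T.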
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
In the formulations of earlier Displacement Transfer Functions for structure shape predictions, the surface strain distributions, along a strain-sensing line, were represented with piecewise linear functions. To improve the shape-prediction accuracies, Improved Displacement Transfer Functions were formulated using piecewise nonlinear strain representations. Through discretization of an embedded beam (depth-wise cross section of a structure along a strain-sensing line) into multiple small domains, piecewise nonlinear functions were used to describe the surface strain distributions along the discretized embedded beam. Such a piecewise approach enabled the piecewise integrations of the embedded beam curvature equations to yield slope and deflection equations in recursive forms. The resulting Improved Displacement Transfer Functions, written in summation forms, were expressed in terms of beam geometrical parameters and surface strains along the strain-sensing line. By feeding the surface strains into the Improved Displacement Transfer Functions, structural deflections could be calculated at multiple points for mapping out the overall structural deformed shapes for visual display. The shape-prediction accuracies of the Improved Displacement Transfer Functions were then examined against finite-element-calculated deflections using different tapered cantilever tubular beams. It was found that by using the piecewise nonlinear strain representations, the shape-prediction accuracies could be greatly improved, especially for highly-tapered cantilever tubular beams.
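The recursive slope/deflection computation described above rests on beam curvature following from surface strain and section half-depth (kappa = strain / c), with piecewise integration giving slope and then deflection at each sensing station. A minimal sketch assuming uniform station spacing and simple trapezoidal (rather than the paper's piecewise nonlinear) integration:

```python
def deflections(strains, c, dx):
    """Trapezoidal double integration of curvature (strain / c) along a
    cantilever clamped at station 0 (slope = deflection = 0 there)."""
    slope, defl = [0.0], [0.0]
    for i in range(1, len(strains)):
        kappa_prev = strains[i - 1] / c
        kappa_here = strains[i] / c
        slope.append(slope[-1] + 0.5 * (kappa_prev + kappa_here) * dx)
        defl.append(defl[-1] + 0.5 * (slope[-2] + slope[-1]) * dx)
    return defl

# Constant surface strain -> uniform curvature kappa = 0.02 over a 1.0 span,
# so the exact tip deflection is kappa * L**2 / 2 = 0.01.
eps = [1e-3] * 11
tip = deflections(eps, c=0.05, dx=0.1)[-1]
print(tip)  # ≈ 0.01
```

The improved formulation replaces the linear segment assumption inside each domain with a nonlinear strain profile, which matters most where the cross section tapers rapidly.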
NASA Astrophysics Data System (ADS)
Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal
2017-07-01
Fused Deposition Modeling (FDM) is one of the prominent additive manufacturing technologies for producing polymer products. FDM is a complex additive manufacturing process that can be influenced by many process conditions. Industrial demands on the FDM process are increasing, requiring higher levels of product functionality and properties. The functionality and performance of FDM-manufactured parts are greatly influenced by the combination of many FDM process parameters. Designers and researchers have paid considerable attention to the effects of FDM process parameters on product functionalities and properties such as mechanical strength, surface quality, dimensional accuracy, build time, and material consumption. However, very limited studies have investigated and optimized the effect of FDM build parameters on wear performance. This study focuses on the effect of different build parameters on the micro-structural and wear performance of FDM specimens using a definitive screening design based quadratic model. This gives the additive manufacturing engineer a systematic approach for making decisions among the manufacturing parameters to achieve the desired product quality, reducing cost and effort.
Precise analytic approximations for the Bessel function J1(x)
NASA Astrophysics Data System (ADS)
Maass, Fernando; Martin, Pablo
2018-03-01
Precise and straightforward analytic approximations for the Bessel function J1(x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which serves as a bridge between both expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with errors of less than 0.04 percent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
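The power-series side of the bridging construction described above can be written down directly: J1(x) = Σ_{k≥0} (−1)^k / (k! (k+1)!) (x/2)^{2k+1}. Truncating the series gives a reference value against which any closed-form approximation can be checked (the paper's quasirational form itself is not reproduced here):

```python
import math

def j1_series(x, terms=30):
    """Truncated power series for the Bessel function J1(x)."""
    s = 0.0
    for k in range(terms):
        s += ((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
              * (x / 2.0) ** (2 * k + 1))
    return s

# The first zero of J1 lies near x = 3.8317, so the series changes sign there.
print(j1_series(1.0))  # ≈ 0.4401
```

For moderate x the factorials dominate and 30 terms are ample; the asymptotic expansion takes over for large x, which is exactly the regime the quasirational approximation is built to bridge.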
Advanced multilateration theory, software development, and data processing: The MICRODOT system
NASA Technical Reports Server (NTRS)
Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.
1976-01-01
The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected system accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
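The core multilateration computation mentioned above, recovering a position from range measurements to known stations, can be sketched as a linearized least-squares (Gauss-Newton) iteration. The station geometry and ranges below are invented for illustration:

```python
import numpy as np

def multilaterate(stations, ranges, guess, iters=10):
    """Estimate a 3D position from ranges to known stations by Gauss-Newton."""
    x = np.asarray(guess, float)
    for _ in range(iters):
        diffs = x - stations                   # (n, 3) vectors to each station
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        J = diffs / dists[:, None]             # Jacobian d(range)/d(position)
        dx, *_ = np.linalg.lstsq(J, ranges - dists, rcond=None)
        x = x + dx
    return x

stations = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                     [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
truth = np.array([3.0, 4.0, 5.0])
ranges = np.linalg.norm(stations - truth, axis=1)  # noise-free measurements
est = multilaterate(stations, ranges, guess=[1.0, 1.0, 1.0])
print(np.round(est, 6))  # → [3. 4. 5.]
```

A full system like MICRODOT solves the same kind of problem with many more unknowns (station coordinates, geophysical parameters, orbits) and with measurement noise, but this residual-Jacobian structure is the common core.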
Statistical Method to Overcome Overfitting Issue in Rational Function Models
NASA Astrophysics Data System (ADS)
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models which are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue, in the case of terrain-dependent RFMs, that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, as a frequently-used solution to RFM overfitting. In the proposed method, a statistical test, namely, a significance test, is applied to search for the RFM parameters that are resistant to overfitting. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over the TR method.
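The significance-test idea described above can be sketched minimally: fit a linear model, compute a t-statistic for each coefficient, and keep only parameters whose statistics exceed a threshold. The model below is a toy polynomial with invented data, not a full 78-parameter RFM:

```python
import numpy as np

def significant_params(A, y, t_crit=2.0):
    """Flag coefficients of the least-squares fit A @ coef ≈ y whose
    t-statistics exceed t_crit (i.e., parameters worth keeping)."""
    n, p = A.shape
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    sigma2 = resid @ resid / (n - p)          # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)     # coefficient covariance
    t = np.abs(coef) / np.sqrt(np.diag(cov))  # t-statistics
    return t >= t_crit

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
A = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate terms
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.05, x.size)      # true model: 2 + 3x
print(significant_params(A, y))  # intercept and linear term should be flagged True
```

Dropping the non-significant terms and refitting shrinks the parameter set, which is the mechanism by which the proposed method counters the ill-posedness that TR instead handles by damping.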
Auto-FPFA: An Automated Microscope for Characterizing Genetically Encoded Biosensors.
Nguyen, Tuan A; Puhl, Henry L; Pham, An K; Vogel, Steven S
2018-05-09
Genetically encoded biosensors function by linking structural change in a protein construct, typically tagged with one or more fluorescent proteins, to changes in a biological parameter of interest (such as calcium concentration, pH, or phosphorylation state). Typically, the structural change triggered by alterations in the bio-parameter is monitored as a change in either fluorescence intensity or lifetime. Potentially, other photo-physical properties of fluorophores, such as fluorescence anisotropy, molecular brightness, concentration, and lateral and/or rotational diffusion, could also be used. Furthermore, while it is likely that multiple photo-physical attributes of a biosensor are altered as a function of the bio-parameter, standard measurements monitor only a single photo-physical trait. This limits how biosensors are designed, as well as the accuracy and interpretation of biosensor measurements. Here we describe the design and construction of an automated multimodal microscope. This system can autonomously analyze 96 samples in a micro-titer dish and, for each sample, simultaneously measure intensity (photon count), fluorescence lifetime, time-resolved anisotropy, molecular brightness, lateral diffusion time, and concentration. We characterize the accuracy and precision of this instrument, and then demonstrate its utility by characterizing three types of genetically encoded calcium sensors as well as a negative control.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of ``Acceleration by Parallel Precomputation and Learning'' (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines distributed computing with machine learning and Markov chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution, or χ²-surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply the technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
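The precompute-in-parallel-then-learn idea can be sketched in miniature: evaluate a stand-in "costly" likelihood on a grid (the trivially parallel step), replace it with a cheap interpolant, and run an ordinary Metropolis sampler against the surrogate only. Everything below (the Gaussian likelihood, grid bounds, chain length) is an invented toy, not the actual PICo/RICO setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_loglike(theta):
    # Stand-in for a costly likelihood (e.g., a full recombination-code call).
    return -0.5 * ((theta - 0.3) / 0.1) ** 2

# Step 1: trivially parallel precomputation on a grid (could be farmed out).
grid = np.linspace(-1.0, 1.0, 201)
table = np.array([expensive_loglike(t) for t in grid])

def cheap_loglike(theta):
    # Step 2: cheap surrogate via linear interpolation of the precomputed table.
    return np.interp(theta, grid, table)

# Step 3: Metropolis sampling touches only the surrogate.
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal()
    if np.log(rng.uniform()) < cheap_loglike(prop) - cheap_loglike(theta):
        theta = prop
    chain.append(theta)

post = np.array(chain[5000:])
print(post.mean(), post.std())
```

The chain should recover the posterior of the expensive likelihood (mean near 0.3, spread near 0.1) while never calling it inside the sequential loop.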
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
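A minimal stand-in for the 1-norm-penalized calibration above is L1-regularized least squares solved by iterative soft thresholding (ISTA): penalizing the 1-norm of the coefficient vector drives most kernel coefficients to exactly zero, which is the "few support vectors" effect the abstract describes. The kernel matrix, coefficients, and penalty weight below are synthetic, not market data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "calibration": many kernel coefficients, few truly needed.
n, p = 50, 30
K = rng.normal(size=(n, p))                # kernel/basis matrix
c_true = np.zeros(p)
c_true[[3, 12]] = [2.0, -1.5]              # only two active coefficients
y = K @ c_true + 0.01 * rng.normal(size=n)

def ista(K, y, lam, n_iter=2000):
    """Minimize 0.5*||Kc - y||^2 + lam*||c||_1 by iterative soft thresholding."""
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(K.shape[1])
    for _ in range(n_iter):
        g = K.T @ (K @ c - y)              # gradient of the smooth term
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

c_hat = ista(K, y, lam=0.5)
print(np.count_nonzero(np.abs(c_hat) > 1e-3))
```

The recovered coefficient vector is sparse, mirroring how the paper's formulation keeps the calibrated volatility surface simple.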
Social stress reactivity alters reward and punishment learning
Cavanagh, James F.; Frank, Michael J.; Allen, John J. B.
2011-01-01
To examine how stress affects cognitive functioning, individual differences in trait vulnerability (punishment sensitivity) and state reactivity (negative affect) to social evaluative threat were examined during concurrent reinforcement learning. Lower trait-level punishment sensitivity predicted better reward learning and poorer punishment learning; the opposite pattern was found in more punishment sensitive individuals. Increasing state-level negative affect was directly related to punishment learning accuracy in highly punishment sensitive individuals, but these measures were inversely related in less sensitive individuals. Combined electrophysiological measurement, performance accuracy and computational estimations of learning parameters suggest that trait and state vulnerability to stress alter cortico-striatal functioning during reinforcement learning, possibly mediated via medio-frontal cortical systems. PMID:20453038
Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela
2018-01-19
OBJECTIVE The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with a difference between the training and test data sets of < 5%. Prediction accuracy for survival on day 1 was 75.2%. The most important differentiating factor was the interleukin-6 (IL-6) level on day 1. Favorable functional outcome, defined as Glasgow Outcome Scale scores of 4 and 5, was observed in 68.6% of patients. Favorable functional outcome at all time points had a prediction accuracy of 71.1% in the training data set, with procalcitonin on day 1 being the most important differentiating factor at all time points. A total of 148 patients (27%) developed VP shunt dependency. The most important differentiating factor was hyperglycemia on admission. CONCLUSIONS The multiple variable analysis capability of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. 
The decision tree currently generated increases awareness of the early systemic stress response, which is seemingly pertinent for prognostication.
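The derivation/validation workflow above can be sketched with a one-level decision rule (a "stump") on a single synthetic lab marker. The cohort below is simulated; only the 60/40 split sizes echo the study design, and the marker, outcome model, and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cohort: one lab marker; higher values -> worse outcome (toy data).
n = 548
marker = rng.lognormal(mean=3.0, sigma=1.0, size=n)
p_death = 1 / (1 + np.exp(-(np.log(marker) - 3.5) * 2))
died = rng.uniform(size=n) < p_death

# 60/40 derivation/validation split, mirroring the study design.
idx = rng.permutation(n)
train, test = idx[:329], idx[329:]

# One-level "decision tree": pick the threshold maximizing training accuracy.
best_t, best_acc = None, 0.0
for t in np.quantile(marker[train], np.linspace(0.05, 0.95, 50)):
    acc = np.mean((marker[train] > t) == died[train])
    if acc > best_acc:
        best_t, best_acc = t, acc

test_acc = np.mean((marker[test] > best_t) == died[test])
print(round(best_acc, 3), round(test_acc, 3))
```

The gap between training and validation accuracy is the same generalization check the authors report (a difference below 5% between their data sets).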
Zhang, Yu-xin; Cheng, Zhi-feng; Xu, Zheng-ping; Bai, Jing
2015-01-01
To address the problems of the traditional power transformer fault diagnosis approach based on dissolved gas analysis (DGA), such as complex operation, carrier-gas consumption, and long test periods, this paper proposes a new method that detects the content of five characteristic gases dissolved in transformer oil (CH4, C2H2, C2H4, C2H6, and H2) by photoacoustic spectroscopy and computes the three ratios C2H2/C2H4, CH4/H2, and C2H4/C2H6. Support vector machine models were constructed by cross validation over five SVM formulations and four kernel functions, and heuristic algorithms were used to optimize the penalty factor c and kernel parameter g, so as to establish the SVM model with the highest fault diagnosis accuracy and the fastest computing speed. Two heuristic algorithms, particle swarm optimization and a genetic algorithm, were comparatively studied for optimization accuracy and speed. The simulation results show that the SVM model composed of C-SVC, the RBF kernel, and the genetic algorithm achieved 97.5% accuracy on the test sample set and 98.3333% accuracy on the training sample set, and the genetic algorithm was about two times faster than particle swarm optimization. The method described in this paper has many advantages, such as simple operation, non-contact measurement, no carrier-gas consumption, a short test period, and high stability and sensitivity; the results show that it can replace traditional transformer fault diagnosis by gas chromatography and meets actual engineering needs in transformer fault diagnosis.
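A toy genetic algorithm over the penalty factor c and kernel parameter g can illustrate the heuristic search described above. The "cross-validation accuracy" surface below is an invented smooth stand-in (peaking near c = 10, g = 0.1), not a trained SVM; the population size, mutation scale, and generation count are arbitrary choices.

```python
import math
import random

random.seed(4)

def cv_accuracy(c, g):
    # Stand-in for cross-validated SVM accuracy as a function of (c, g);
    # peaks near (c, g) = (10, 0.1) on a log10 scale.
    return math.exp(-((math.log10(c) - 1) ** 2 + (math.log10(g) + 1) ** 2))

def ga_optimize(pop_size=20, n_gen=40):
    # Individuals are (log10 c, log10 g) pairs searched over [-3, 3].
    pop = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: cv_accuracy(10 ** ind[0], 10 ** ind[1]), reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            # Midpoint crossover plus Gaussian mutation.
            child = tuple((x + y) / 2 + random.gauss(0, 0.2) for x, y in zip(a, b))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: cv_accuracy(10 ** ind[0], 10 ** ind[1]))

lc, lg = ga_optimize()
print(10 ** lc, 10 ** lg)
```

A particle-swarm variant would replace the crossover/mutation step with velocity updates toward personal and global bests; the fitness evaluation is identical, which is why the paper can compare the two heuristics head to head.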
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking them according to a learned relevance function. However, the process of learning and ranking is usually done offline, without being integrated with the keyword queries, and the users have to provide a large number of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, and it achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
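The pairwise transform behind RankSVM can be sketched as follows: each pair of documents with different feedback levels yields a feature-difference vector labeled by which document is preferred, and a linear classifier trained on those differences recovers the ranking direction. In the sketch below, a plain perceptron stands in for the SVM solver, and the "articles" are synthetic two-feature points with an invented hidden relevance score.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy articles with 2 features; hidden relevance score = 2*f0 + 1*f1.
X = rng.uniform(size=(60, 2))
score = X @ np.array([2.0, 1.0])

# Three feedback levels, keeping a margin around the level boundaries so the
# toy problem is cleanly separable (mimicking multi-level relevance judgments).
keep = (np.abs(score - 1.0) > 0.15) & (np.abs(score - 2.0) > 0.15)
X, score = X[keep], score[keep]
levels = np.digitize(score, [1.0, 2.0])

# Pairwise transform: difference vectors labeled by preference direction.
pairs, labels = [], []
for i in range(len(X)):
    for j in range(len(X)):
        if levels[i] != levels[j]:
            pairs.append(X[i] - X[j])
            labels.append(1.0 if levels[i] > levels[j] else -1.0)
pairs, labels = np.array(pairs), np.array(labels)

# A plain perceptron stands in for the SVM solver on the pair data.
w = np.zeros(2)
for _ in range(50):
    for p, y in zip(pairs, labels):
        if y * (w @ p) <= 0:
            w += y * p

agree = np.mean(np.sign(pairs @ w) == labels)
print(w, round(agree, 3))
```

Ranking new documents then reduces to sorting them by the learned score w·x, which is what makes the in-DBMS integration efficient: the ranking function is just a dot product that the database can evaluate per row.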
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensionality of the parameter space, after which a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II, NSGA-II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
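At the core of NSGA-II is non-dominated sorting. A minimal sketch extracts the Pareto front from a set of calibration objective vectors; both objectives are minimized, and the error values below are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy calibration results: (peak-flow error, runoff-volume error) per parameter set.
objs = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.6, 0.6), (0.8, 0.8), (0.2, 0.95)]
front = pareto_front(objs)
print(front)  # → [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
```

The full algorithm additionally ranks dominated points into successive fronts and uses crowding distance to keep the front well spread, but the dominance test above is the building block.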
Martin, Ralph J; Santiago, Bartolo
2015-09-01
Left ventricular (LV) function parameters have major diagnostic and prognostic importance in heart disease. Measurement of ventricular function with tomographic (SPECT) radionuclide ventriculography (MUGA) decreases camera time, improves contrast resolution, accuracy of interpretation and the overall reliability of the study as compared to planar MUGA. The relationship between these techniques is well established particularly with LV ejection fraction (LVEF), while there is limited data comparing the diastolic function parameters. Our goal was to validate the LV function parameters in our Hispanic population. Studies from 44 patients, available from 2009-2010, were retrospectively evaluated. LVEF showed a good correlation between the techniques (r=0.94) with an average difference of 3.8%. In terms of categorizing the results as normal or abnormal, this remained unchanged in 95% of the cases (p=0.035). For the peak filling rate, there was a moderate correlation between the techniques (r=0.71), whereas the diagnosis remained unchanged in 89% of cases (p=0.0004). Time to peak filling values only demonstrated a weak correlation (r=0.22). Nevertheless, the diagnosis remained the same in 68% of the cases (p=0.089). Systolic function results in our study were well below the 7-10% difference reported in the literature. Only a weak to moderate correlation was observed with the diastolic function parameters. Comparison with echocardiogram (not available) may be of benefit to evaluate which of these techniques results in more accurate diastolic function parameters.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
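The emulation workflow can be sketched in one dimension: treat an analytic curve as the "expensive simulation" of an observable versus a single parameter, fit a cheap surrogate to a small design of training runs, and check its fidelity at held-out parameter values. A polynomial fit stands in for the Gaussian-process emulator, and the function, design size, and degree are invented (only the 26-run design size echoes the abstract).

```python
import numpy as np

# Toy "expensive simulation": an observable versus one cosmological parameter.
def simulate(w):
    return np.sin(3 * w) + 0.5 * w

# A small, space-filling design of training runs (26 models, as in the paper).
w_train = np.linspace(-1.0, 1.0, 26)
y_train = simulate(w_train)

# Cheap surrogate: a polynomial fit standing in for a Gaussian-process emulator.
emulate = np.poly1d(np.polyfit(w_train, y_train, deg=9))

# Emulator fidelity at held-out parameter values.
w_test = np.linspace(-0.95, 0.95, 50)
err = np.max(np.abs(emulate(w_test) - simulate(w_test)))
print(err)
```

Adding new training runs where the emulator error is largest is the "systematic improvement in a prescribed way" the abstract refers to.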
Alfimova, M V; Golimbet, V E; Lebedeva, I S; Korovaĭtseva, G I; Lezheĭko, T V
2014-01-01
We studied the influence of the anxiety-related trait Harm Avoidance and the COMT gene, an important modulator of prefrontal functioning, on event-related potentials in an oddball paradigm and on the effectiveness of selective attention. For 50 individuals, the accuracy and time of searching for words among letters were measured, first at a self-chosen pace and then under an instruction to perform the task as quickly and accurately as possible. Scores on the Harm Avoidance scale of Cloninger's Temperament and Character Inventory, N100 and P300 parameters, and COMT Val158Met genotypes were also obtained. Searching accuracy and time were mainly related to N100 amplitude. The COMT genotype and Harm Avoidance did not affect N100 amplitude; however, N100 amplitude modulated their effects on accuracy and time dynamics. Harm Avoidance was positively correlated with P300 latency. The results suggest that the effects of anxiety and the COMT gene on the effectiveness of selective attention depend on cognitive processes reflected in N100 parameters.
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to vibroacoustic responses of a baffled plate are readily available so the accuracy of reconstruction can be checked accurately. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters such as number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of source surface of reconstruction on the resultant accuracy of reconstruction are examined.
Determination of sex from the patella in a contemporary Spanish population.
Peckmann, Tanya R; Meek, Susan; Dilkie, Natasha; Rozendaal, Andrew
2016-11-01
The skull and pelvis have been used for the determination of sex of unknown human remains. However, in forensic cases, where skeletal remains often exhibit postmortem damage and taphonomic change, the patella may be used for the determination of sex, as it is a preservationally favoured bone. The goal of the present research was to derive discriminant function equations from the patella for estimation of sex in a contemporary Spanish population. Six parameters were measured on 106 individuals (55 males and 51 females), ranging in age from 22 to 85 years, from the Granada Osteological Collection. The statistical analyses showed that all variables were sexually dimorphic. Discriminant function score equations were generated for use in sex determination. The overall accuracy of sex classification ranged from 75.2% to 84.8% for the direct method and from 75.5% to 83.8% for the stepwise method. When the South African White discriminant functions were applied to the Spanish sample, they showed high accuracy rates for sexing female patellae (90%-95.9%) and low accuracy rates for sexing male patellae (52.7%-58.2%). When the South African Black discriminant functions were applied to the Spanish sample, they showed high accuracy rates for sexing male patellae (90.9%) and low accuracy rates for sexing female patellae (70%-75.5%). The patella was shown to be useful for sex determination in the contemporary Spanish population. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
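A discriminant function of this kind can be sketched as a Fisher linear discriminant on two synthetic patella measurements. The means, spreads, and measurement names below are invented; only the group sizes (55 males, 51 females) echo the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic patella measurements in mm (toy values): males larger on average.
n_m, n_f = 55, 51
males = rng.normal([42.0, 44.0], 2.5, size=(n_m, 2))    # e.g., height, width
females = rng.normal([38.0, 40.0], 2.5, size=(n_f, 2))

# Fisher discriminant: w = Sw^-1 (mu_m - mu_f), cutoff midway between means.
mu_m, mu_f = males.mean(0), females.mean(0)
Sw = np.cov(males.T) * (n_m - 1) + np.cov(females.T) * (n_f - 1)  # pooled scatter
w = np.linalg.solve(Sw, mu_m - mu_f)
cut = w @ (mu_m + mu_f) / 2

# Classify by which side of the cutoff the discriminant score falls.
pred_m = males @ w > cut
pred_f = females @ w > cut
acc = (pred_m.sum() + (~pred_f).sum()) / (n_m + n_f)
print(round(acc, 3))
```

The population-specificity problem the abstract reports (South African functions misclassifying Spanish patellae) corresponds to applying a w and cutoff derived from one population's means and scatter to samples drawn from another.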
The potential of multiparametric MRI of the breast
Pinker, Katja; Helbich, Thomas H
2017-01-01
MRI is an essential tool in breast imaging, with multiple established indications. Dynamic contrast-enhanced MRI (DCE-MRI) is the backbone of any breast MRI protocol and has an excellent sensitivity and good specificity for breast cancer diagnosis. DCE-MRI provides high-resolution morphological information, as well as some functional information about neoangiogenesis as a tumour-specific feature. To overcome limitations in specificity, several other functional MRI parameters have been investigated and the application of these combined parameters is defined as multiparametric MRI (mpMRI) of the breast. MpMRI of the breast can be performed at different field strengths (1.5–7 T) and includes both established (diffusion-weighted imaging, MR spectroscopic imaging) and novel MRI parameters (sodium imaging, chemical exchange saturation transfer imaging, blood oxygen level-dependent MRI), as well as hybrid imaging with positron emission tomography (PET)/MRI and different radiotracers. Available data suggest that multiparametric imaging using different functional MRI and PET parameters can provide detailed information about the underlying oncogenic processes of cancer development and progression and can provide additional specificity. This article will review the current and emerging functional parameters for mpMRI of the breast for improved diagnostic accuracy in breast cancer. PMID:27805423
Mirea, Oana; Pagourelias, Efstathios D; Duchenne, Jurgen; Bogaert, Jan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe
2018-01-01
The purpose of this study was to compare the accuracy of vendor-specific and independent strain analysis tools to detect regional myocardial function abnormality in a clinical setting. Speckle tracking echocardiography has been considered a promising tool for the quantitative assessment of regional myocardial function. However, the potential differences among speckle tracking software with regard to their accuracy in identifying regional abnormality has not been studied extensively. Sixty-three subjects (5 healthy volunteers and 58 patients) were examined with 7 different ultrasound machines during 5 days. All patients had experienced a previous myocardial infarction, which was characterized by cardiac magnetic resonance with late gadolinium enhancement. Segmental peak systolic (PS), end-systolic (ES) and post-systolic strain (PSS) measurements were obtained with 6 vendor-specific software tools and 2 independent strain analysis tools. Strain parameters were compared between fully scarred and scar-free segments. Receiver-operating characteristic curves testing the ability of strain parameters and derived indexes to discriminate between these segments were compared among vendors. The average strain values calculated for normal segments ranged from -15.1% to -20.7% for PS, -14.9% to -20.6% for ES, and -16.1% to -21.4% for PSS. Significantly lower values of strain (p < 0.05) were found in segments with transmural scar by all vendors, with values ranging from -7.4% to -11.1% for PS, -7.7% to -10.8% for ES, and -10.5% to -14.3% for PSS. Accuracy in identifying transmural scar ranged from acceptable to excellent (area under the curve 0.74 to 0.83 for PS and ES and 0.70 to 0.78 for PSS). Significant differences were found among vendors (p < 0.05). All vendors had a significantly lower accuracy to detect scars in the basal segments compared with scars in the apex (p < 0.05). The accuracy of identifying regional abnormality differs significantly among vendors. 
Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T
2017-07-01
Our previous works have found that quantitative analysis of 123I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify the innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with 123I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth so that the estimated input function approximates its real value and thus the procedure yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new protocol of 30 min was built with a shorter acquisition time that maintained a 5% error in estimating rate constants of the compartment model. This was evaluated through digital simulations.
The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123I-MIBG. Compared to a uniform interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform intervals achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (Distribution Volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
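The intuition behind nonuniform acquisition can be sketched with a mono-exponential washout curve: sampling densely where the curve changes fastest and sparsely later still recovers the rate constant accurately from the same total number of samples. The rate constant, noise level, and schedules below are invented toys, not the actual compartment model or protocol.

```python
import numpy as np

rng = np.random.default_rng(7)

k_true = 0.15            # washout rate constant (1/min), toy value

def tac(t):
    # Mono-exponential time-activity curve.
    return np.exp(-k_true * t)

def fit_k(t):
    """Estimate k by log-linear least squares from noisy samples at times t."""
    y = tac(t) * (1 + 0.02 * rng.normal(size=t.size))
    slope, _ = np.polyfit(t, np.log(y), 1)
    return -slope

# Nonuniform schedule: dense where the curve changes fast, sparse later.
t_nonuni = np.concatenate([np.linspace(0.5, 5, 20), np.linspace(6, 30, 10)])
# Uniform schedule with the same number of samples.
t_uni = np.linspace(0.5, 30, 30)

err_nonuni = abs(fit_k(t_nonuni) - k_true) / k_true
err_uni = abs(fit_k(t_uni) - k_true) / k_true
print(round(err_nonuni, 4), round(err_uni, 4))
```

The paper's actual criteria (relative entropy against the blood-pool TAC and D-optimality of the parameter covariance) are more principled versions of this idea, placing B-spline knots where they most constrain the kinetic parameters.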
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced-order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
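A minimal sketch of the kind of active learning loop described above, assuming a one-dimensional configuration parameter and a made-up run-time function: a Gaussian process posterior is refit after each experiment, and the next experiment is the candidate with the largest predictive variance. This is an illustration of the idea, not the authors' framework.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel on scalar inputs."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP with RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# active learning: repeatedly run the most uncertain configuration
runtime = lambda x: np.sin(3 * x) + 0.5 * x   # hypothetical measured run time
x_tr = np.array([0.1, 1.0, 1.9])              # initial experiments
y_tr = runtime(x_tr)
cand = np.linspace(0.0, 2.0, 201)             # candidate configurations
for _ in range(5):
    mu, var = gp_posterior(x_tr, y_tr, cand)
    x_next = cand[np.argmax(var)]             # highest predictive variance
    x_tr = np.append(x_tr, x_next)
    y_tr = np.append(y_tr, runtime(x_next))
```

After a few acquisitions the posterior variance over the candidate grid drops everywhere, i.e., the performance landscape is mapped with few experiments.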
USDA-ARS?s Scientific Manuscript database
Saturated hydraulic conductivity Ksat is a fundamental characteristic in modeling flow and contaminant transport in soils and sediments. Therefore, many models have been developed to estimate Ksat from easily measureable parameters, such as textural properties, bulk density, etc. However, Ksat is no...
The conical scanner evaluation system design
NASA Technical Reports Server (NTRS)
Cumella, K. E.; Bilanow, S.; Kulikov, I. B.
1982-01-01
The software design for the conical scanner evaluation system is presented. The purpose of this system is to support the performance analysis of the LANDSAT-D conical scanners, which are infrared horizon detection attitude sensors designed for improved accuracy. The system consists of six functionally independent subsystems and five interface data bases. The structure and interfaces of each subsystem are described, and the content, format, and file structure of each data base are specified. For each subsystem, the functional logic, the control parameters, the baseline structure, and each of the subroutines are described. The subroutine descriptions include a procedure definition and the input and output parameters.
Petersen, Nick; Perrin, David; Newhauser, Wayne; Zhang, Rui
2017-01-01
The purpose of this study was to evaluate the impact of selected configuration parameters that govern multileaf collimator (MLC) transmission and rounded leaf offset in a commercial treatment planning system (TPS) (Pinnacle3, Philips Medical Systems, Andover, MA, USA) on the accuracy of intensity-modulated radiation therapy (IMRT) dose calculation. The MLC leaf transmission factor was modified based on measurements made with ionization chambers. The table of parameters containing rounded-leaf-end offset values was modified by measuring the radiation field edge as a function of leaf bank position with an ionization chamber in a scanning water-tank dosimetry system and comparing the locations to those predicted by the TPS. The modified parameter values were validated by performing IMRT quality assurance (QA) measurements on 19 gantry-static IMRT plans. Planar dose measurements were performed with radiographic film and a diode array (MapCHECK2) and compared to TPS-calculated dose distributions using default and modified configuration parameters. Based on measurements, the leaf transmission factor was changed from a default value of 0.001 to 0.005. Surprisingly, this modification resulted in a small but statistically significant worsening of the IMRT QA gamma-index passing rate, which revealed that the overall dosimetric accuracy of the TPS depends on multiple configuration parameters in a manner that is coupled and not intuitive because of the commissioning protocol used in our clinic. The rounded-leaf-offset table had little room for improvement, with the average difference between the default and modified offset values being -0.2 ± 0.7 mm. While our results depend on the current clinical protocols, treatment unit, and TPS used, the methodology used in this study is generally applicable. Different clinics could potentially obtain different results and improve their dosimetric accuracy using our approach.
NASA Astrophysics Data System (ADS)
Lee, Hyunki; Kim, Min Young; Moon, Jeon Il
2017-12-01
Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence, or 2π-ambiguity, problem. Although a sensing method that combines well-known stereo vision with the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from the two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
Rahaman, Obaidur; Estrada, Trilce P.; Doren, Douglas J.; Taufer, Michela; Brooks, Charles L.; Armen, Roger S.
2011-01-01
The performance of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for “step 2 discrimination” were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only “interacting” ligand atoms as the “effective size” of the ligand, and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and five-fold cross validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new dataset (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ dataset where the number of ligand heavy atoms ranged from 17 to 35. 
This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts. PMID:21644546
CO2 laser ranging systems study
NASA Technical Reports Server (NTRS)
Filippi, C. A.
1975-01-01
The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.
A Parametric Rosetta Energy Function Analysis with LK Peptides on SAM Surfaces.
Lubin, Joseph H; Pacella, Michael S; Gray, Jeffrey J
2018-05-08
Although structures have been determined for many soluble proteins and an increasing number of membrane proteins, experimental structure determination methods are limited for complexes of proteins and solid surfaces. An economical alternative or complement to experimental structure determination is molecular simulation. Rosetta is one software suite that models protein-surface interactions, but Rosetta is normally benchmarked on soluble proteins. For surface interactions, the validity of the energy function is uncertain because it is a combination of independent parameters from energy functions developed separately for solution proteins and mineral surfaces. Here, we assess the performance of the RosettaSurface algorithm and test the accuracy of its energy function by modeling the adsorption of leucine/lysine (LK)-repeat peptides on methyl- and carboxy-terminated self-assembled monolayers (SAMs). We investigated how RosettaSurface predictions for this system compare with the experimental results, which showed that on both surfaces, LK-α peptides folded into helices and LK-β peptides held extended structures. Utilizing this model system, we performed a parametric analysis of Rosetta's Talaris energy function and determined that adjusting solvation parameters offered improved predictive accuracy. Simultaneously increasing lysine carbon hydrophilicity and the hydrophobicity of the surface methyl head groups yielded computational predictions most closely matching the experimental results. De novo models still should be interpreted skeptically unless bolstered in an integrative approach with experimental data.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
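The smearing described above can be illustrated numerically by superposing Gaussian point-spread functions along the motion segment and computing the intensity-weighted centroid; for a symmetric, untruncated spot the centroid falls at the segment midpoint. This toy numpy sketch is not the paper's analytical line-segment spread model.

```python
import numpy as np

def smeared_spot(size, x0, y0, vx, t_exp, sigma, steps=200):
    """Energy distribution of a star spot that moves along x during the
    exposure: superpose Gaussians along the motion segment (a discrete
    toy version of a line-segment spread function)."""
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for s in np.linspace(0.0, 1.0, steps):
        cx = x0 + vx * t_exp * s
        img += np.exp(-((xx - cx) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return img / img.sum()

def centroid(img):
    """Intensity-weighted centroid of a normalized image."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((img * xx).sum()), float((img * yy).sum())

# spot starts at x = 12 and smears to x = 16; centroid lands mid-segment
img = smeared_spot(32, 12.0, 16.0, vx=2.0, t_exp=2.0, sigma=1.2)
cx, cy = centroid(img)
```

In practice the centroiding error comes from noise, pixelation, and window truncation of this smeared distribution, which is what the analytical expression in the paper quantifies.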
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
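As a sketch of the parametric fit discussed above, a cumulative-Gaussian psychometric function can be fit by maximum likelihood; here a brute-force grid search stands in for the numeric optimizers (GLM, Nelder-Mead) mentioned in the abstract, and the simulated observer is invented for illustration. Note the stimuli are sampled uniformly, not adaptively, so the spread-parameter bias the abstract describes does not appear here.

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)

def norm_cdf(x):
    """Standard normal CDF, vectorized via math.erf."""
    return 0.5 * (1.0 + _erf(np.asarray(x, float) / np.sqrt(2.0)))

def neg_log_like(mu, sigma, stim, resp):
    """Negative log-likelihood of yes/no data under a cumulative-Gaussian
    psychometric function with location mu and spread sigma."""
    p = np.clip(norm_cdf((stim - mu) / sigma), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

# simulate a non-adaptive (uniformly sampled) observer: mu = 0, sigma = 1
rng = np.random.default_rng(1)
stim = rng.uniform(-3.0, 3.0, 400)
resp = (rng.random(400) < norm_cdf(stim)).astype(int)

# maximum-likelihood fit by grid search over (mu, sigma)
mus = np.linspace(-1.0, 1.0, 41)
sigmas = np.linspace(0.3, 3.0, 41)
nll = np.array([[neg_log_like(m, s, stim, resp) for s in sigmas] for m in mus])
i, j = np.unravel_index(nll.argmin(), nll.shape)
mu_hat, sigma_hat = mus[i], sigmas[j]
```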
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.
Blumberg, Leonid M
2017-03-31
If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling, where ΔS(T), ΔH(T), and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_ref), the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special, chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
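To make the two models concrete: the van't Hoff form treats ΔS and ΔH as constants, while a 3-parameter extension lets them vary with T. The constant-heat-capacity form below is one common way to write such a model; it is an illustrative sketch with invented parameter values, not necessarily the exact parameterization used in the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def lnK_vant_hoff(T, dS, dH):
    """Classic 2-parameter model: ln K = dS/R - dH/(R T), with dS and dH
    treated as temperature-independent constants."""
    return dS / R - dH / (R * T)

def lnK_three_param(T, T_ref, dS_ref, dH_ref, dCp):
    """3-parameter extension with a constant heat-capacity term:
    dH(T) = dH_ref + dCp (T - T_ref), dS(T) = dS_ref + dCp ln(T / T_ref)."""
    dH = dH_ref + dCp * (T - T_ref)
    dS = dS_ref + dCp * np.log(T / T_ref)
    return dS / R - dH / (R * T)

# with dCp = 0 the 3-parameter model collapses to the van't Hoff form
lnK_a = lnK_vant_hoff(380.0, -100.0, -6.0e4)
lnK_b = lnK_three_param(380.0, 400.0, -100.0, -6.0e4, 0.0)
```

A K-centric reparameterization would express the same curve through the temperature at which K reaches a chosen reference value, rather than through values at a fixed T_ref.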
Prediction of Spirometric Forced Expiratory Volume (FEV1) Data Using Support Vector Regression
NASA Astrophysics Data System (ADS)
Kavitha, A.; Sujatha, C. M.; Ramakrishnan, S.
2010-01-01
In this work, prediction of forced expiratory volume in 1 second (FEV1) in pulmonary function testing is carried out using a spirometer and support vector regression analysis. Pulmonary function data are measured with a flow-volume spirometer from volunteers (N=175) using a standard data acquisition protocol. The acquired data are then used to predict FEV1. Support vector machines with polynomial kernel functions of four different orders were employed to predict the values of FEV1. The performance is evaluated by computing the average prediction accuracy for normal and abnormal cases. Results show that support vector machines are capable of predicting FEV1 in both normal and abnormal cases, and the average prediction accuracy for normal subjects was higher than that of abnormal subjects. Accuracy in prediction was found to be high for a regularization constant of C=10. Since FEV1 is the most significant parameter in the analysis of spirometric data, it appears that this method of assessment is useful in diagnosing pulmonary abnormalities with incomplete data and data with poor recording.
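A rough stand-in for the polynomial-kernel regression described above: kernel ridge regression with a polynomial kernel (simpler than true epsilon-insensitive SVR, but the same kernel-based idea). The features and target below are synthetic, not spirometric data.

```python
import numpy as np

def poly_kernel(A, B, degree=2, c=1.0):
    """Polynomial kernel (x.y + c)^degree."""
    return (A @ B.T + c) ** degree

def fit(X, y, degree=2, lam=1e-3):
    """Polynomial-kernel ridge regression: solve (K + lam I) alpha = y."""
    K = poly_kernel(X, X, degree)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, degree=2):
    return poly_kernel(X_new, X_train, degree) @ alpha

# synthetic, scaled features standing in for spirometric inputs,
# with an FEV1-like target that is quadratic in the features
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (60, 3))
y = 1.5 * X[:, 2] + 0.3 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(60)
alpha = fit(X, y)
pred = predict(X, alpha, X)
```

The regularization strength `lam` plays the role the constant C plays in SVR (inversely): it trades training fit against smoothness.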
de Sá, Paula Morisco; Castro, Hermano Albuquerque; Lopes, Agnaldo José; Melo, Pedro Lopes de
2016-01-01
The current reference test for the detection of respiratory abnormalities in asbestos-exposed workers is spirometry. However, spirometry has several shortcomings that greatly affect the efficacy of current asbestos control programs. The forced oscillation technique (FOT) represents the current state-of-the-art technique in the assessment of lung function. This method provides a detailed analysis of respiratory resistance and reactance at different oscillatory frequencies during tidal breathing. Here, we evaluate the FOT as an alternative method to standard spirometry for the early detection and quantification of respiratory abnormalities in asbestos-exposed workers. Seventy-two subjects were analyzed. The control group was composed of 33 subjects with a normal spirometric exam who had no history of smoking or pulmonary disease. Thirty-nine subjects exposed to asbestos were also studied, including 32 volunteers in radiological category 0/0 and 7 volunteers with radiological categories of 0/1 or 1/1. FOT data were interpreted using classical parameters as well as integer (InOr) and fractional-order (FrOr) modeling. The diagnostic accuracy was evaluated by investigating the area under the receiver operating characteristic curve (AUC). Exposed workers presented increased obstruction (resistance p<0.001) and a reduced compliance (p<0.001), with a predominance of obstructive changes. The FOT parameter changes were correlated with the standard pulmonary function analysis methods (R = -0.52, p<0.001). Early respiratory abnormalities were identified with a high diagnostic accuracy (AUC = 0.987) using parameters obtained from the FrOr modeling. This accuracy was significantly better than those obtained with classical (p<0.001) and InOr (p<0.001) model parameters. The FOT improved our knowledge about the biomechanical abnormalities in workers exposed to asbestos. 
Additionally, a high diagnostic accuracy in the diagnosis of early respiratory abnormalities in asbestos-exposed workers was obtained. This makes the FOT particularly useful as a screening tool in the context of asbestos control and elimination. Moreover, it can facilitate epidemiological research and the longitudinal follow-up of asbestos exposure and asbestos-related diseases.
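The AUC figures quoted above can be computed without any explicit ROC-curve construction, via the Mann-Whitney interpretation of the AUC. The subject scores below are invented for illustration.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive case scores above a random
    negative case (ties count half)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# invented model-parameter values for exposed workers vs. controls
exposed = [0.9, 0.8, 0.85, 0.7, 0.95]
control = [0.3, 0.5, 0.4, 0.6, 0.2]
area = auc(exposed, control)   # perfect separation here gives 1.0
```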
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
New first- and second-order displacement transfer functions have been developed for deformed shape calculations of nonuniform cross-sectional beam structures such as aircraft wings. The displacement transfer functions are expressed explicitly in terms of beam geometrical parameters and surface strains (uniaxial bending strains) obtained at equally spaced strain stations along the surface of the beam structure. By inputting the measured or analytically calculated surface strains into the displacement transfer functions, one can calculate local slopes, deflections, and cross-sectional twist angles of the nonuniform beam structure for mapping the overall structural deformed shapes for visual display. The accuracy of deformed shape calculations by the first- and second-order displacement transfer functions is determined by comparing these values to the analytically predicted values obtained from finite element analyses. This comparison shows that the new displacement transfer functions can quite accurately calculate the deformed shapes of tapered cantilever tubular beams with different taper angles. The accuracy of the present displacement transfer functions is also compared to that of the previously developed displacement transfer functions.
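The basic idea behind strain-based shape sensing can be sketched numerically: surface bending strain divided by the local half-depth gives curvature, and two trapezoidal integrations give slope and deflection along the beam. This is a simplified numeric analogue under stated assumptions (clamped root, pure bending), not the NASA displacement transfer functions themselves.

```python
import numpy as np

def deflection_from_strains(x, strain, half_depth):
    """Curvature = surface strain / half-depth; integrate twice with the
    trapezoidal rule (cantilever clamped at x[0]: zero slope and
    deflection there)."""
    curvature = np.asarray(strain, float) / np.asarray(half_depth, float)
    dx = np.diff(x)
    slope = np.concatenate(([0.0], np.cumsum(0.5 * (curvature[1:] + curvature[:-1]) * dx)))
    defl = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
    return slope, defl

# uniform strain on a uniform beam: deflection grows as k x^2 / 2
x = np.linspace(0.0, 1.0, 201)   # strain-station positions, m
eps = 1e-3 * np.ones_like(x)     # surface bending strain
c = 0.01 * np.ones_like(x)       # half-depth of cross section, m
slope, defl = deflection_from_strains(x, eps, c)
```

For a nonuniform (tapered) beam one simply supplies the varying half-depth profile; the integration is unchanged.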
Ravald, L; Fornstedt, T
2001-01-26
The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were performed on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fitting to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
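For reference, the bi-Langmuir isotherm combines two independent Langmuir terms, often interpreted as nonselective and enantioselective adsorption sites; at low concentration it reduces to a linear isotherm with slope a1 + a2. A minimal sketch with invented parameter values:

```python
import numpy as np

def bi_langmuir(c, a1, b1, a2, b2):
    """Bi-Langmuir isotherm: q(c) = a1 c/(1 + b1 c) + a2 c/(1 + b2 c),
    i.e., two independent Langmuir site types."""
    c = np.asarray(c, float)
    return a1 * c / (1 + b1 * c) + a2 * c / (1 + b2 * c)

# invented parameters; the low-concentration slope approaches a1 + a2 = 12
conc = 1e-6
q = bi_langmuir(conc, a1=10.0, b1=0.5, a2=2.0, b2=20.0)
```

The relative weight of the two terms (here a1 versus a2, b1 versus b2) is exactly the kind of composition that the abstract reports as driving the ECP model error.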
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Accuracy of Time Phasing Aircraft Development using the Continuous Distribution Function
2015-03-26
Breusch-Pagan tests fail to reject the null hypothesis of constant variance (reported p-values of 0.5264 and 0.6911, and 0.5176 for the Weibull scale parameter β); a Shapiro-Wilk W test reports Prob. < W of 0.9849; the Beta shape parameter α is noted for influential data.
Analytic modeling of aerosol size distributions
NASA Technical Reports Server (NTRS)
Deepack, A.; Box, G. P.
1979-01-01
Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
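As one concrete example of the functions discussed, the lognormal number size distribution can be fit to sampled diameters by the method of moments in log space. A minimal sketch with synthetic data; the parameter values are invented.

```python
import numpy as np

def lognormal_pdf(d, n_total, d_g, sigma_g):
    """Lognormal number size distribution dN/d(ln D), with geometric mean
    diameter d_g and geometric standard deviation sigma_g."""
    return (n_total / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(d / d_g) / np.log(sigma_g)) ** 2))

def fit_lognormal(diameters):
    """Method-of-moments fit in log space: geometric mean diameter and
    geometric standard deviation of the sample."""
    ln_d = np.log(diameters)
    return np.exp(ln_d.mean()), np.exp(ln_d.std())

# synthetic sample drawn from a lognormal with d_g = 0.1 um, sigma_g = 1.8
rng = np.random.default_rng(3)
sample = np.exp(rng.normal(np.log(0.1), np.log(1.8), 5000))
d_g_hat, sigma_g_hat = fit_lognormal(sample)
```

Other candidate models (modified gamma, power law) can be fit the same way and compared visually, which is essentially the catalog-matching procedure the abstract describes.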
Mouton, Stéphanie; Ridon, Héléne; Fertin, Marie; Pentiah, Anju Duva; Goémine, Céline; Petyt, Grégory; Lamblin, Nicolas; Coisne, Augustin; Foucher-Hossein, Claude; Montaigne, David; de Groote, Pascal
2017-10-15
Right ventricular (RV) systolic function is a powerful prognostic factor in patients with systolic heart failure. The accurate estimation of RV function remains difficult. The aim of the study was to determine the diagnostic accuracy of 2D speckle-tracking RV strain in patients with systolic heart failure, analyzing both the free and posterolateral walls. Seventy-six patients with dilated cardiomyopathy (left ventricular end-diastolic volume ≥ 75 ml/m2) and left ventricular ejection fraction ≤ 45% underwent RV strain analysis. The feasibility, reproducibility, and diagnostic accuracy of RV strain were analyzed and compared to other echocardiographic parameters of RV function. RV dysfunction was defined as an RV ejection fraction ≤ 40% measured by radionuclide angiography. RV strain feasibility was 93.9% for the free wall and 79.8% for the posterolateral wall. RV strain reproducibility was good (intra-observer and inter-observer bias and limits of agreement of 0.16 ± 1.2% [-2.2-2.5] and 0.84 ± 2.4 [-5.5-3.8], respectively). Patients with left heart failure have RV systolic dysfunction that can be unmasked by advanced echocardiographic imaging: mean RV strain was -21 ± 5.7% in patients without RV dysfunction and -15.8 ± 5.1% in patients with RV dysfunction (p = 0.0001). Mean RV strain showed the highest diagnostic accuracy to predict depressed RVEF (area under the curve (AUC) 0.75), with moderate sensitivity (60.5%) but high specificity (87.5%) using a cutoff value of -16%. RV strain seems to be a promising and more efficient measure than previous RV echocardiographic parameters for the diagnosis of RV systolic dysfunction. Copyright © 2017 Elsevier B.V. All rights reserved.
Busk, P K; Pilgaard, B; Lezyk, M J; Meyer, A S; Lange, L
2017-04-12
Carbohydrate-active enzymes are found in all organisms and participate in key biological processes. These enzymes are classified into 274 families in the CAZy database, but the sequence diversity within each family makes it a major task to identify new family members and to provide a basis for prediction of enzyme function. A fast and reliable method for de novo annotation of genes encoding carbohydrate-active enzymes is to identify conserved peptides in the curated enzyme families, followed by matching of the conserved peptides to the sequence of interest, as demonstrated for the glycosyl hydrolase and the lytic polysaccharide monooxygenase families. This approach not only assigns the enzymes to families but also provides functional prediction of the enzymes with high accuracy. We identified conserved peptides for all enzyme families in the CAZy database with Peptide Pattern Recognition. The conserved peptides were matched to protein sequences for de novo annotation and functional prediction of carbohydrate-active enzymes with the Hotpep method. Annotation of protein sequences from 12 bacterial and 16 fungal genomes to families with Hotpep had an accuracy of 0.84 (measured as F1-score) compared to semiautomatic annotation by the CAZy database, whereas the dbCAN HMM-based method had an accuracy of 0.77 with optimized parameters. Furthermore, Hotpep provided a functional prediction with 86% accuracy for the annotated genes. Hotpep is available as a stand-alone application for MS Windows. Hotpep is a state-of-the-art method for automatic annotation and functional prediction of carbohydrate-active enzymes.
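The conserved-peptide matching step can be sketched as plain substring counting, with family assignment going to the family with the highest hit count; the F1-score used as the accuracy measure is the harmonic mean of precision and recall. The peptide lists, family names, and sequence below are invented for illustration and are not from the CAZy database.

```python
def peptide_hits(sequence, peptides):
    """Count how many conserved peptides occur in a protein sequence
    (the core matching step of a Hotpep-like scheme)."""
    return sum(1 for p in peptides if p in sequence)

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical family peptide lists and query sequence
family_peptides = {"FAM_A": ["WDAGH", "TRYYD"], "FAM_B": ["HGPVQ", "NYRIC"]}
seq = "MKTWDAGHAATRYYDLL"
best = max(family_peptides, key=lambda f: peptide_hits(seq, family_peptides[f]))
score = f1_score(tp=84, fp=16, fn=16)   # precision = recall = 0.84
```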
Inclusive production of small radius jets in heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Zhong-Bo; Ringer, Felix; Vitev, Ivan
Here, we develop a new formalism to describe the inclusive production of small radius jets in heavy-ion collisions, which is consistent with jet calculations in the simpler proton-proton system. Only at next-to-leading order (NLO) and beyond can the jet radius parameter R and the jet algorithm dependence of the jet cross section be studied and a meaningful comparison to experimental measurements made. We are able to consistently achieve NLO accuracy by making use of the recently developed semi-inclusive jet functions within Soft Collinear Effective Theory (SCET). Additionally, single logarithms of the jet size parameter, α_s^n ln^n R, are resummed to next-to-leading logarithmic (NLL_R) accuracy in proton-proton collisions. The medium-modified semi-inclusive jet functions are obtained within the framework of SCET with Glauber gluons that describe the interaction of jets with the medium. We also present numerical results for the suppression of inclusive jet cross sections in heavy-ion collisions at the LHC, and the formalism developed here can be extended directly to corresponding jet substructure observables.
Inclusive production of small radius jets in heavy-ion collisions
Kang, Zhong-Bo; Ringer, Felix; Vitev, Ivan
2017-03-31
Here, we develop a new formalism to describe the inclusive production of small radius jets in heavy-ion collisions, which is consistent with jet calculations in the simpler proton–proton system. Only at next-to-leading order (NLO) and beyond, the jet radius parameter R and the jet algorithm dependence of the jet cross section can be studied and a meaningful comparison to experimental measurements is possible. We are able to consistently achieve NLO accuracy by making use of the recently developed semi-inclusive jet functions within Soft Collinear Effective Theory (SCET). Additionally, single logarithms of the jet size parameter αmore » $$n\\atop{s}$$ln nR leading logarithmic (NLL R) accuracy in proton–proton collisions. The medium modified semi-inclusive jet functions are obtained within the framework of SCET with Glauber gluons that describe the interaction of jets with the medium. We also present numerical results for the suppression of inclusive jet cross sections in heavy ion collisions at the LHC and the formalism developed here can be extended directly to corresponding jet substructure observables.« less
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang
In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
Application of data fusion technology based on D-S evidence theory in fire detection
NASA Astrophysics Data System (ADS)
Cai, Zhishan; Chen, Musheng
2015-12-01
Judgment and identification based on a single fire characteristic parameter in fire detection is subject to environmental disturbances, and accordingly its detection performance is limited, with increased false positive and false negative rates. The compound fire detector employs information fusion technology to judge and identify multiple fire characteristic parameters in order to improve the reliability and accuracy of fire detection. The D-S evidence theory is applied to the multi-sensor data fusion: first, normalize the data from all sensors to obtain the normalized basic probability function of fire occurrence; then conduct the fusion processing using the D-S evidence theory; finally, give the judgment results. The results show that the method meets the goal of accurate fire signal identification and increases the accuracy of fire alarms, and therefore is simple and effective.
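The fusion step described above can be sketched with Dempster's rule of combination over two sensors whose readings have been normalized into basic probability assignments (BPAs); the sensor masses below are illustrative, not taken from the paper:

```python
# Dempster's rule of combination for two BPAs over the frame
# {fire, no_fire} plus the ignorance set. Masses are illustrative.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two BPAs keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalize by the non-conflicting mass 1 - K
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FIRE, NO_FIRE = frozenset({"fire"}), frozenset({"no_fire"})
THETA = FIRE | NO_FIRE  # total ignorance

smoke = {FIRE: 0.6, NO_FIRE: 0.1, THETA: 0.3}  # smoke sensor BPA
temp = {FIRE: 0.7, NO_FIRE: 0.1, THETA: 0.2}   # temperature sensor BPA
fused = dempster_combine(smoke, temp)
print(fused[FIRE] > smoke[FIRE] and fused[FIRE] > temp[FIRE])  # fusion sharpens belief
```

Combining the two sensors raises the belief in "fire" above either sensor alone, which is the mechanism by which compound detection reduces false alarms.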
Universal fitting formulae for baryon oscillation surveys
NASA Astrophysics Data System (ADS)
Blake, Chris; Parkinson, David; Bassett, Bruce; Glazebrook, Karl; Kunz, Martin; Nichol, Robert C.
2006-01-01
The next generation of galaxy surveys will attempt to measure the baryon oscillations in the clustering power spectrum with high accuracy. These oscillations encode a preferred scale which may be used as a standard ruler to constrain cosmological parameters and dark energy models. In this paper we present simple analytical fitting formulae for the accuracy with which the preferred scale may be determined in the tangential and radial directions by future spectroscopic and photometric galaxy redshift surveys. We express these accuracies as a function of survey parameters such as the central redshift, volume, galaxy number density and (where applicable) photometric redshift error. These fitting formulae should greatly increase the efficiency of optimizing future surveys, which requires analysis of a potentially vast number of survey configurations and cosmological models. The formulae are calibrated using a grid of Monte Carlo simulations, which are analysed by dividing out the overall shape of the power spectrum before fitting a simple decaying sinusoid to the oscillations. The fitting formulae reproduce the simulation results with a fractional scatter of 7 per cent (10 per cent) in the tangential (radial) directions over a wide range of input parameters. We also indicate how sparse-sampling strategies may enhance the effective survey area if the sampling scale is much smaller than the projected baryon oscillation scale.
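The fitting procedure described above, dividing out the smooth power-spectrum shape and then fitting a decaying sinusoid to the oscillations, can be sketched as follows; the parameterization and all numbers are illustrative stand-ins, not the paper's calibrated formulae:

```python
import numpy as np

def fit_bao_scale(k, ratio, s_grid, kd=0.13):
    """Least-squares fit of 1 + A*sin(k*s)*exp(-(k/kd)^2) to P/P_smooth."""
    y = ratio - 1.0
    best = (np.inf, None, None)
    for s in s_grid:
        b = np.sin(k * s) * np.exp(-(k / kd) ** 2)
        A = (y @ b) / (b @ b)          # optimal amplitude for this trial scale
        sse = np.sum((y - A * b) ** 2)
        if sse < best[0]:
            best = (sse, s, A)
    return best[1], best[2]

k = np.linspace(0.02, 0.3, 200)                       # wavenumber grid
truth = 1 + 0.05 * np.sin(k * 150.0) * np.exp(-(k / 0.13) ** 2)
s_hat, A_hat = fit_bao_scale(k, truth, np.arange(140.0, 160.5, 0.5))
print(s_hat)  # recovers the input scale of 150 on noiseless data
```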
Time Perception and Depressive Realism: Judgment Type, Psychophysical Functions and Bias
Kornbrot, Diana E.; Msetfi, Rachel M.; Grimwood, Melvyn J.
2013-01-01
The effect of mild depression on time estimation and production was investigated. Participants made both magnitude estimation and magnitude production judgments for five time intervals (specified in seconds) from 3 sec to 65 sec. The parameters of the best fitting psychophysical function (power law exponent, intercept, and threshold) were determined individually for each participant in every condition. There were no significant effects of mood (high BDI, low BDI) or judgment (estimation, production) on the mean exponent, n = .98, 95% confidence interval (.96–1.04) or on the threshold. However, the intercept showed a ‘depressive realism’ effect, where high BDI participants had a smaller deviation from accuracy and a smaller difference between estimation and judgment than low BDI participants. Accuracy bias was assessed using three measures of accuracy: difference, defined as psychological time minus physical time, ratio, defined as psychological time divided by physical time, and a new logarithmic accuracy measure defined as ln (ratio). The ln (ratio) measure was shown to have approximately normal residuals when subjected to a mixed ANOVA with mood as a between groups explanatory factor and judgment and time category as repeated measures explanatory factors. The residuals of the other two accuracy measures flagrantly violated normality. The mixed ANOVAs of accuracy also showed a strong depressive realism effect, just like the intercepts of the psychophysical functions. There was also a strong negative correlation between estimation and production judgments. Taken together these findings support a clock model of time estimation, combined with additional cognitive mechanisms to account for the depressive realism effect. The findings also suggest strong methodological recommendations. PMID:23990960
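The power-law fit and the ln(ratio) accuracy measure can be sketched as follows, assuming illustrative data rather than the study's measurements:

```python
import math

# Sketch: a Stevens-type power law psi = a * t^n fitted on log-log axes,
# and the logarithmic accuracy measure ln(psi / t). Data are hypothetical.

def fit_power_law(t, psi):
    """Return (exponent n, intercept a) via least squares in log-log space."""
    xs = [math.log(v) for v in t]
    ys = [math.log(v) for v in psi]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - n * mx)
    return n, a

def ln_ratio_accuracy(judged, actual):
    """ln(psychological time / physical time); 0 means perfect accuracy."""
    return math.log(judged / actual)

t = [3, 7, 16, 34, 65]               # physical intervals (s)
psi = [2.9, 6.8, 15.5, 33.0, 64.0]   # hypothetical judged durations (s)
n, a = fit_power_law(t, psi)
print(round(n, 2), round(ln_ratio_accuracy(psi[0], t[0]), 3))
```

A near-unity exponent with small negative ln(ratio) values corresponds to the slight, systematic underestimation pattern the accuracy analysis above is designed to detect.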
Bayesian Estimation of Combined Accuracy for Tests with Verification Bias
Broemeling, Lyle D.
2011-01-01
This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not subjected to the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either the “believe the positive” or the “believe the negative” rule; then the true and false positive fractions for each rule are computed for the two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
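The "believe the positive" (BP) and "believe the negative" (BN) combination rules can be sketched as below, assuming conditionally independent tests; the paper's Bayesian treatment of verification bias is not reproduced here:

```python
# BP: call the combination positive if either test is positive.
# BN: call it positive only if both tests are positive.
# Assumes conditional independence of the two tests given disease status.

def combine(se1, sp1, se2, sp2, rule="BP"):
    """Return (sensitivity, specificity) of the combined binary test."""
    if rule == "BP":
        se = 1 - (1 - se1) * (1 - se2)   # misses only if both tests miss
        sp = sp1 * sp2                   # must be negative on both
    else:  # "BN"
        se = se1 * se2
        sp = 1 - (1 - sp1) * (1 - sp2)
    return se, sp

se, sp = combine(0.80, 0.90, 0.70, 0.95, rule="BP")
print(round(se, 3), round(sp, 3))  # BP raises sensitivity, lowers specificity
```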
A fast and accurate method for perturbative resummation of transverse momentum-dependent observables
NASA Astrophysics Data System (ADS)
Kang, Daekyoung; Lee, Christopher; Vaidya, Varun
2018-04-01
We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates both obtained using the fast MCEM algorithm through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly with the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
Dong, Zhixu; Sun, Xingwei; Chen, Changzheng; Sun, Mengnan
2018-04-13
The inconvenient loading and unloading of a long and heavy drill pipe gives rise to the difficulty in measuring the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and implement a fast on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a certain axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate the data in real time. To better suppress random error interference and preserve real contour information, a new wavelet threshold function is proposed to process the data through wavelet threshold denoising. Discrete data after denoising are segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted by using the method of weighted total least squares (WTLS). Then, the thread parameters are calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and it can improve the data acquisition accuracy of a sensor. Simulation results indicate that the improved threshold function has better continuity and self-adaptability, which ensures the denoising effect while avoiding the complete elimination of real data distorted by random errors. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and a repeatability limit ≤ 4 μm (high repeatability), and hence the accuracy and efficiency of measurement are both improved.
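The paper's improved wavelet threshold function is not given in the abstract; as a hedged stand-in, the classic soft-threshold step of wavelet denoising, which the improved function is designed to refine, looks like this:

```python
import numpy as np

# Classic soft threshold: the baseline that "improved" threshold functions
# (like the one proposed in the paper) aim to smooth out. Coefficients
# below the threshold t are zeroed; larger ones are shrunk toward zero.

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])  # toy wavelet coefficients
print(soft_threshold(coeffs, 1.0))  # small coefficients zeroed, large ones shrunk by t
```

In a full pipeline this function is applied to the detail coefficients of a wavelet decomposition before reconstruction; improved variants trade the soft rule's constant bias against the hard rule's discontinuity.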
Optimization of cutting parameters for machining time in turning process
NASA Astrophysics Data System (ADS)
Mavliutov, A. R.; Zlotnikov, E. G.
2018-03-01
This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with the Dual-Simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal value of the linearized objective and the original function are the same. ALGA gives sufficiently accurate values; however, when the algorithm uses the Hybrid function with the Interior Point algorithm, the resulting values have the highest accuracy.
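A toy version of such a constrained cutting-parameter problem, solved here by plain grid search rather than the MATLAB solvers compared in the paper; the time formula is the standard turning relation, while the bounds and the roughness-style constraint are assumed for illustration:

```python
import itertools

# Minimize turning time T = (pi*D*L)/(1000*v*f) over cutting speed v (m/min)
# and feed f (mm/rev), subject to a toy surface-roughness constraint.
# All numbers are illustrative, not taken from the paper.

def machining_time(v, f, D=50.0, L=200.0):
    return (3.14159 * D * L) / (1000.0 * v * f)

def roughness_ok(v, f, limit=0.5):
    return 1000.0 * f ** 2 / v <= limit   # toy constraint model

best = min(
    (machining_time(v, f), v, f)
    for v, f in itertools.product(
        [80 + 5 * i for i in range(25)],        # v in 80..200
        [0.05 + 0.01 * j for j in range(36)])   # f in 0.05..0.40
    if roughness_ok(v, f)
)
print(best)  # (time, v, f) at the constrained optimum on the grid
```

The grid search finds the highest feasible v*f product; real solvers such as interior point methods reach the same kind of optimum without enumerating the grid.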
Szymczyk, Patrycja; Ziółkowski, Grzegorz; Junka, Adam; Chlebus, Edward
2018-06-08
Unlike conventional manufacturing techniques, additive manufacturing (AM) can form objects of complex shape and geometry in an almost unrestricted manner. AM’s advantages include higher control of local process parameters and the possibility to use two or more different materials during manufacture. In this work, we applied one of the AM technologies, selective laser melting, using Ti6Al7Nb alloy to produce biomedical functional structures (BFS) in the form of bone implants. Five types of BFS structures (A1, A2, A3, B, C) were manufactured for the research. The aim of this study was to investigate technological aspects such as architecture, manufacturing methods, process parameters, and surface modification, and to relate them to functional properties of the manufactured implants, such as geometric accuracy and mechanical and biological properties. Initial in vitro studies were performed using the osteoblast cell line hFOB 1.19 (ATCC CRL-11372) (American Type Culture Collection). The results of the presented study confirm the high application potential of AM to produce bone implants of high accuracy and geometric complexity, displaying the desired mechanical properties. The experimental tests, as well as the geometrical accuracy analysis, showed that the square-shaped (A3) BFS structures were characterized by the lowest deviation range and the smallest anisotropy of mechanical properties. Moreover, cell culture experiments performed in this study proved that the designed and obtained implant’s internal porosity (A3) enhances the growth of bone cells (osteoblasts) and can provide predesigned biomechanical characteristics comparable to those of bone tissue.
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T 2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂ 2 for a variety of multi-echo T 2 mapping protocols. Constraining θ improved T̂ 2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂ 2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T 2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂ 2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
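A first-order ("delta method") sketch of how bias and variance in a constrained nuisance parameter propagate into a fitted parameter's mean-squared error; the toy model below is illustrative, not the paper's T 2 signal model:

```python
# First-order error propagation: the MSE contributed to a fitted parameter p
# by a constrained nuisance parameter theta is approximately
#   (dp/dtheta)^2 * (bias_theta^2 + var_theta).
# The derivative is taken numerically by a central difference.

def propagated_mse(p_of_theta, theta0, bias, var, h=1e-5):
    """First-order MSE in p due to bias and variance in theta."""
    dp_dtheta = (p_of_theta(theta0 + h) - p_of_theta(theta0 - h)) / (2 * h)
    return dp_dtheta ** 2 * (bias ** 2 + var)

# Toy model: fitted parameter depends linearly on the nuisance parameter
p = lambda theta: 100.0 + 2.0 * theta
mse = propagated_mse(p, theta0=1.0, bias=0.03, var=0.0004)
print(mse)  # approximately 4 * (0.0009 + 0.0004) = 0.0052
```

The same structure underlies the abstract's conclusion: whether constraining a nuisance parameter helps depends on whether the propagated bias-plus-variance term is smaller than the variance inflation of jointly fitting it.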
Verification of the H2O Linelists with Theoretically Developed Tools
NASA Technical Reports Server (NTRS)
Ma, Qiancheng; Tipping, R.; Lavrentieva, N. N.; Dudaryonok, A. S.
2013-01-01
Two basic rules (i.e., the pair identity and the smooth variation rules) resulting from the properties of the energy levels and wave functions of H2O states govern how the spectroscopic parameters vary with the H2O lines within the individually defined groups of lines. With these rules, for those lines involving high j states in the same groups, variations of all their spectroscopic parameters (i.e., the transition frequency, intensity, pressure broadened half-width, pressure-induced shift, and temperature exponent) can be well monitored. Thus, the rules can serve as simple and effective tools to screen the H2O spectroscopic data listed in the HITRAN database and verify the latter's accuracies. By checking violations of the rules occurring among the data within the individual groups, possible errors can be picked up and also possible missing lines in the linelist whose intensities are above the threshold can be identified. We have used these rules to check the accuracies of the spectroscopic parameters and the completeness of the linelists for several important H2O vibrational bands. Based on our results, the accuracy of the line frequencies in HITRAN 2008 is consistent. For the line intensity, we have found that there are a substantial number of lines whose intensity values are questionable. With respect to other parameters, many mistakes have been found. The above claims are consistent with a well known fact that values of these parameters in HITRAN contain larger uncertainties. Furthermore, supplements of the missing line list consisting of line assignments and positions can be developed from the screening results.
Preliminary GAOFEN-3 Insar dem Accuracy Analysis
NASA Astrophysics Data System (ADS)
Chen, Q.; Li, T.; Tang, X.; Gao, X.; Zhang, X.
2018-04-01
The GF-3 satellite, the first C-band and full-polarization SAR satellite of China with a spatial resolution of 1 m, was successfully launched in August 2016. We analyze the error sources of the GF-3 satellite in this paper, and provide the interferometric calibration model based on the range function, the Doppler shift equation and the interferometric phase function, with the interferometric parameters calibrated using the three-dimensional coordinates of ground control points. Then, we conduct experiments on two pairs of images in fine stripmap I mode covering Songshan of Henan Province and Tangshan of Hebei Province, respectively. The DEM data are assessed using SRTM DEM, ICESat-GLAS points, and a ground control point database obtained using the ZY-3 satellite to validate the accuracy of the DEM elevation. The experimental results show that the accuracy of the DEM extracted from GF-3 satellite SAR data can meet the requirements of topographic mapping in mountain and alpine regions at the scale of 1:50000 in China. Besides, it demonstrates that the GF-3 satellite has the potential for interferometry.
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
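The DOG feature-extraction step can be sketched in one dimension as follows; the sigmas are illustrative, whereas the paper selects them (together with the fuzzy membership parameters) for recognition accuracy:

```python
import numpy as np

# Difference-of-Gaussian (DOG) band-pass response on a 1-D intensity
# profile, standing in for the 2-D iris filtering described above.

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def dog_filter(signal, sigma1=1.0, sigma2=2.0, radius=8):
    g1 = np.convolve(signal, gaussian_kernel(sigma1, radius), mode="same")
    g2 = np.convolve(signal, gaussian_kernel(sigma2, radius), mode="same")
    return g1 - g2   # narrow minus wide Gaussian = band-pass

sig = np.zeros(64)
sig[32] = 1.0                 # an impulse "texture" feature
resp = dog_filter(sig)
print(resp[32] > 0)           # strong positive response at the feature
```

Because both kernels integrate to one, the filter suppresses slowly varying illumination while passing mid-frequency texture, which is the robustness property the abstract attributes to the DOG step.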
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar
With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well-recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
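A minimal sketch of the blending idea, learning weights over several model forecasts plus a weather-state feature, using ridge regression on synthetic data; the paper's actual ML blender, features, and data are richer than this:

```python
import numpy as np

# Two synthetic forecast models whose errors depend on the weather state
# (one degrades when cloudy, the other when clear), blended by ridge
# regression with the state included as an extra input. All numbers and
# the ridge choice are illustrative.

rng = np.random.default_rng(0)
n = 500
truth = rng.uniform(100, 900, n)          # "actual" irradiance (W/m^2)
state = rng.uniform(0, 1, n)              # e.g. a cloudiness index
m1 = truth + 40 * state * rng.standard_normal(n)        # bad when cloudy
m2 = truth + 60 * (1 - state) * rng.standard_normal(n)  # bad when clear
X = np.column_stack([m1, m2, state, np.ones(n)])

lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ truth)
blend_err = np.mean((X @ w - truth) ** 2)
single_err = min(np.mean((m1 - truth) ** 2), np.mean((m2 - truth) ** 2))
print(blend_err < single_err)  # blending beats the best single model
```

Because each model's error is situation-dependent, a blender that sees the situation variable can down-weight whichever model is unreliable in the current regime, which is the effect the abstract reports.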
Enhancing the accuracy of the Fowler method for monitoring non-constant work functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedl, R., E-mail: roland.friedl@physik.uni-augsburg.de
2016-04-15
The Fowler method is a prominent non-invasive technique to determine the absolute work function of a surface based on the photoelectric effect. The evaluation procedure relies on the correlation of the photocurrent with the incident photon energy hν, which is mainly dependent on the surface work function χ. Applying Fowler’s theory of the photocurrent, the measurements can be fitted by the theoretical curve near the threshold hν⪆χ, yielding the work function χ and a parameter A. The straightforward experimental implementation of the Fowler method is to use several particular photon energies, e.g. via interference filters. However, with a realization like that, the restriction hν ≈ χ can easily be violated, especially when the work function of the material is decreasing during the measurements as, for instance, with coating or adsorption processes. This can lead to an overestimation of the evaluated work function value of typically some 0.1 eV, reaching up to more than 0.5 eV in an unfavorable case. A detailed analysis of the Fowler theory now reveals the background of that effect and shows that the fit parameter A can be used to assess the accuracy of the determined value of χ conveniently during the measurements. Moreover, a scheme is introduced to quantify a potential overestimation and to perform a correction to χ to a certain extent. The issues are demonstrated exemplarily by monitoring the work function reduction of a stainless steel sample surface due to caesiation.
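In the common zero-temperature simplification, the near-threshold photoyield reduces to a square law Y ≈ A(hν − χ)²; a hedged sketch of recovering χ and A from synthetic data by a coarse least-squares scan (the full Fowler function adds the temperature-dependent correction that the abstract's analysis concerns):

```python
import numpy as np

# Simplified near-threshold fit: Y ~ A * max(h*nu - chi, 0)^2.
# chi is scanned on a grid; A is solved in closed form per trial chi.
# Synthetic data only; not the paper's measurement procedure.

def fit_fowler(hv, yield_, chi_grid):
    best = (np.inf, None, None)
    for chi in chi_grid:
        base = np.maximum(hv - chi, 0.0) ** 2
        A = (yield_ @ base) / (base @ base)   # optimal amplitude for this chi
        sse = np.sum((yield_ - A * base) ** 2)
        if sse < best[0]:
            best = (sse, chi, A)
    return best[1], best[2]

hv = np.linspace(2.0, 3.0, 21)                # photon energies (eV)
y = 0.8 * np.maximum(hv - 2.2, 0.0) ** 2      # synthetic data, chi = 2.2 eV
chi, A = fit_fowler(hv, y, np.arange(2.0, 2.5, 0.01))
print(round(chi, 2), round(A, 2))  # -> 2.2 0.8
```

The abstract's point is that when hν extends too far above χ, or χ drifts downward during the measurement, this simple fit overestimates χ, and the fitted A flags that overestimation.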
Submillimeter and far-infrared dielectric properties of thin films
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Wollack, Edward J.
2016-07-01
The complex dielectric function enables the study of a material's refractive and absorptive properties and provides information on a material's potential for practical application. Commonly employed line shape profile functions from the literature are briefly surveyed and their suitability for representation of dielectric material properties is discussed. An analysis approach to derive a material's complex dielectric function from observed transmittance spectra in the far-infrared and submillimeter regimes is presented. The underlying model employed satisfies the requirements set by the Kramers-Kronig relations. The dielectric function parameters derived from this approach typically reproduce the observed transmittance spectra with an accuracy of < 4%.
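One Kramers-Kronig-consistent line-shape family commonly used in such analyses is the Lorentz oscillator model; a sketch with illustrative oscillator parameters (not values from the paper):

```python
import numpy as np

# Causal (Kramers-Kronig-consistent) dielectric model: a sum of Lorentz
# oscillators, eps(w) = eps_inf + sum_j S_j*w_j^2 / (w_j^2 - w^2 - i*g_j*w).

def lorentz_eps(omega, eps_inf, oscillators):
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for S, w0, gamma in oscillators:
        eps += S * w0 ** 2 / (w0 ** 2 - omega ** 2 - 1j * gamma * omega)
    return eps

omega = np.linspace(0.1, 20.0, 400)   # arbitrary frequency units
eps = lorentz_eps(omega, eps_inf=2.0, oscillators=[(1.5, 5.0, 0.4)])
n_complex = np.sqrt(eps)              # complex refractive index n + i*k
print(np.all(eps.imag >= 0))          # absorption is non-negative everywhere
```

The non-negative imaginary part (passivity) and the static limit ε(0) = ε∞ + ΣSⱼ are exactly the causality constraints that the transmittance-fitting approach above builds in.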
Wang, Hai-peng; Bi, Zheng-yang; Zhou, Yang; Zhou, Yu-xuan; Wang, Zhi-gong; Lv, Xiao-ying
2017-01-01
Voluntary participation of hemiplegic patients is crucial for functional electrical stimulation therapy. A wearable functional electrical stimulation system has been proposed for real-time volitional hand motor function control using the electromyography bridge method. Through a series of novel design concepts, including the integration of a detecting circuit and an analog-to-digital converter, a miniaturized functional electrical stimulation circuit technique, a low-power super-regeneration chip for wireless receiving, and two wearable armbands, a prototype system has been established with reduced size, power, and overall cost. Based on wrist joint torque reproduction and classification experiments performed on six healthy subjects, the optimized surface electromyography thresholds and trained logistic regression classifier parameters were statistically chosen to establish wrist and hand motion control with high accuracy. Test results showed that wrist flexion/extension, hand grasp, and finger extension could be reproduced with high accuracy and low latency. This system can build a bridge of information transmission between healthy limbs and paralyzed limbs, effectively improve voluntary participation of hemiplegic patients, and elevate efficiency of rehabilitation training. PMID:28250759
Terminal navigation analysis for the 1980 comet Encke slow flyby mission
NASA Technical Reports Server (NTRS)
Jacobson, R. A.; Mcdanell, J. P.; Rinker, G. C.
1973-01-01
The initial results of a terminal navigation analysis for the proposed 1980 solar electric slow flyby mission to the comet Encke are presented. The navigation technique employs onboard optical measurements with the scientific television camera, groundbased observations of the spacecraft and comet, and groundbased orbit determination and thrust vector update computation. The knowledge and delivery accuracies of the spacecraft are evaluated as a function of the important parameters affecting the terminal navigation. These include optical measurement accuracy, thruster noise level, duration of the planned terminal coast period, comet ephemeris uncertainty, guidance initiation time, guidance update frequency, and optical data rate.
NASA Astrophysics Data System (ADS)
Dang, Van Tuan; Lafon, Pascal; Labergere, Carl
2017-10-01
In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Function (RBF) interpolation is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical Design of Experiments (DoE) with a full factorial layout samples the parameter space, and the sample points serve as input data for finite element method (FEM) simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced with the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviation matrix of the final displacement fields of the FEM simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF interpolation approximates these POD coefficients over the parameter space. Finally, the presented POD-RBF approach can be used for shape optimization with high accuracy.
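The POD-RBF pipeline described above, snapshot SVD followed by RBF interpolation of the coefficients, can be sketched in a few lines. Here the FEM displacement fields are mocked by an analytic, exactly low-rank family over one parameter, and the Gaussian RBF shape parameter is an arbitrary assumption:

```python
import numpy as np

# Snapshot POD + Gaussian-RBF interpolation over a 1D parameter space.
# The "displacement fields" are an analytic stand-in, not FEM output.
params = np.linspace(0.0, 1.0, 9)            # DoE samples of one parameter
x = np.linspace(0.0, 1.0, 50)                # spatial degrees of freedom

def field(p):
    """Mock displacement field: smooth, low-rank dependence on p."""
    return p * np.sin(np.pi * x) + p**2 * np.sin(2 * np.pi * x) + 0.5 * np.cos(np.pi * x)

snapshots = np.array([field(p) for p in params]).T        # shape (50, 9)
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = U[:, :2]                             # retained POD modes (rank 2 here)
coeffs = basis.T @ (snapshots - mean)        # POD coefficients, shape (2, 9)

def rbf_fit(pts, vals, eps=8.0):
    """Solve for Gaussian-RBF weights interpolating vals at pts."""
    K = np.exp(-(eps * (pts[:, None] - pts[None, :])) ** 2)
    return np.linalg.solve(K, vals)

def rbf_eval(pts, w, q, eps=8.0):
    return np.exp(-(eps * (q - pts)) ** 2) @ w

# Interpolate each POD coefficient over the parameter space, then
# reconstruct the field at an unseen parameter value.
p_new = 0.4
a_new = np.array([rbf_eval(params, rbf_fit(params, c), p_new) for c in coeffs])
u_new = mean[:, 0] + basis @ a_new           # surrogate prediction
```

In the 2D die-radius/blank-holder-force setting of the paper, `params` becomes a grid of parameter pairs and the RBF kernel acts on their Euclidean distance, but the structure is the same.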
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real-time plasma control. In the first method, unlike the conventional network structure in which a single network with the optimum number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. A principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the parameters recovered with this second type of modified network is a further improvement over that of the double neural network. This result differs from an earlier work in which the double neural network showed better performance. The conventional network and function parametrization methods have also been used for comparison. The conventional network has been used to optimize the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared.
The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
Potential accuracy of methods of laser Doppler anemometry in the single-particle scattering mode
NASA Astrophysics Data System (ADS)
Sobolev, V. S.; Kashcheeva, G. A.
2017-05-01
The potential accuracy of laser Doppler anemometry methods is determined for the single-particle scattering mode, where the only disturbing factor is shot noise generated by the optical signal itself. The problem is solved by means of computer simulations with the maximum likelihood method. The initial parameters of the simulations are chosen to be the number of real or virtual interference fringes in the measurement volume of the anemometer, the signal discretization frequency, and some typical values of the signal-to-shot-noise ratio. The parameters to be estimated are the Doppler frequency, as the basic parameter carrying information about the process velocity; the signal amplitude, containing information about the size and concentration of scattering particles; and the instant when the particles arrive at the center of the measurement volume of the anemometer, which is needed for reconstruction of the examined flow velocity as a function of time. The estimates obtained in this study show that shot noise produces a minor effect (0.004-0.04%) on the frequency determination accuracy in the entire range of chosen values of the initial parameters. For the signal amplitude and the instant when the particles arrive at the center of the measurement volume of the anemometer, the errors induced by shot noise are in the interval of 0.2-3.5%; if the number of interference fringes is sufficiently large (more than 20), the errors do not exceed 0.2% regardless of the shot noise level.
Peckmann, Tanya R; Orr, Kayla; Meek, Susan; Manolis, Sotiris K
2015-07-01
The determination of sex is an important part of building the biological profile for unknown human remains. Many of the bones traditionally used for the determination of sex are often found fragmented or incomplete in forensic and archaeological cases. The goal of the present research was to derive discriminant function equations from the talus, a preservationally favoured bone, for sexing skeletons from a contemporary Greek population. Nine parameters were measured on 182 individuals (96 males and 86 females) from the University of Athens Human Skeletal Reference Collection. The individuals ranged in age from 20 to 99 years old. The statistical analyses showed that all measured parameters were sexually dimorphic. Discriminant function score equations were generated for use in sex determination. The average accuracy of sex classification ranged from 65.2% to 93.4% for the univariate analysis, 90%-96.5% for the direct method and 86.7% for the stepwise method. Comparisons to other populations were made. Overall, the cross-validated accuracies ranged from 65.5% to 83.2% and males were most often correctly identified. The talus was shown to be useful for sex determination in the modern Greek population. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Heidelberg Retina Tomograph 3 machine learning classifiers for glaucoma detection
Townsend, K A; Wollstein, G; Danks, D; Sung, K R; Ishikawa, H; Kagemann, L; Gabriele, M L; Schuman, J S
2010-01-01
Aims: To assess the performance of classifiers trained on Heidelberg Retina Tomograph 3 (HRT3) parameters for discriminating between healthy and glaucomatous eyes. Methods: Classifiers were trained using HRT3 parameters from 60 healthy subjects and 140 glaucomatous subjects. The classifiers were trained on all 95 variables and on smaller sets created with backward elimination. Seven types of classifiers, including Support Vector Machines with radial basis (SVM-radial) and Recursive Partitioning and Regression Trees (RPART), were trained on the parameters. The area under the ROC curve (AUC) was calculated for classifiers, individual parameters, and HRT3 glaucoma probability scores (GPS). Classifier AUCs and leave-one-out accuracies were compared with the highest individual-parameter and GPS AUCs and accuracies. Results: The highest AUC and accuracy for an individual parameter were 0.848 and 0.79, for vertical cup/disc ratio (vC/D). For GPS, global GPS performed best, with AUC 0.829 and accuracy 0.78. SVM-radial with all parameters showed significant improvement over global GPS and vC/D, with AUC 0.916 and accuracy 0.85. RPART with all parameters provided significant improvement over global GPS with AUC 0.899, and significant improvement over global GPS and vC/D with accuracy 0.875. Conclusions: Machine learning classifiers of HRT3 data provide significant enhancement over current methods for detection of glaucoma. PMID:18523087
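The AUC comparisons above rest on the rank (Mann-Whitney) interpretation of the area under the ROC curve: the probability that a randomly chosen glaucomatous eye scores higher than a randomly chosen healthy one, with ties counting one half. A minimal sketch with invented scores:

```python
def auc_score(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5       # ties count one half
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores (invented): the classifier should rank glaucomatous eyes higher.
glaucoma = [0.9, 0.8, 0.75, 0.6]
healthy = [0.7, 0.4, 0.3, 0.2]
print(auc_score(glaucoma, healthy))  # prints 0.9375 (15 of 16 pairs ordered correctly)
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why the jump from 0.848 (vC/D alone) to 0.916 (SVM-radial) in the abstract is meaningful.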
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by using the differential equations.
Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
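The convolution approach favored by the study can be sketched generically: the tissue concentration curve is the discrete convolution of an arterial input function (AIF) with a tissue impulse response. Below, a bi-exponential impulse response stands in for the 2CXM closed form, and all parameter values are illustrative, not fitted to any data:

```python
import numpy as np

# Tissue curve as C_t(t) = (AIF * IRF)(t), evaluated by discrete convolution.
# A bi-exponential IRF stands in for the 2CXM closed-form solution;
# parameter values below are purely illustrative.
dt = 1.0                               # temporal resolution (s)
t = np.arange(0, 300, dt)

aif = 5.0 * t * np.exp(-t / 12.0)      # gamma-variate arterial input function
F, A1, k1, k2 = 0.02, 0.7, 0.05, 0.005
irf = F * (A1 * np.exp(-k1 * t) + (1 - A1) * np.exp(-k2 * t))

# Discrete convolution; the dt factor approximates the continuous integral.
ct = np.convolve(aif, irf)[: t.size] * dt
```

Inside an NLLS loop this evaluation replaces a Runge-Kutta solve of the coupled ODEs at each iteration, which is where the three-orders-of-magnitude speed advantage reported above comes from.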
Guo, Hao; Liu, Lei; Chen, Junjie; Xu, Yong; Jie, Xiang
2017-01-01
Functional magnetic resonance imaging (fMRI) is one of the most useful methods to generate functional connectivity networks of the brain. However, conventional network generation methods ignore dynamic changes of functional connectivity between brain regions. Previous studies proposed constructing high-order functional connectivity networks that consider the time-varying characteristics of functional connectivity, and a clustering method was performed to decrease computational cost. However, random selection of the initial clustering centers and the number of clusters negatively affected classification accuracy, and the network lost neurological interpretability. Here we propose a novel method that introduces the minimum spanning tree method to high-order functional connectivity networks. As an unbiased method, the minimum spanning tree simplifies high-order network structure while preserving its core framework. The dynamic characteristics of time series are not lost with this approach, and the neurological interpretation of the network is guaranteed. Simultaneously, we propose a multi-parameter optimization framework that involves extracting discriminative features from the minimum spanning tree high-order functional connectivity networks. Compared with the conventional methods, our resting-state fMRI classification method based on minimum spanning tree high-order functional connectivity networks greatly improved the diagnostic accuracy for Alzheimer's disease. PMID:29249926
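The minimum spanning tree construction at the core of the method above can be sketched with Kruskal's algorithm on a correlation-derived distance. The regional time series below are random stand-ins, not fMRI data, and the distance definition (1 − |r|) is one common, assumed choice:

```python
import numpy as np

# Build a minimum spanning tree over "brain regions" with Kruskal's
# algorithm on a correlation-derived distance (1 - |r|).
rng = np.random.default_rng(0)
ts = rng.standard_normal((6, 120))       # 6 regions x 120 time points (mock data)
corr = np.corrcoef(ts)
dist = 1.0 - np.abs(corr)

n = dist.shape[0]
edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
parent = list(range(n))

def find(i):
    """Union-find root lookup with path compression."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

mst = []
for w, i, j in edges:                    # Kruskal: add cheapest non-cycle edge
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        mst.append((i, j, w))
```

The resulting tree has exactly n − 1 edges regardless of thresholds, which is why the MST is an unbiased backbone: it removes the arbitrary cluster-count and threshold choices the abstract criticizes.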
Characterization of a Low-Cost Multi-Parameter Sensor for Resource Applications: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, Aron M; Sengupta, Manajit; Andreas, Afshin M
Low-cost multi-parameter sensing and measurement devices enable cost-effective monitoring of the functional and operational reliability, efficiency, and resiliency of the electrical grid. The National Renewable Energy Laboratory (NREL) Solar Radiation Research Laboratory (SRRL), in collaboration with Arable Labs Inc., deployed Arable Labs' Mark multi-parameter sensor system. The unique suite of system sensors measures the down-welling and upwelling shortwave solar resource and longwave radiation, humidity, air temperature, and ground temperature. This study describes the shortwave calibration, characterization, and validation of the measurement accuracy of this instrument by comparison with existing instruments that are part of NREL-SRRL's Baseline Measurement System.
High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range
de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander
2016-01-01
Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape function is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. The significant improvement in accuracy is achieved by rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux over the whole wavelength range, by comparing results obtained from two independent experiments, and by meticulous evaluation of critical experimental parameters, including the spatial and temporal excitation pulse shape, laser power, and sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1979-01-01
Design adequacy of the lead-lag compensator of the frequency loop, accuracy checking of the analytical expression for the electrical motor transfer function, and performance evaluation of the speed control servo of the digital tape recorder used on-board the 1976 Viking Mars Orbiters and Voyager 1977 Jupiter-Saturn flyby spacecraft are analyzed. The transfer functions of the most important parts of a simplified frequency loop used for test simulation are described and ten simulation cases are reported. The first four of these cases illustrate the method of selecting the most suitable transfer function for the hysteresis synchronous motor, while the rest verify and determine the servo performance parameters and alternative servo compensation schemes. It is concluded that the linear methods provide a starting point for the final verification/refinement of servo design by nonlinear time response simulation and that the variation of the parameters of the static/dynamic Coulomb friction is as expected in a long-life space mission environment.
Huang, Y; Sun, P; Zhang, Z; Jin, C
2017-07-10
Water vapor noise in the air affects the accuracy of optical parameters extracted from terahertz (THz) time-domain spectroscopy. In this paper, a numerical method was proposed to eliminate water vapor noise from the THz spectra. According to the Van Vleck-Weisskopf function and the linear absorption spectrum of water molecules in the HITRAN database, we simulated the water vapor absorption spectrum and real refractive index spectrum with a particular line width. The continuum effect of water vapor molecules was also considered. Theoretical transfer function of a different humidity was constructed through the theoretical calculation of the water vapor absorption coefficient and the real refractive index. The THz signal of the Lacidipine sample containing water vapor background noise in the continuous frequency domain of 0.5-1.8 THz was denoised by use of the method. The results show that the optical parameters extracted from the denoised signal are closer to the optical parameters in the dry nitrogen environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
We introduce a novel non-local ingredient for the construction of exchange density functionals: the reduced Hartree parameter, which is invariant under the uniform scaling of the density and represents the exact exchange enhancement factor for one- and two-electron systems. The reduced Hartree parameter is used together with the conventional meta-generalized gradient approximation (meta-GGA) semilocal ingredients (i.e., the electron density, its gradient, and the kinetic energy density) to construct a new generation exchange functional, termed u-meta-GGA. This u-meta-GGA functional is exact for the exchange of any one- and two-electron systems, is size-consistent and non-empirical, satisfies the uniform density scaling relation, and recovers the modified gradient expansion derived from the semiclassical atom theory. For atoms, ions, jellium spheres, and molecules, it shows a good accuracy, being often better than meta-GGA exchange functionals. Our construction validates the use of the reduced Hartree ingredient in exchange-correlation functional development, opening the way to an additional rung in the Jacob’s ladder classification of non-empirical density functionals.
de Bessa, Jose; Rodrigues, Cicilia M; Chammas, Maria Cristina; Miranda, Eduardo P; Gomes, Cristiano M; Moscardi, Paulo R; Bessa, Marcia C; Molina, Carlos A; Tiraboschi, Ricardo B; Netto, Jose M; Denes, Francisco T
2018-01-01
Ureteropelvic junction obstruction (UPJO) is a common congenital anomaly leading to varying degrees of hydronephrosis (HN), ranging from no apparent effect on the renal function to atrophy. Evaluation of these children is based on Diuretic Renal Scintigraphy (DRS) and Ultrasonography (US). Recent studies have suggested that new parameters of conventional and color Doppler ultrasonography (CDUS) may be useful in discriminating which kidneys are obstructed. The present study aims to assess the diagnostic accuracy of such parameters in the diagnosis of obstruction in children with UPJO. We evaluated 44 patients (33 boys) with a mean age of 6.53 ± 4.39 years diagnosed with unilateral high-grade hydronephrosis (SFU grades 3 and 4). All underwent DRS and index tests (conventional US and CDUS to evaluate ureteral jet frequency) within a maximum interval of two weeks. Hydronephrotic units were reclassified according to the alternative grading system (AGS) proposed by Onen et al. Obstruction in the DRS was defined as a differential renal function <40% on the affected side and/or features indicating poor drainage function, such as T1/2 >20 minutes after the administration of furosemide, and a plateau or ascending pattern of the excretion curve. Nineteen hydronephrotic units (43.1%) were obstructed. Some degree of cortical atrophy, grade 3 (segmental) or grade 4 (diffuse), was present in those obstructed units. AGS grades had 100% sensitivity, 76% specificity and 86.4% accuracy. The absence of ureteral jets had a sensitivity of 73.68% and a specificity of 100%, with an accuracy of 88.6%. When we analyzed the two aspects together and considered obstructed the renal units classified as AGS grade 3 or 4 with no jets, sensitivity increased to 78.9% and accuracy to 92%, while specificity remained at the maximum of 100%. These features combined would allow us to avoid performing DRS in 61% of our patients, leaving more invasive tests to inconclusive cases.
Although DRS remains the mainstay for distinguishing obstructed from non-obstructed kidneys, the grade of hydronephrosis and the frequency of ureteral jets, independently or in combination, may be a reliable alternative in most cases. This alternative approach has high accuracy, is less invasive, is easily reproducible, and may play a role in the diagnosis of obstruction in the pediatric population.
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
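A synthetic time series of the kind used in such Monte Carlo studies can be generated directly: pulses arriving at Poisson-distributed times, with random amplitudes, convolved with a one-sided exponential pulse shape, plus a purely additive noise term. Rates, scales, and the pulse shape below are illustrative assumptions:

```python
import numpy as np

# Filtered Poisson process with an additive noise term.
# All rates and scales are illustrative, not taken from the paper.
rng = np.random.default_rng(42)
dt, T = 0.01, 100.0                       # time step and record length
t = np.arange(0.0, T, dt)
rate, tau = 1.0, 1.0                      # pulse arrival rate and duration

# Poisson arrivals on the grid; exponentially distributed amplitude per event.
arrivals = rng.random(t.size) < rate * dt
amps = np.where(arrivals, rng.exponential(1.0, t.size), 0.0)

pulse = np.exp(-t[t < 10 * tau] / tau)    # truncated one-sided pulse shape
signal = np.convolve(amps, pulse)[: t.size]

noisy = signal + 0.1 * rng.standard_normal(t.size)  # additive noise term
```

Estimating the rate, amplitude scale, and noise level back from `noisy` via moments, the autocorrelation function, or threshold crossings is exactly the parameter-recovery exercise the abstract evaluates.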
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on convergence, robustness, and efficiency of the following subpixel level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guess for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel level accuracy in cases with small or large translation, deformation, or rotation.
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
NASA Technical Reports Server (NTRS)
Ito, K.
1983-01-01
Approximation schemes based on Legendre-tau approximation are developed for application to parameter identification problems for delay and partial differential equations. The tau method is based on representing the approximate solution as a truncated series of orthonormal functions. The characteristic feature of the Legendre-tau approach is that when the solution to a problem is infinitely differentiable, the rate of convergence is faster than any finite power of 1/N; higher accuracy is thus achieved, making the approach suitable for small N.
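The spectral convergence that motivates the Legendre-tau approach is easy to demonstrate: for an infinitely differentiable function, the maximum error of a truncated Legendre series drops faster than any fixed power of 1/N. A small sketch using a least-squares Legendre fit as a stand-in for the tau projection:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Truncated Legendre series for a smooth target: the error should fall
# by orders of magnitude with each modest increase of the degree N.
x = np.linspace(-1.0, 1.0, 400)
f = np.exp(x)                               # infinitely differentiable target

errors = []
for N in (2, 4, 8, 12):
    coef = L.legfit(x, f, N)                # truncated series of degree N
    errors.append(np.max(np.abs(L.legval(x, coef) - f)))
```

By degree 12 the error is already near machine precision, which is the "high accuracy at small N" property the abstract exploits for identification problems.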
Joint measurement of complementary observables in moment tomography
NASA Astrophysics Data System (ADS)
Teo, Yong Siah; Müller, Christian R.; Jeong, Hyunseok; Hradil, Zdeněk; Řeháček, Jaroslav; Sánchez-Soto, Luis L.
Wigner and Husimi quasi-distributions, owing to their functional regularity, give the two archetypal and equivalent representations of all observable parameters in continuous-variable quantum information. Balanced homodyning (HOM) and heterodyning (HET), which correspond to their associated sampling procedures, on the other hand, fare very differently concerning their state or parameter reconstruction accuracies. We present a general theory of the now-known fact that HET can be tomographically more powerful than balanced homodyning for many interesting classes of single-mode quantum states, and discuss the treatment for two-mode sources.
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
Nazarian, Dalar; Ganesh, P.; Sholl, David S.
2015-09-30
We compiled a test set of chemically and topologically diverse Metal–Organic Frameworks (MOFs) with high-accuracy, experimentally derived crystallographic structure data. The test set was used to benchmark the performance of Density Functional Theory (DFT) functionals (M06L, PBE, PW91, PBE-D2, PBE-D3, and vdW-DF2) for predicting lattice parameters, unit cell volume, bonded parameters, and pore descriptors. On average, PBE-D2, PBE-D3, and vdW-DF2 predict more accurate structures, but all functionals predicted pore diameters within 0.5 Å of the experimental diameter for every MOF in the test set. The test set was also used to assess the variance in performance of DFT functionals for elastic properties and atomic partial charges. The DFT-predicted elastic properties such as the minimum shear modulus and Young's modulus can differ by an average of 3 and 9 GPa, respectively, for rigid MOFs such as those in the test set. Moreover, the partial charges calculated by vdW-DF2 deviate the most from the other functionals, while there is no significant difference between the partial charges calculated by M06L, PBE, PW91, PBE-D2, and PBE-D3 for the MOFs in the test set. We find that while there are differences in the magnitude of the properties predicted by the various functionals, these discrepancies are small compared to the accuracy necessary for most practical applications.
Improved digital filters for evaluating Fourier and Hankel transform integrals
Anderson, Walter L.
1975-01-01
New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. The accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
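Filter accuracy in this setting is typically checked against transform pairs with known closed forms. The sketch below is a brute-force trapezoid baseline for one Fourier cosine pair, not the paper's filter algorithm; the truncation point and sample count are arbitrary assumptions:

```python
import numpy as np

# Quadrature check of a Fourier cosine transform with a known closed form,
# the type of integral the digital filters evaluate quickly:
#   integral_0^infinity exp(-a*x) * cos(k*x) dx = a / (a^2 + k^2)
a, k = 1.5, 2.0
x = np.linspace(0.0, 40.0, 200001)       # truncate once exp(-a*x) has decayed
y = np.exp(-a * x) * np.cos(k * x)
numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule
exact = a / (a**2 + k**2)
```

A digital-filter evaluation reaches comparable accuracy with a few hundred kernel samples instead of 200,001, which is the entire point of the filter approach for oscillatory kernels like cos(kx) or J0(kr).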
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua
2014-03-15
This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization. The simulated annealing operation is therefore merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
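A minimal sketch of the hybrid idea, cuckoo-search-style population moves with adaptive parameters plus a simulated-annealing acceptance rule, might look as follows. The operators, linear schedules, and the sphere test objective are illustrative stand-ins, not the paper's exact algorithm.

```python
import math
import random

def hybrid_cs_sa(obj, dim, bounds, n_nests=15, iters=300, seed=1):
    """Adaptive cuckoo search with a simulated-annealing acceptance step.

    Illustrative operators only: step size (alpha), abandonment
    probability (pa), and SA temperature all anneal linearly with the
    iteration count, standing in for the paper's adaptive schedules.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [obj(x) for x in nests]
    i_best = min(range(n_nests), key=lambda i: fit[i])
    best_x, best_f = nests[i_best][:], fit[i_best]
    for t in range(iters):
        frac = t / iters
        alpha = 0.5 * (1 - frac) + 0.01   # adaptive step: broad -> fine
        pa = 0.1 + 0.2 * frac             # adaptive abandonment rate
        temp = max(1e-9, 1.0 - frac)      # SA temperature schedule
        for i in range(n_nests):
            # "cuckoo" move: random walk around the current best nest
            cand = [min(hi, max(lo, best_x[j] + alpha * rng.gauss(0, 1)))
                    for j in range(dim)]
            fc = obj(cand)
            # SA acceptance: take improvements, occasionally worse moves
            if fc < fit[i] or rng.random() < math.exp(-(fc - fit[i]) / temp):
                nests[i], fit[i] = cand, fc
            # abandon a fraction of poor nests (cuckoo-search restart)
            if rng.random() < pa and fit[i] > best_f:
                nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i] = obj(nests[i])
            if fit[i] < best_f:
                best_x, best_f = nests[i][:], fit[i]
    return best_x, best_f

# Sphere function as a stand-in for a chaotic-system fitting objective
x_best, f_best = hybrid_cs_sa(lambda v: sum(c * c for c in v),
                              dim=3, bounds=(-5.0, 5.0))
```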
NASA Astrophysics Data System (ADS)
Koitz, Ralph; Soini, Thomas M.; Genest, Alexander; Trickey, S. B.; Rösch, Notker
2012-07-01
The performance of eight generalized gradient approximation exchange-correlation (xc) functionals is assessed by a series of scalar relativistic all-electron calculations on octahedral palladium model clusters Pdn with n = 13, 19, 38, 55, 79, 147 and the analogous clusters Aun (for n up through 79). For these model systems, we determined the cohesive energies and average bond lengths of the optimized octahedral structures. We extrapolate these values to the bulk limits and compare with the corresponding experimental values. While the well-established functionals BP, PBE, and PW91 are the most accurate at predicting energies, the more recent forms PBEsol, VMTsol, and VT{84}sol significantly improve the accuracy of geometries. The observed trends are largely similar for both Pd and Au. In the same spirit, we also studied the scalability of the ionization potentials and electron affinities of the Pd clusters, and extrapolated those quantities to estimates of the work function. Overall, the xc functionals can be classified into four distinct groups according to the accuracy of the computed parameters. These results allow a judicious selection of xc approximations for treating transition metal clusters.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. The influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. The noise requirements may therefore need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. In view of this, we give limits on the power spectral density of both noise sources for the accuracy of 10^{-9}.
Topology-driven phase transitions in the classical monomer-dimer-loop model.
Li, Sazi; Li, Wei; Chen, Ziyu
2015-06-01
In this work, we investigate the classical loop models doped with monomers and dimers on a square lattice, whose partition function can be expressed as a tensor network (TN). In the thermodynamic limit, we use the boundary matrix product state technique to contract the partition function TN, and determine the thermodynamic properties with high accuracy. In this monomer-dimer-loop model, we find a second-order phase transition between a trivial monomer-condensation and a loop-condensation (LC) phase, which cannot be distinguished by any local order parameter, while nevertheless the two phases have distinct topological properties. In the LC phase, we find two degenerate dominating eigenvalues in the transfer-matrix spectrum, as well as a nonvanishing (nonlocal) string order parameter, both of which identify the topological ergodicity breaking in the LC phase and can serve as the order parameter for detecting the phase transitions.
NASA Astrophysics Data System (ADS)
Tang, W.; Qin, J.; Yang, K.; Liu, S.; Lu, N.; Niu, X.
2015-12-01
Cloud parameters (cloud mask, effective particle radius, and liquid/ice water path) are important inputs in determining surface solar radiation (SSR). These parameters can be derived from MODIS with high accuracy, but their temporal resolution is too low to obtain high-temporal-resolution SSR retrievals. In order to obtain hourly cloud parameters, an Artificial Neural Network (ANN) is applied in this study to directly construct a functional relationship between MODIS cloud products and Multi-functional Transport Satellite (MTSAT) geostationary satellite signals. Meanwhile, an efficient parameterization model for SSR retrieval is introduced; when driven with MODIS atmospheric and land products, its root mean square error (RMSE) is about 100 W m-2 for 44 Baseline Surface Radiation Network (BSRN) stations. Once the estimated cloud parameters and other information (such as aerosol, precipitable water, and ozone) are input to the model, we can derive SSR at high spatio-temporal resolution. The retrieved SSR is first evaluated against hourly radiation data at three experimental stations in the Haihe River Basin of China. The mean bias error (MBE) and RMSE of the hourly SSR estimates are 12.0 W m-2 (or 3.5%) and 98.5 W m-2 (or 28.9%), respectively. The retrieved SSR is also evaluated against daily radiation data at 90 China Meteorological Administration (CMA) stations. The MBE is 9.8 W m-2 (5.4%); the RMSEs of the daily and monthly-mean SSR estimates are 34.2 W m-2 (19.1%) and 22.1 W m-2 (12.3%), respectively. This accuracy is comparable to, or even higher than, that of two other radiation products (GLASS and ISCCP-FD), and the present method is more computationally efficient and can produce hourly SSR data at a spatial resolution of 5 km.
The discrete adjoint method for parameter identification in multibody system dynamics.
Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin
2018-01-01
The adjoint method is an elegant approach for computing the gradient of a cost function with respect to a set of parameters to be identified. An additional set of differential equations has to be solved to compute the adjoint variables, which are then used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. To this end, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
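For a scalar explicit-Euler discretization of x' = -p*x with cost J = x_N^2, the discrete adjoint recursion can be written out in a few lines. This toy sketch (not the paper's multibody formulation) returns the gradient of the discretized cost exactly, matching the closed-form derivative of J(p) = x0^2 (1 - h*p)^(2N).

```python
def simulate(p, x0=1.0, h=0.01, n=100):
    """Explicit-Euler trajectory of x' = -p*x: x_{k+1} = (1 - h*p) * x_k."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] * (1.0 - h * p))
    return xs

def discrete_adjoint_grad(p, x0=1.0, h=0.01, n=100):
    """Gradient dJ/dp of the discretized cost J = x_N**2 via the
    discrete adjoint: an algebraic backward recursion through the
    same Euler scheme, so the gradient is exact for the discrete J."""
    xs = simulate(p, x0, h, n)
    lam = 2.0 * xs[-1]           # lambda_N = dJ/dx_N
    grad = 0.0
    for k in range(n - 1, -1, -1):
        # x_{k+1} = (1 - h*p) * x_k  =>  d x_{k+1} / dp = -h * x_k
        grad += lam * (-h * xs[k])
        lam *= (1.0 - h * p)     # lambda_k = lambda_{k+1} * dx_{k+1}/dx_k
    return grad

g = discrete_adjoint_grad(2.0)
```

Because the adjoint equations here are algebraic (built from the time-stepping scheme itself), no separate adjoint ODE has to be integrated numerically, which is exactly the point of the discrete formulation.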
Quantitative spectroscopy for the analysis of GOME data
NASA Technical Reports Server (NTRS)
Chance, K.
1997-01-01
Accurate analysis of the global ozone monitoring experiment (GOME) data to obtain atmospheric constituents requires reliable, traceable spectroscopic parameters for atmospheric absorption and scattering. Results are summarized for research that includes: the re-determination of Rayleigh scattering cross sections and phase functions for the 200 nm to 1000 nm range; the analysis of solar spectra to obtain a high-resolution reference spectrum with excellent absolute vacuum wavelength calibration; Ring effect cross sections and phase functions determined directly from accurate molecular parameters of N2 and O2; O2 A band line intensities and pressure broadening coefficients; and the analysis of absolute accuracies for ultraviolet and visible absorption cross sections of O3 and other trace species measurable by GOME.
Astrophysics to z ≈ 10 with Gravitational Waves
NASA Technical Reports Server (NTRS)
Stebbins, Robin; Hughes, Scott; Lang, Ryan
2007-01-01
The most useful characterization of a gravitational wave detector's performance is the accuracy with which astrophysical parameters of potential gravitational wave sources can be estimated. One of the most important source types for the Laser Interferometer Space Antenna (LISA) is inspiraling binaries of black holes. LISA can measure mass and spin to better than 1% for a wide range of masses, even out to high redshifts. The most difficult parameter to estimate accurately is almost always luminosity distance. Nonetheless, LISA can measure the luminosity distance of intermediate-mass black hole binary systems (total mass ≈ 10^4 solar masses) out to z ≈ 10 with distance accuracies approaching 25% in many cases. With this performance, LISA will be able to follow the merger history of black holes from the earliest mergers of proto-galaxies to the present. LISA's performance as a function of mass from 1 to 10^7 solar masses and of redshift out to z ≈ 30 will be described. The re-formulation of LISA's science requirements based on an instrument sensitivity model and parameter estimation will also be described.
An alternative respiratory sounds classification system utilizing artificial neural networks.
Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen
2015-01-01
Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as the non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system, returning better performance parameters: its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function in the feature extraction stage in such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
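The autocorrelation-based feature-extraction idea can be sketched as follows (in Python rather than the study's MATLAB, with illustrative feature choices rather than the paper's actual feature set):

```python
import math

def autocorr(x, max_lag):
    """Normalized autocorrelation r[lag] = sum_t x[t]*x[t+lag] / sum_t x[t]**2."""
    e = sum(v * v for v in x)
    return [sum(x[t] * x[t + lag] for t in range(len(x) - lag)) / e
            for lag in range(max_lag + 1)]

def acf_features(r):
    """Two illustrative candidate features: the lag of the first zero
    crossing (a periodicity cue) and r[1] (a smoothness/turbulence cue)."""
    first_zero = next((i for i in range(1, len(r)) if r[i] <= 0.0), len(r))
    return {"first_zero_lag": first_zero, "r1": r[1]}

# A 20-sample-period sine as a stand-in for a periodic breath-cycle component
sig = [math.sin(2 * math.pi * t / 20) for t in range(400)]
feats = acf_features(autocorr(sig, 30))
```

Features like these would then feed the ANN or ANFIS classifier in place of raw samples, which is what keeps the computational cost low.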
Optimization of IBF parameters based on adaptive tool-path algorithm
NASA Astrophysics Data System (ADS)
Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi
2018-03-01
As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness, and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatic tool path carrying dwell-time information, a novel algorithm is proposed in this work. Based on the removal functions obtained with our IBF equipment and the adaptive tool path, optimized parameters are derived by analyzing the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation instance. The simulation result shows that after four combinations of processing, the peak-valley (PV) and root-mean-square (RMS) values of the surface accuracy were reduced from 110.22 nm and 13.998 nm to 4.81 nm and 0.495 nm, respectively, over 98% of the aperture. The result shows that the algorithm and optimized parameters provide a good theoretical basis for high-precision IBF processing.
Four years of Landsat-7 on-orbit geometric calibration and performance
Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.
2004-01-01
Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring, and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height, and biomass (R² = 0.80, 0.874, and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size, and height threshold on the estimation accuracy of LAI, height, and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size, and higher height threshold were required to obtain accurate corn LAI estimates compared with height and biomass estimates. In general, our results provide valuable guidance for LiDAR data acquisition and the estimation of vegetation biophysical parameters using LiDAR data.
System identification for modeling for control of flexible structures
NASA Technical Reports Server (NTRS)
Mettler, Edward; Milman, Mark
1986-01-01
The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy an initial distributed parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes are decreased. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, then corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.
Karamitsos, Theodoros D; Hudsmith, Lucy E; Selvanayagam, Joseph B; Neubauer, Stefan; Francis, Jane M
2007-01-01
Accurate and reproducible measurement of left ventricular (LV) mass and function is a significant strength of Cardiovascular Magnetic Resonance (CMR). The reproducibility and accuracy of these measurements are usually reported between experienced operators. However, an increasing number of inexperienced operators are now training in CMR and are involved in post-processing analysis. The aim of the study was to assess the interobserver variability of the manual planimetry of LV contours among two experienced and six inexperienced operators before and after a two-month training period. Ten healthy normal volunteers (5 men, mean age 34+/-14 years) comprised the study population. LV volumes, mass, and ejection fraction were manually evaluated using Argus software (Siemens Medical Solutions, Erlangen, Germany) for each subject, once by the two experienced and twice by the six inexperienced operators. The mean values of the experienced operators were considered the reference values. The agreement between operators was evaluated by means of Bland-Altman analysis. Training involved standardized data acquisition, simulated off-line analysis, and mentoring. The trainee operators demonstrated improvement in the measurement of all the parameters compared to the experienced operators. The mean ejection fraction variability improved from 7.2% before training to 3.7% after training (p=0.03). The parameter in which the trainees showed the least improvement was LV mass (from 7.7% to 6.7% after training). Basal slice selection and contour definition were the main sources of error. An intensive two-month training period significantly improved the accuracy of LV functional measurements. Adequate training of new CMR operators is of paramount importance in our aim to maintain the accuracy and high reproducibility of CMR in LV function analysis.
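The Bland-Altman agreement analysis used above reduces to the bias (mean difference between paired measurements) and the 95% limits of agreement; a generic sketch, not the study's own code:

```python
import math

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements:
    the bias (mean difference) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings (e.g., trainee vs. reference LV volumes)
bias, (lo, hi) = bland_altman([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

In an interobserver study, narrowing limits of agreement after training is the quantitative signature of the improvement reported above.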
Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.
Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2012-11-08
A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera
Sim, Sungdae; Sock, Juil; Kwak, Kiho
2016-01-01
LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes to improving the calibration accuracy in two ways. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
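The weighted, robust point-to-line objective can be illustrated in 2D. This toy sketch uses a planar rigid transform, a Huber penalty standing in for the paper's penalizing function, and a coarse grid search instead of a proper solver; it recovers a known transform from points lying on two perpendicular target lines. All names and values are illustrative.

```python
import math

def huber(r, delta=0.5):
    """Robust penalty: quadratic for small residuals, linear for outliers."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

def point_line_cost(params, pts, lines, weights):
    """Weighted robust sum of point-to-line distances after applying a
    planar rigid transform (theta, tx, ty). Lines are (a, b, d) with
    a*x + b*y = d and (a, b) a unit normal."""
    th, tx, ty = params
    c, s = math.cos(th), math.sin(th)
    cost = 0.0
    for (x, y), (a, b, d), w in zip(pts, lines, weights):
        u, v = c * x - s * y + tx, s * x + c * y + ty
        cost += w * huber(a * u + b * v - d)  # signed point-line distance
    return cost

def grid_search(pts, lines, weights, span=0.3, steps=13):
    """Coarse brute-force minimization over (theta, tx, ty)."""
    best = None
    grid = [-span + 2 * span * i / (steps - 1) for i in range(steps)]
    for th in grid:
        for tx in grid:
            for ty in grid:
                c = point_line_cost((th, tx, ty), pts, lines, weights)
                if best is None or c < best[0]:
                    best = (c, (th, tx, ty))
    return best[1]

# Synthetic check: points on the target lines, moved into the "sensor"
# frame by the inverse of a known transform, should be recovered.
true = (0.1, -0.05, 0.15)
c0, s0 = math.cos(true[0]), math.sin(true[0])
ideal = [(0, 1), (0, 2), (0, -1), (1, 0), (2, 0), (-1, 0)]
pts = [((qx - true[1]) * c0 + (qy - true[2]) * s0,
        -(qx - true[1]) * s0 + (qy - true[2]) * c0) for qx, qy in ideal]
lines = [(1, 0, 0)] * 3 + [(0, 1, 0)] * 3
est = grid_search(pts, lines, weights=[1.0] * 6)
```

The per-correspondence `weights` slot is where confidence in each feature match would enter, mirroring the paper's first contribution; the Huber penalty plays the role of the outlier-excluding penalizing function.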
Bellucci, Michael A; Coker, David F
2011-07-28
We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics
Submillimeter and Far-Infrared Dielectric Properties of Thin Films
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Wollack, Edward J.
2016-01-01
The complex dielectric function enables the study of a material's refractive and absorptive properties and provides information on a material's potential for practical application. Commonly employed line shape profile functions from the literature are briefly surveyed and their suitability for representing dielectric material properties is discussed. An analysis approach to derive a material's complex dielectric function from observed transmittance spectra in the far-infrared and submillimeter regimes is presented. The underlying model satisfies the requirements set by the Kramers-Kronig relations. The dielectric function parameters derived from this approach typically reproduce the observed transmittance spectra to within 4%.
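A standard line-shape family consistent with the Kramers-Kronig requirements is the classical Lorentz-oscillator model; a minimal sketch with illustrative parameter values:

```python
import cmath

def lorentz_eps(omega, eps_inf, oscillators):
    """Classical Lorentz-oscillator dielectric function:
    eps(w) = eps_inf + sum_j S_j * w0_j**2 / (w0_j**2 - w**2 - 1j*gamma_j*w).
    Each oscillator is (S, w0, gamma) in consistent frequency units."""
    eps = complex(eps_inf, 0.0)
    for strength, w0, gamma in oscillators:
        eps += strength * w0 ** 2 / (w0 ** 2 - omega ** 2 - 1j * gamma * omega)
    return eps

def refractive_index(eps):
    """n + i*k from eps; the principal root keeps k >= 0 when Im(eps) > 0."""
    return cmath.sqrt(eps)

# Illustrative single-resonance material: static limit eps(0) = eps_inf + S
osc = [(0.5, 10.0, 0.3)]
eps_static = lorentz_eps(0.0, 2.0, osc)
eps_res = lorentz_eps(10.0, 2.0, osc)  # on resonance, strongly absorbing
```

Fitting the oscillator strengths, resonance frequencies, and damping constants to an observed transmittance spectrum is the kind of parameter extraction the analysis approach above performs.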
C-parameter distribution at N³LL′ including power corrections
Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; ...
2015-05-15
We compute the e⁺e⁻ C-parameter distribution using soft-collinear effective theory with resummation of the most singular partonic terms to next-to-next-to-next-to-leading-log prime (N³LL′) accuracy. This includes the known fixed-order QCD results up to O(α_s³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far-tail regions. Additionally, we treat hadronization effects using a field-theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar to a short-distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between the C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.
The accuracy of assessment of walking distance in the elective spinal outpatients setting.
Okoro, Tosan; Qureshi, Assad; Sell, Beulah; Sell, Philip
2010-02-01
Self-reported walking distance is a clinically relevant measure of function. The aim of this prospective cohort study was to define patient accuracy and understand factors that might influence perceived walking distance in an elective spinal outpatient setting. 103 patients were asked to perform one test of distance estimation and two tests of functional distance perception using pre-measured landmarks. Standard spine-specific outcomes included the patient-reported claudication distance, Oswestry Disability Index (ODI), Low Back Outcome Score (LBOS), visual analogue scores (VAS) for leg and back pain, and other measures. There are over-estimators and under-estimators. Overall, accuracy to within 9.14 metres (10 yards) was poor, at only 5% for distance estimation and 40% for the two tests of functional distance perception. Distance estimation: actual distance 111 m; mean response 245 m (95% CI 176.3-314.7). Functional test 1: actual distance 29.2 m; mean response 71.7 m (95% CI 53.6-88.9). Functional test 2: actual distance 19.6 m; mean response 47.4 m (95% CI 35.02-59.95). Surprisingly, patients over 60 years of age (n = 43) were twice as accurate on each test performed compared to those under 60 (n = 60) (average 70% overestimation compared to 140%; p = 0.06). Patients in social class I (n = 18) were more accurate than those in classes II-V (n = 85). There was a positive correlation between poor accuracy and increasing MZD (Pearson's correlation coefficient 0.250; p = 0.012). ODI, LBOS, and the other parameters measured showed no correlation. Subjective distance perception and estimation is poor in this population. Patients over 60 and those with a professional background are more accurate but still poor.
Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks
Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. 
These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (here, its modal parameters). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model's prediction accuracy at a different structural state, e.g., a damaged structure. The effects of prediction error bias on the uncertainty of the predicted values are also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate the parameters of the initial FE model as well as the error functions. Before the building was demolished, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. 
Moreover, it is shown that including prediction error bias in the updating process, instead of the commonly used zero-mean error functions, can significantly reduce the prediction uncertainties.
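The calibration step described above, tuning FE parameters to minimize an error function between identified and model modal parameters, can be sketched in miniature. The 2-DOF shear building, story mass, and stiffness values below are illustrative assumptions, not the Utica building model.

```python
# Sketch: recover a story stiffness by matching model natural frequencies to
# "identified" ones (all numbers are illustrative).
import numpy as np
from scipy.optimize import minimize_scalar

def natural_freqs(k):
    """Natural frequencies (Hz) of a 2-DOF shear building with story stiffness k (N/m)."""
    m = 1.0e4  # story mass in kg (assumed)
    K = np.array([[2 * k, -k], [-k, k]])
    M = np.diag([m, m])
    lam = np.linalg.eigvalsh(np.linalg.solve(M, K))  # eigenvalues of M^-1 K
    return np.sqrt(lam) / (2 * np.pi)

# Pretend these frequencies were identified from ambient vibration data.
f_identified = natural_freqs(5.0e6)

def misfit(k):
    return np.sum((natural_freqs(k) - f_identified) ** 2)

res = minimize_scalar(misfit, bounds=(1e6, 1e7), method="bounded")
print(round(res.x / 1e6, 2))  # recovered stiffness in MN/m
```

The Bayesian framework in the paper additionally models the residual misfit as Gaussian with unknown parameters; this sketch shows only the deterministic calibration core.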
NASA Astrophysics Data System (ADS)
Tonbul, H.; Kavzoglu, T.
2016-12-01
In recent years, object-based image analysis (OBIA) has spread and become a widely accepted technique for the analysis of remotely sensed data. OBIA groups pixels into homogeneous objects based on the spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, image segmentation, is the most prominent part of object recognition. In this study, multiresolution segmentation, a region-based approach, was employed to construct image objects. In the application of multiresolution segmentation, three parameters, namely shape, compactness and scale, must be set by the analyst. Segmentation quality remarkably influences the fidelity of the thematic maps and accordingly the classification accuracy. Therefore, it is of great importance to search for and set optimal values for the segmentation parameters. In the literature, the main focus has been on the definition of the scale parameter, under the assumption that the effect of the shape and compactness parameters on the achieved classification accuracy is limited. The aim of this study is to analyze in depth the influence of the shape/compactness parameters by varying their values while using the optimal scale parameter determined by the Estimation of Scale Parameter (ESP-2) approach. A pansharpened QuickBird-2 image covering Trabzon, Turkey, was employed to investigate the objectives of the study. For this purpose, six different shape/compactness combinations were utilized to draw conclusions about the behavior of the shape and compactness parameters and the optimal setting for all parameters as a whole. Objects were assigned to classes using the nearest neighbor classifier in all segmentation trials, and equal numbers of pixels were randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3. 
The results of this study indicate that the shape/compactness parameters can have a significant effect on classification accuracy, with a 4% change in overall accuracy. The statistical significance of the differences in accuracy was assessed using McNemar's test, which showed that the difference between the poor and optimal settings of the shape/compactness parameters was statistically significant, suggesting a search for optimal parameterization instead of using the default setting.
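McNemar's test, as used above to compare classifications from poor and optimal parameter settings, reduces to the two discordant counts of a paired confusion table. A minimal sketch with made-up counts:

```python
# Sketch of McNemar's test for comparing two classifications of the same
# pixels (the counts below are invented, not from the study).
def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar statistic from the discordant counts:
    b = correct under setting A only, c = correct under setting B only."""
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar_chi2(b=62, c=28)
# Compare against the chi-square critical value with 1 df: 3.84 at the 5% level.
print(stat > 3.84)  # True
```

A significant statistic means the two parameter settings disagree systematically, not just by chance, on the pixels they classify differently.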
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to its high spatial resolution and absence of radiation. Semi-quantitative and quantitative analyses of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. The diagnostic performance of these parameters varies extensively among studies, and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. PubMed, Web of Science, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). The eligibility and relevance of the articles were determined by two reviewers using predefined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance, with sensitivities of 0.88, 0.82, and 0.83, specificities of 0.72, 0.83, and 0.76, and AUCs of 0.90, 0.84, and 0.87, respectively. 
In the per-territory analysis, our results show similar diagnostic accuracy for anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. PROSPERO registration: CRD42016040176.
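The pooled accuracy measures above are built from per-study counts of true/false positives and negatives. A minimal sketch of the per-study step, with illustrative counts:

```python
# Sketch: sensitivity and specificity from a single study's extracted counts
# (counts are illustrative, not from any included article).
def sens_spec(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)  # fraction of diseased correctly detected
    specificity = tn / (tn + fp)  # fraction of healthy correctly cleared
    return sensitivity, specificity

sens, spec = sens_spec(tp=88, fp=28, tn=72, fn=12)
print(round(sens, 2), round(spec, 2))  # 0.88 0.72
```

Meta-analytic pooling then combines these per-study pairs, typically with a bivariate random-effects model to respect the sensitivity/specificity trade-off.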
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automatic examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We use the benchmark Sunnybrook dataset to evaluate the proposed work. The contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The similarity (overlap) between the automatic segmentation and the expert-provided ground truth is high, with an index value of 91.30%. The proposed method for automatic segmentation performs better than existing techniques in terms of accuracy.
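Localization by the sum of absolute differences (SAD) slides a template over the image and keeps the minimizing window. A minimal sketch on synthetic data; this is the generic SAD technique, not the paper's full pipeline:

```python
# Sketch: template localization by minimizing the sum of absolute differences.
import numpy as np

def sad_locate(image, template):
    th, tw = template.shape
    H, W = image.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            s = np.abs(image[i:i + th, j:j + tw] - template).sum()
            if s < best:
                best, best_pos = s, (i, j)
    return best_pos

rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[10:18, 22:30].copy()   # plant the template at a known offset
print(sad_locate(img, tmpl))      # (10, 22)
```

In practice the template would be an LV region of interest, and the search would be restricted to a plausible window to keep the exhaustive scan cheap.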
NASA Technical Reports Server (NTRS)
Berman, A. L.
1976-01-01
In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
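The two-component zenith refraction described above, an exact dry term from surface pressure plus an empirical wet term in water vapor pressure and temperature, can be sketched as follows. The constants are Saastamoinen-style textbook values, assumed for illustration rather than taken from the JPL models discussed:

```python
# Sketch: zenith range correction (meters) from surface meteorology.
# Constants are illustrative Saastamoinen-style values, not Berman's model.
def zenith_range_correction(P_mbar, e_mbar, T_kelvin):
    dry = 0.002277 * P_mbar                                   # dry (pressure) term
    wet = 0.002277 * (1255.0 / T_kelvin + 0.05) * e_mbar      # empirical wet term
    return dry + wet

dz = zenith_range_correction(P_mbar=1013.25, e_mbar=10.0, T_kelvin=293.0)
print(round(dz, 3))  # roughly 2.3 m dry plus ~0.1 m wet
```

Mapping this zenith value to an arbitrary elevation angle would then apply an empirically derived elevation function, as the abstract describes.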
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
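Counting significant singular values of a weight matrix, the conditioning measure used above, is direct with a numerical SVD. A minimal sketch in which a random matrix with deliberately near-dependent columns stands in for a tomographic geometry; the relative cutoff is a common choice, not the paper's:

```python
# Sketch: count singular values above a relative threshold as a measure of
# the effective rank (conditioning) of a weight matrix.
import numpy as np

def significant_singular_values(W, rel_tol=1e-8):
    s = np.linalg.svd(W, compute_uv=False)   # singular values, descending
    return int((s > rel_tol * s[0]).sum())

rng = np.random.default_rng(1)
A = rng.random((30, 20))
W = A @ np.diag([1.0] * 10 + [1e-12] * 10)   # make half the columns negligible
print(significant_singular_values(W))         # 10
```

A geometry whose weight matrix has few significant singular values relative to its size defines a badly conditioned reconstruction problem.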
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1984-05-01
Featuring the modified hypernetted-chain (MHNC) scheme as a variational fitting procedure, we demonstrate that the accuracy of the variational perturbation theory (VPT) and of the method based on additivity of equations of state is determined by the excess entropy dependence of the bridge-function parameters [i.e., η(s) when the Percus-Yevick hard-sphere bridge functions are employed]. It is found that η(s) is nearly universal for all soft (i.e., "physical") potentials while it is distinctly different for the hard spheres, providing a graphical display of the "jump" in pair-potential space (with respect to accuracy of VPT) from "hard" to "soft" behavior. The universality of η(s) provides a local criterion for the MHNC scheme that should be useful for inverting structure-factor data in order to obtain the potential. An alternative local MHNC criterion due to Lado is rederived and extended, and it is also analyzed in light of the plot of η(s).
Zhou, Shiqi; Jamnik, Andrej
2005-09-22
The structure of a Lennard-Jones (LJ) fluid subjected to diverse external fields, in equilibrium with the bulk LJ fluid, is studied on the basis of a third-order+second-order perturbation density-functional approximation (DFA). The chosen density and potential parameters for the bulk fluid correspond to conditions situated in "dangerous" regions of the phase diagram, i.e., near the critical temperature or close to the gas-liquid coexistence curve. The accuracy of the DFA predictions is tested against the results of grand canonical ensemble Monte Carlo simulations. It is found that the DFA theory presented in this work performs successfully for the nonuniform LJ fluid only on condition that the required bulk second-order direct correlation function is highly accurate. The present report further indicates that the proposed perturbation DFA is efficient and suitable for both supercritical and subcritical temperatures.
Progress in calculating the potential energy surface of H3+.
Adamowicz, Ludwik; Pavanello, Michele
2012-11-13
The most accurate electronic structure calculations are performed using wave function expansions in terms of basis functions explicitly dependent on the inter-electron distances. In our recent work, we use such basis functions to calculate a highly accurate potential energy surface (PES) for the H(3)(+) ion. The functions are explicitly correlated Gaussians, which include inter-electron distances in the exponent. Key to obtaining the high accuracy in the calculations has been the use of the analytical energy gradient determined with respect to the Gaussian exponential parameters in the minimization of the Rayleigh-Ritz variational energy functional. The effective elimination of linear dependences between the basis functions and the automatic adjustment of the positions of the Gaussian centres to the changing molecular geometry of the system are the keys to the success of the computational procedure. After adiabatic and relativistic corrections are added to the PES and with an effective accounting of the non-adiabatic effects in the calculation of the rotational/vibrational states, the experimental H(3)(+) rovibrational spectrum is reproduced at the 0.1 cm(-1) accuracy level up to 16,600 cm(-1) above the ground state.
NASA Astrophysics Data System (ADS)
Nakano, Hayato; Hakoyama, Tomoyuki; Kuwabara, Toshihiko
2017-10-01
Hole expansion forming of a cold-rolled steel sheet is investigated both experimentally and analytically to clarify the effects of material models on the predictive accuracy of finite element analyses (FEA). The multiaxial plastic deformation behavior of a cold-rolled steel sheet with a thickness of 1.2 mm was measured using a servo-controlled multiaxial tube expansion testing machine over the range of strain from initial yield to fracture. Tubular specimens were fabricated from the sheet sample by roller bending and laser welding. Many linear stress paths in the first quadrant of stress space were applied to the tubular specimens to measure the contours of plastic work in stress space, up to a reference plastic strain of 0.24, along with the directions of the plastic strain rates. The anisotropic parameters and exponent of the Yld2000-2d yield function (Barlat et al., 2003) were optimized to approximate the contours of plastic work and the directions of the plastic strain rates. The hole expansion forming simulations were performed using different model identifications based on the Yld2000-2d yield function. It is concluded that the yield function best capturing both the plastic work contours and the directions of the plastic strain rates leads to the most accurate FEA predictions.
Fast and accurate 3D tensor calculation of the Fock operator in a general basis
NASA Astrophysics Data System (ADS)
Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.
2012-11-01
The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and nuclear potential operators by tensor operations, using a generic set of basis functions with low separation rank discretized on a fine N×N×N Cartesian grid. We prove a Ch^2 error estimate in terms of the mesh parameter, h = O(1/N), which guarantees the accuracy of the core Hamiltonian part of the Fock operator as h → 0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase in the constant C and hence demanding a rather large grid size N, of about several tens of thousands, to ensure high resolution. Modern tensor-formatted arithmetic of complexity O(N), or even O(log N), practically relaxes the limitations on the grid size. Our tensor-based approach makes it possible to improve significantly on the standard basis sets in quantum chemistry by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N^3 ≈ 10^15.
Information filtering via a scaling-based function.
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of the recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be obtained from the scaling function, and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes personalized recommendation in three other aspects: resolving the accuracy-diversity dilemma, presenting high novelty, and addressing the key challenge of the cold-start problem.
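The underlying hybrid heat-conduction/mass-diffusion kernel with tunable parameter λ can be sketched as follows. The matrix form is the standard hybrid method that SCL builds on (assumed here from the hybrid-method literature), and the tiny rating matrix is purely illustrative:

```python
# Sketch of the hybrid heat-conduction / mass-diffusion kernel:
# W[a, b] = sum_u A[a, u] * A[b, u] / k_u / (k_a**(1 - lam) * k_b**lam),
# where k_a, k_b are object degrees and k_u are user degrees.
import numpy as np

def hybrid_scores(A, lam):
    """A: object-by-user 0/1 rating matrix; lam: tunable hybrid parameter."""
    k_obj = A.sum(axis=1)                 # object degrees
    k_usr = A.sum(axis=0)                 # user degrees
    W = (A / k_usr) @ A.T                 # resource spread through users
    W /= np.outer(k_obj ** (1 - lam), k_obj ** lam)
    return W @ A                          # column u holds user u's scores

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
scores = hybrid_scores(A, lam=0.5)
print(scores.shape)  # (3, 3)
```

SCL's contribution, per the abstract, is choosing λ per object from a scaling function of the object degree rather than tuning one global value against a fixed list length.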
A new method to calculate the beam charge for an integrating current transformer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Yuchi; Han Dan; Zhu Bin
2012-09-15
The integrating current transformer (ICT) is a magnetic sensor widely used to precisely measure the charge of an ultra-short-pulse charged particle beam generated by traditional accelerators and new laser-plasma particle accelerators. In this paper, we present a new method to calculate the beam charge in an ICT based on circuit analysis. The output transfer function shows an invariable signal profile for an ultra-short electron bunch, so the function can be used to evaluate the signal quality and calculate the beam charge through signal fitting. We obtain a set of parameters in the output function from a standard signal generated by an ultra-short electron bunch (about 1 ps in duration) at a radio frequency linear electron accelerator at Tsinghua University. These parameters can be used to obtain the beam charge by signal fitting with excellent accuracy.
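Signal fitting against a fixed output profile, with the beam charge as the scale factor, can be sketched generically. The Gaussian template below is a placeholder assumption, not the ICT transfer function derived in the paper:

```python
# Sketch: estimate a charge-like scale factor by fitting a fixed waveform
# template to a noisy signal (template shape and units are illustrative).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 100e-9, 500)  # time axis, seconds

def template(t, q):
    """Fixed signal shape; q scales linearly with beam charge (arb. units)."""
    return q * np.exp(-((t - 40e-9) / 10e-9) ** 2)

true_q = 0.8
rng = np.random.default_rng(2)
signal = template(t, true_q) + 0.01 * rng.standard_normal(t.size)

(q_fit,), _ = curve_fit(template, t, signal, p0=[1.0])
print(round(q_fit, 2))  # ≈ 0.8
```

Because the fitted shape is fixed, the residual of the fit doubles as a signal-quality check, which is the dual use the abstract describes.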
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed groundwater model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state groundwater model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
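The two-stage strategy, a global population-based search seeding a truncated-Newton refinement, can be sketched on a toy objective. SciPy's differential evolution stands in here for the paper's genetic algorithm, and the multimodal test function is not a groundwater model:

```python
# Sketch: global population-based search followed by truncated-Newton (TNC)
# local refinement, on a toy multimodal objective.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(p):
    # Quadratic bowl centered at 2 plus an oscillatory term with many local minima.
    return np.sum((p - 2.0) ** 2) + np.sum(np.sin(5 * p) ** 2)

bounds = [(-5, 5)] * 3
seed = differential_evolution(objective, bounds, seed=0).x   # global stage
refined = minimize(objective, seed, method="TNC", bounds=bounds)  # local stage
print(np.round(refined.x, 2))
```

The global stage supplies a starting point in the right basin, which is exactly the role prior information plays for the truncated-Newton search in the abstract: a poor start leaves the local method trapped.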
NASA Astrophysics Data System (ADS)
Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang
2018-03-01
This paper explores two types of mathematical functions to fit the single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, waveforms buried in noise or reflected from ice/land are removed, by means of the peak-to-mean ratio and the cosine similarity of the waveform, before wind speed is retrieved. Single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations, based on principal component analysis (PCA), the minimum variance (MV) estimator and a Back Propagation (BP) network, are implemented. The results indicate that, compared to the best results of the single-parameter observation, the approaches based on principal component analysis and minimum variance could not significantly improve retrieval accuracy; however, the BP networks obtain an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.
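Of the multi-parameter combinations mentioned, the PCA route is the simplest to sketch: project the waveform-derived observables onto their leading principal component. The synthetic "wind" and features below are illustrative, not TDS-1 data:

```python
# Sketch: combine several correlated waveform observables into one retrieval
# feature via the first principal component (synthetic data).
import numpy as np

def first_pc(X):
    """Project rows of X onto the leading principal component."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

rng = np.random.default_rng(3)
wind = rng.uniform(3, 20, 200)                 # "true" wind speeds, m/s
# Three noisy observables that all track wind speed.
X = np.column_stack([wind + rng.normal(0, 1, 200) for _ in range(3)])

pc1 = first_pc(X)
corr = abs(np.corrcoef(pc1, wind)[0, 1])
print(corr > 0.95)
```

PCA averages out independent noise across the observables, but, as the abstract found, it cannot exploit nonlinear relationships the way a BP network can.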
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Sex estimation from the patella in an African American population.
Peckmann, Tanya R; Fisher, Brooke
2018-02-01
The skull and pelvis have been used for the estimation of sex for unknown human remains. However, in forensic cases, where skeletal remains often exhibit postmortem damage and taphonomic changes, the patella may be used for the estimation of sex, as it is a preservationally favoured bone. The goal of the present research was to derive discriminant function equations from the patella for the estimation of sex in an historic African American population. Six parameters were measured on 200 individuals (100 males and 100 females), ranging in age from 20 to 80 years old, from the Robert J. Terry Anatomical Skeleton Collection. The statistical analyses showed that all variables were sexually dimorphic. Discriminant function score equations were generated for use in sex estimation. The overall accuracy of sex classification ranged from 80.0% to 85.0% for the direct method and from 80.0% to 84.5% for the stepwise method. Overall, when the Spanish and Black South African discriminant functions were applied to the African American population, they showed low accuracy rates for sexing the African American sample. However, when the White South African discriminant functions were applied to the African American sample, they displayed high accuracy rates for sexing the African American population. The patella was shown to be accurate for sex estimation in the historic African American population. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
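A discriminant function of the kind derived above is just a weighted sum of measurements compared against a sectioning point. The weight, constant, and measurements below are invented for illustration, not the published Terry Collection equations:

```python
# Sketch of a one-variable discriminant function for sex estimation
# (hypothetical weight and constant; measurement in mm).
def discriminant_score(x_mm, w=0.25, c=-10.3):
    """Positive score -> classify as male, negative -> female.
    The implied sectioning point is x = -c / w (here 41.2 mm)."""
    return w * x_mm + c

print(discriminant_score(45.0) > 0, discriminant_score(38.0) > 0)  # True False
```

Population-specific weights matter: as the abstract shows, functions derived on one population can classify another poorly, so the coefficients must be re-derived or validated before transfer.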
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahms, Rainer N.
2014-12-31
The fidelity of Gradient Theory simulations depends on the accuracy of saturation properties and influence parameters, and requires equations of state (EoS) which exhibit a fundamentally consistent behavior in the two-phase regime. Widely applied multi-parameter EoS, however, are generally invalid inside this region. Hence, they may not be fully suitable for application in concert with Gradient Theory despite their ability to accurately predict saturation properties. The commonly assumed temperature-dependence of pure component influence parameters usually restricts their validity to subcritical temperature regimes. This may distort predictions for general multi-component interfaces where temperatures often exceed the critical temperature of vapor-phase components. Then, the calculation of influence parameters is not well defined. In this paper, one of the first studies is presented in which Gradient Theory is combined with a next-generation Helmholtz energy EoS which facilitates fundamentally consistent calculations over the entire two-phase regime. Illustrated on pentafluoroethane as an example, reference simulations using this method are performed. They demonstrate the significance of such high-accuracy and fundamentally consistent calculations for the computation of interfacial properties. These reference simulations are compared to corresponding results from the cubic PR EoS, widely applied in combination with Gradient Theory, and the mBWR EoS. The analysis reveals that neither of those two methods succeeds in consistently capturing the qualitative distribution of the obtained key thermodynamic properties in Gradient Theory. Furthermore, a generalized expression of the pure component influence parameter is presented. This development is informed by its fundamental definition based on the direct correlation function of the homogeneous fluid and by presented high-fidelity simulations of interfacial density profiles. 
As a result, the new model preserves the accuracy of previous temperature-dependent expressions, remains well-defined at supercritical temperatures, and is fully suitable for calculations of general multi-component two-phase interfaces.
The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo
2017-02-01
The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most methods of on-orbit calibration are based on self-calibration using additional parameters. When using additional parameters, different numbers of additional parameters may lead to different results. Triangulation bundle adjustment is another way to calibrate the geometric parameters of a camera, and can describe the changes in each geometric parameter. When the triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model can avoid the systematic deformation caused by the rate of attitude changes. Concerning the stereo camera, the influence of the intersection angle should be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on the Line-Matrix CCD (LMCCD) image can resolve the systematic distortion of the strip model and obtain high location accuracy without using GCPs. In this paper, triangulation bundle adjustment is used to calibrate the geometric parameters of the TH-1 satellite cameras based on LMCCD imagery. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 test fields. After on-orbit calibration, the 3D geometric accuracy improves from 170 m to 11.8 m. The results show that the location accuracy of TH-1 without using GCPs is significantly improved by the on-orbit calibration of the geometric parameters.
Infrared Dielectric Properties of Low-stress Silicon Nitride
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Beall, James A.; Cho, Hsiao-Mei; McAndrew, Brendan; Niemack, Michael D.; Wollack, Edward J.
2012-01-01
Silicon nitride thin films play an important role in the realization of sensors, filters, and high-performance circuits. Estimates of the dielectric function in the far- and mid-IR regime are derived from the observed transmittance spectra for a commonly employed low-stress silicon nitride formulation. The experimental, modeling, and numerical methods used to extract the dielectric parameters with an accuracy of approximately 4% are presented.
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light at 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes across brain regions and across repetitions of trials, even when the experimental paradigm is kept exactly the same. Therefore, the objective of this research is an HR model that can estimate such variations in the response. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four to model the shape, and the other two for scale and baseline, respectively). The measured signal is modeled as a linear combination of the HRF, baseline, and physiological noises (whose amplitudes and frequencies are assumed unknown). An objective function is formulated as the sum of squared residuals with constraints on 12 free parameters. The formulated problem is solved using an iterative optimization algorithm to estimate the unknown parameters of the model. Inter-subject variations in the HRF and physiological noises have been estimated to produce better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment, and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by statistical analysis (i.e., t-value > t_critical and p-value < 0.05). PMID:26136668
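The double-gamma cHRF described above can be sketched numerically. The shape values below (response peak near 5 s, undershoot near 16 s, 1:6 undershoot ratio) are common illustrative defaults, not the per-subject parameters estimated in this work:

```python
import math

def gamma_pdf(t, shape, rate):
    """Gamma probability density with the given shape and rate parameters."""
    if t <= 0:
        return 0.0
    return (rate ** shape) * t ** (shape - 1) * math.exp(-rate * t) / math.gamma(shape)

def canonical_hrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, ratio=1.0 / 6.0):
    """Double-gamma canonical HRF: main response minus a scaled undershoot."""
    return gamma_pdf(t, a1, b1) - ratio * gamma_pdf(t, a2, b2)

# Sample the response over 30 s at 10 Hz.
ts = [i * 0.1 for i in range(301)]
hrf = [canonical_hrf(t) for t in ts]
peak_time = ts[max(range(len(hrf)), key=hrf.__getitem__)]
```

With these defaults the response peaks close to 5 s (the mode of a Gamma(6, 1) density) and dips below baseline around 16 s, which is the qualitative shape the six-parameter fit adapts per subject and region.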
An effective algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2012-08-01
Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be as fast as possible while remaining accurate; e.g., an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are promising for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from one of two algorithms, depending on the value of one of the arguments. Catalogue identifier: AEMC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 567 No. of bytes in distributed program, including test data, etc.: 4444 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a Fortran 90 compiler Operating system: Linux, Windows 7, Windows XP RAM: 0.6 Mb Classification: 2.4, 7.2 Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is constructed by combining these two algorithms and selecting the ranges of the argument ω in which each performs fastest.
Restrictions: Two input parameters for the Chandrasekhar function, x and ω (notation used in the code), are restricted to the ranges 0⩽x⩽1 and 0⩽ω⩽1, which is sufficient in numerous applications. Unusual features: The program uses the Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval; furthermore, the integrand is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied. In effect, one can conveniently control the accuracy of integration. Although the desired fractional accuracy was set at 10^-10, the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places. Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function.
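As a point of comparison with the transformed-quadrature approach above, the Chandrasekhar H function can also be computed from its defining nonlinear integral equation, 1/H(μ) = √(1−ω) + (ω/2)∫₀¹ μ′H(μ′)/(μ+μ′) dμ′, by damped fixed-point iteration on a Gauss-Legendre grid. This is a textbook scheme, not the mixed algorithm of the distributed Fortran program, and it loses efficiency near ω = 1:

```python
import numpy as np

def chandrasekhar_H(x, omega, n_nodes=64, tol=1e-12, max_iter=500):
    """Solve 1/H(mu) = sqrt(1-w) + (w/2) * int_0^1 mu' H(mu')/(mu+mu') dmu'
    by damped fixed-point iteration over Gauss-Legendre quadrature nodes."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    mu = 0.5 * (nodes + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * weights
    H = np.ones_like(mu)       # start from H = 1 (exact for omega = 0)
    for _ in range(max_iter):
        integral = ((w * mu * H)[None, :] / (mu[:, None] + mu[None, :])).sum(axis=1)
        H_new = 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral)
        if np.max(np.abs(H_new - H)) < tol:
            H = H_new
            break
        H = 0.5 * (H + H_new)  # damped update for robust convergence
    # Evaluate at the requested argument x from the converged nodal values.
    integral_x = np.sum(w * mu * H / (x + mu))
    return 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral_x)
```

For ω = 0 the function is identically 1, and H grows monotonically with ω at fixed x, which provides simple sanity checks on the iteration.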
The analytical design of spectral measurements for multispectral remote sensor systems
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)
1979-01-01
The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.
Neural network approach for the calculation of potential coefficients in quantum mechanics
NASA Astrophysics Data System (ADS)
Ossandón, Sebastián; Reyes, Camilo; Cumsille, Patricio; Reyes, Carlos M.
2017-05-01
A numerical method based on artificial neural networks is used to solve the inverse Schrödinger equation for a multi-parameter class of potentials. First, the finite element method was used to repeatedly solve the direct problem for different parametrizations of the chosen potential function. Then, using the computed eigenvalues as a training set, a direct radial basis neural network was trained to map potential parameters to eigenvalues. This relationship was later inverted and refined by training an inverse radial basis neural network, allowing the calculation of the unknown parameters and hence an estimate of the potential function. Three numerical examples are presented to demonstrate the effectiveness of the method. The results show that the proposed method has the advantage of using fewer computational resources without a significant loss of accuracy.
Quantum Monte Carlo for atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1–4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90–100% of the correlation energy) have been obtained with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry
NASA Astrophysics Data System (ADS)
Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki
2015-08-01
In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources that follow the gas motion caused by a spiral or bar potential, with a distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the input model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ∼1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
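The Markov chain Monte Carlo step can be sketched on a toy one-parameter version of the problem: a flat rotation curve Θ0 fit to simulated circular speeds. The numbers (240 km/s, 10 km/s noise, 500 sources) are illustrative, not the paper's Galaxy model:

```python
import math
import random

random.seed(42)

# Toy "astrometric" sample: circular speeds of 500 simulated sources drawn
# from a flat rotation curve with Theta0 = 240 km/s and 10 km/s noise.
THETA0_TRUE, SIGMA, N_SRC = 240.0, 10.0, 500
data = [random.gauss(THETA0_TRUE, SIGMA) for _ in range(N_SRC)]
s1, s2 = sum(data), sum(v * v for v in data)

def log_likelihood(theta0):
    """Gaussian log-likelihood, using precomputed sums for speed."""
    return -0.5 * (s2 - 2.0 * theta0 * s1 + N_SRC * theta0 ** 2) / SIGMA ** 2

# Metropolis sampling of the posterior for Theta0 (flat prior).
chain, theta = [], 200.0
for step in range(20000):
    proposal = theta + random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(theta):
        theta = proposal
    if step >= 2000:              # discard burn-in samples
        chain.append(theta)

theta_mean = sum(chain) / len(chain)
theta_std = (sum((t - theta_mean) ** 2 for t in chain) / len(chain)) ** 0.5
```

With 500 sources the posterior width is roughly SIGMA/√N ≈ 0.45 km/s, i.e. about 0.2% of Θ0, which is the scaling behind the percent-level accuracies quoted in the abstract.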
Yu, Jue; Zhuang, Jian; Yu, Dehong
2015-01-01
This paper concerns a state feedback integral control using a Lyapunov function approach for a rotary direct drive servo valve (RDDV) while considering parameter uncertainties. Modeling of this RDDV servovalve reveals that its mechanical performance is deeply influenced by friction torques and flow torques; however, these torques are uncertain and mutable due to the nature of fluid flow. To eliminate load resistance and to achieve satisfactory position responses, this paper develops a state feedback control that integrates an integral action and a Lyapunov function. The integral action is introduced to address the nonzero steady-state error; in particular, the Lyapunov function is employed to improve control robustness by adjusting the varying parameters within their value ranges. This new controller also has the advantages of simple structure and ease of implementation. Simulation and experimental results demonstrate that the proposed controller can achieve higher control accuracy and stronger robustness. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
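The role of the polynomial mapping function can be illustrated with a synthetic 2D version of the dot-card experiment. The quadratic radial distortion, grid, and cubic basis below are illustrative assumptions, not the paper's actual lens model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration target: known dot positions on a normalized grid,
# and a simulated reconstruction with mild radial distortion plus noise.
gx, gy = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
true_xy = np.stack([gx.ravel(), gy.ravel()], axis=1)
r2 = (true_xy ** 2).sum(axis=1, keepdims=True)
distorted = true_xy * (1.0 + 0.05 * r2) + rng.normal(0.0, 1e-4, true_xy.shape)

def design(xy):
    """Cubic polynomial basis in (x, y) used as the mapping function."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2,
                     x ** 2 * y, x * y ** 2, x ** 3, y ** 3], axis=1)

# One least-squares fit per output coordinate maps distorted -> true.
coeffs, *_ = np.linalg.lstsq(design(distorted), true_xy, rcond=None)
corrected = design(distorted) @ coeffs

rms_before = np.sqrt(((distorted - true_xy) ** 2).sum(axis=1).mean())
rms_after = np.sqrt(((corrected - true_xy) ** 2).sum(axis=1).mean())
```

No lens parameters enter the fit: the mapping coefficients are determined entirely from correspondences between known and reconstructed dot positions, which is the key property the paper exploits.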
NASA Astrophysics Data System (ADS)
Vishwakarma, Vinod
Modified Modal Domain Analysis (MMDA) is a novel method for the development of a reduced-order model (ROM) of a bladed rotor. The method utilizes proper orthogonal decomposition (POD) of Coordinate Measurement Machine (CMM) data on blade geometries and sector analyses using ANSYS. For the first time, an MMDA ROM is generated for a geometrically mistuned industrial-scale rotor (the Transonic rotor) with a large finite element (FE) model. Two methods for estimating the mass and stiffness mistuning matrices are used: (a) exact computation from sector FE analysis, and (b) estimates based on POD mistuning parameters. Modal characteristics such as mistuned natural frequencies, mode shapes, and forced harmonic response are obtained from the ROM for various cases, and the results are compared with full-rotor ANSYS analysis and with other ROM methods such as the Subset of Nominal Modes (SNM) and the Fundamental Model of Mistuning (FMM). The accuracy of the MMDA ROM is demonstrated with variations in the number of POD features and geometric mistuning parameters. It is shown for case (b) that the high accuracy of the ROM observed in previous work with the Academic rotor does not directly translate to the Transonic rotor. The reasons for this mismatch are investigated and attributed to the higher mistuning in the Transonic rotor. Alternate solutions, such as estimation of sensitivities via least squares and interpolation of mass and stiffness matrices on manifolds, are developed and their results discussed. Statistics such as the mean and standard deviation of the peak amplitude of the forced harmonic response are obtained from random permutations and shown to agree with Monte Carlo simulations. These statistics are obtained and compared for a 3-degree-of-freedom (DOF) lumped parameter model (LPM) of a rotor, the Academic rotor, and the Transonic rotor.
A state estimator based on the MMDA ROM and a Kalman filter is also developed for offline or online estimation of the harmonic forcing function from measurements of the forced response. The forcing function is estimated for synchronous excitation of the 3DOF rotor model, the Academic rotor, and the Transonic rotor from response measurements at a few nodes. For asynchronous excitation, the forcing function is estimated only for the 3DOF rotor model and the Academic rotor. The impact of the number of measurement locations and the accuracy of the ROM on the estimation of the forcing function is discussed.
Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge
2014-12-01
Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
The H,G_1,G_2 photometric system with scarce observational data
NASA Astrophysics Data System (ADS)
Penttilä, A.; Granvik, M.; Muinonen, K.; Wilkman, O.
2014-07-01
The H,G_1,G_2 photometric system was officially adopted at the IAU General Assembly in Beijing, 2012. The system replaced the H,G system from 1985. The 'photometric system' is a parametrized model V(α; params) for the magnitude-phase relation of small Solar System bodies, and its main purpose is to predict the magnitude at backscattering, H := V(0°), i.e., the (absolute) magnitude of the object. The original H,G system was designed using the best available data in 1985, but observations made since then show certain features, especially near backscattering, to which the H,G function has trouble adjusting. The H,G_1,G_2 system was developed especially to address these issues [1]. With a sufficient number of high-accuracy observations and with wide phase-angle coverage, the H,G_1,G_2 system performs well. However, with scarce low-accuracy data the system has trouble producing a reliable fit, as would any other three-parameter nonlinear function. Therefore, simultaneously with the H,G_1,G_2 system, a two-parameter version of the model, the H,G_{12} system, was introduced [1]. The two-parameter version ties the parameters G_1,G_2 into a single parameter G_{12} by a linear relation, and still uses the H,G_1,G_2 system in the background. This version dramatically improves the chances of obtaining a reliable phase-curve fit to scarce data. The number of observed small bodies is increasing all the time, and so is the need to produce estimates for the absolute magnitude, diameter, albedo, and other size- and composition-related parameters. The lack of small-phase-angle observations is especially topical for near-Earth objects (NEOs). With these, even the two-parameter version faces problems. The previous procedure with the H,G system in such circumstances has been to fix the G parameter to some constant value, thus fitting only a single-parameter function.
In conclusion, there is a definitive need for a reliable procedure to produce photometric fits to very scarce and low-accuracy data. A few details should be considered with the H,G_1,G_2 or H,G_{12} systems when data are scarce. The first point is the distribution of errors in the fit. The original H,G system allowed linear regression in flux space, making the estimation computationally easier, and the same principle was repeated with the H,G_1,G_2 system. There is, however, a major hidden assumption in that transformation. In regression modeling, the residuals should be distributed symmetrically around zero; if they are normally distributed, even better. We have noticed that, at least with some NEO observations, the residuals in flux space are far from symmetric, and seem to be much more symmetric in magnitude space. The result is that a nonlinear fit in magnitude space is far more reliable than the linear fit in flux space. Since computers and nonlinear regression algorithms are efficient enough, we conclude that, in many cases with low-accuracy data, the nonlinear fit should be favored. In fact, there are statistical procedures that should be employed with the photometric fit. At the moment, the choice between the three-parameter and two-parameter versions is based simply on subjective decision-making. By checking parameter errors and model comparison statistics, the choice could be made objectively. Similarly, the choice between the linear fit in flux space and the nonlinear fit in magnitude space should be based on a statistical test of unbiased residuals. Furthermore, the so-called Box-Cox transform could be employed to find an optimal transformation somewhere between the magnitude and flux spaces. The H,G_1,G_2 system is based on cubic splines, and is therefore a bit more complicated to implement than a system with simpler basis functions.
The same applies to a complete program that would automatically choose the best transforms to data, test if two- or three-parameter version of the model should be fitted, and produce the fitted parameters with their error estimates. Our group has already made implementations of the H,G_1,G_2 system publicly available [2]. We plan to implement the abovementioned improvements to the system and make also these tools public.
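The hidden-assumption point above — symmetric, zero-mean errors in magnitude space become skewed and biased multiplicative factors in flux space — can be checked with a short simulation (the 0.3 mag scatter is an arbitrary illustrative value):

```python
import random

random.seed(7)
SIGMA_MAG = 0.3            # photometric scatter, in magnitudes (illustrative)
N = 100000

# Symmetric, zero-mean residuals in magnitude space ...
mag_resid = [random.gauss(0.0, SIGMA_MAG) for _ in range(N)]
# ... become multiplicative (lognormal) factors in flux space.
flux_ratio = [10.0 ** (-0.4 * e) for e in mag_resid]

mean_mag = sum(mag_resid) / N
mean_flux_ratio = sum(flux_ratio) / N
# For lognormal ratios, E[10^(-0.4 e)] = exp((0.4 ln10 * sigma)^2 / 2) > 1,
# so a least-squares fit in flux space sees biased, asymmetric residuals.
```

With σ = 0.3 mag the mean flux ratio is about exp(0.038) ≈ 1.04, while the magnitude residuals average to zero: exactly the asymmetry that makes the flux-space linear fit unreliable for low-accuracy data.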
Parameter estimation accuracies of Galactic binaries with eLISA
NASA Astrophysics Data System (ADS)
Błaut, Arkadiusz
2018-09-01
We study the parameter estimation accuracy of nearly monochromatic sources of gravitational waves with future eLISA-like detectors. eLISA will be capable of observing millions of such signals, generated by orbiting pairs of compact objects (white dwarfs, neutron stars, or black holes), and of resolving and estimating parameters for several thousand of them, providing crucial information regarding their orbital dynamics, formation rates, and evolutionary paths. Using Fisher matrix analysis, we compare the accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team established to assess the scientific capabilities and technological issues of the eLISA-like missions.
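The Fisher matrix machinery can be sketched for a toy nearly monochromatic signal in white noise; the inverse Fisher matrix gives the Cramér-Rao lower bounds on the parameter errors. The amplitude, noise level, and cadence below are arbitrary illustrative numbers, not eLISA specifications:

```python
import numpy as np

# Toy nearly monochromatic signal h(t) = A sin(2 pi f t + phi) in white
# Gaussian noise of standard deviation SIGMA per sample.
A, F0, PHI, SIGMA = 1.0, 1e-3, 0.3, 10.0
ts = np.arange(0.0, 1.0e5, 1.0)            # ~28 hours of 1 Hz samples

phase = 2.0 * np.pi * F0 * ts + PHI
partials = [
    np.sin(phase),                          # d h / d A
    A * 2.0 * np.pi * ts * np.cos(phase),   # d h / d f
    A * np.cos(phase),                      # d h / d phi
]

# Fisher information matrix F_ij = sum_t (dh/di)(dh/dj) / sigma^2;
# its inverse bounds the covariance of any unbiased estimator.
fisher = np.array([[np.dot(pi, pj) for pj in partials]
                   for pi in partials]) / SIGMA ** 2
cov = np.linalg.inv(fisher)
sigma_A, sigma_f, sigma_phi = np.sqrt(np.diag(cov))
```

The characteristic behavior appears directly: the frequency error shrinks with the cube of the observation time (through the t-weighted partial), so long baselines determine f far better than A or φ, which is why mission duration matters so much for Galactic binary science.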
Hartman, Joshua D; Day, Graeme M; Beran, Gregory J O
2016-11-02
Chemical shift prediction plays an important role in the determination or validation of crystal structures with solid-state nuclear magnetic resonance (NMR) spectroscopy. One of the fundamental theoretical challenges lies in discriminating variations in chemical shifts resulting from different crystallographic environments. Fragment-based electronic structure methods provide an alternative to the widely used plane wave gauge-including projector augmented wave (GIPAW) density functional technique for chemical shift prediction. Fragment methods allow hybrid density functionals to be employed routinely in chemical shift prediction, and we have recently demonstrated appreciable improvements in the accuracy of the predicted shifts when using the hybrid PBE0 functional instead of generalized gradient approximation (GGA) functionals like PBE. Here, we investigate the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone. We demonstrate that the use of the hybrid density functional instead of a GGA provides both higher accuracy in the chemical shifts and increased discrimination among the different crystallographic environments. Finally, these results also provide compelling evidence for the transferability of the linear regression parameters mapping predicted chemical shieldings to chemical shifts that were derived in an earlier study.
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to meet both requirements. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation with the VD scheme usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design that is robust both to changes in the PDF parameters and to randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
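A one-dimensional sketch of the segmented design: the low-frequency band of k-space is kept fully sampled, and a variable-density PDF is applied only to the high frequencies. The sizes, decay law, and reduction factor are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                      # number of k-space lines (phase encodes)
R = 4                        # target reduction factor
CENTER_FRAC = 0.08           # fully sampled low-frequency band

k = np.arange(N) - N // 2
center = np.abs(k) <= int(CENTER_FRAC * N / 2)

# Variable-density PDF over the high-frequency region only: sampling
# probability decays with |k|, scaled so the expected total matches N / R.
pdf = 1.0 / (1.0 + (np.abs(k) / (N / 16.0)) ** 2)
pdf[center] = 0.0
budget = N / R - center.sum()
pdf *= budget / pdf.sum()
pdf = np.clip(pdf, 0.0, 1.0)

mask = center | (rng.random(N) < pdf)
reduction = N / mask.sum()
```

Because the center is deterministic, the randomness of the draw only perturbs high-frequency coverage, which is the mechanism behind the robustness to PDF-parameter changes reported in the abstract.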
NASA Astrophysics Data System (ADS)
Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido
2016-04-01
In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, respectively, a single objective function is posed that measures moisture and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to parameters to identify evapotranspiration and drainage models and estimate water and energy balance flux components. The uncertainty of the estimated parameters (and associated statistical confidence limits) is obtained through the inverse of the Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale region of Gourma in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and physiographic characteristics of the region. The focus is to find the functional form of the evaporative fraction dependence on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System
NASA Technical Reports Server (NTRS)
Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.
2013-01-01
Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.
Djioua, Moussa; Plamondon, Réjean
2009-11-01
In this paper, we present a new analytical method for estimating the parameters of Delta-Lognormal functions and characterizing handwriting strokes. According to the Kinematic Theory of rapid human movements, these parameters contain information on both the motor commands and the timing properties of a neuromuscular system. The new algorithm, called XZERO, exploits relationships between the zero crossings of the first and second time derivatives of a lognormal function and its four basic parameters. The methodology is described and then evaluated under various testing conditions. The new tool allows a greater variety of stroke patterns to be processed automatically. Furthermore, for the first time, the extraction accuracy is quantified empirically, taking advantage of the exponential relationships that link the dispersion of the extraction errors with its signal-to-noise ratio. A new extraction system which combines this algorithm with two other previously published methods is also described and evaluated. This system provides researchers involved in various domains of pattern analysis and artificial intelligence with new tools for the basic study of single strokes as primitives for understanding rapid human movements.
Foundations for Measuring Volume Rendering Quality
NASA Technical Reports Server (NTRS)
Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
Tracer techniques for urine volume determination and urine collection and sampling back-up system
NASA Technical Reports Server (NTRS)
Ramirez, R. V.
1971-01-01
The feasibility, functionality, and overall accuracy of lithium as a chemical tracer in urine were investigated as a means of indirectly determining total urine volume by atomic absorption spectrophotometry. Experiments were conducted to investigate the parameters of instrumentation, tracer concentration, mixing times, and methods for incorporating the tracer material in the urine collection bag, and to refine and optimize the urine tracer technique to comply with the Skylab scheme and its operational requirements of ±2% volume error and ±1% accuracy in the amount of tracer added to each container. In addition, a back-up method for the urine collection and sampling system was developed and evaluated. This back-up method incorporates the tracer technique for volume determination in the event of failure of the primary urine collection and preservation system. One chemical preservative was selected and evaluated as a contingency preservative for the storage of urine in the event of failure of the urine cooling system.
QSAR Study for Carcinogenic Potency of Aromatic Amines Based on GEP and MLPs
Song, Fucheng; Zhang, Anling; Liang, Hui; Cui, Lianhua; Li, Wenlian; Si, Hongzong; Duan, Yunbo; Zhai, Honglin
2016-01-01
A new analysis strategy was used to classify the carcinogenicity of aromatic amines. Physical-chemical parameters are closely related to the carcinogenicity of compounds. Quantitative structure-activity relationship (QSAR) modeling is a method of predicting the carcinogenicity of aromatic amines that can reveal the relationship between carcinogenicity and physical-chemical parameters. This study used gene expression programming (implemented in the APS software) and multilayer perceptrons (implemented in Weka) to predict the carcinogenicity of aromatic amines. Both methods relied on molecular descriptors calculated with the CODESSA software, and eight molecular descriptors were selected to build the function equations. Notably, the accuracies of gene expression programming on the training and test sets were 0.92 and 0.82, while the accuracies of the multilayer perceptrons were 0.84 and 0.74, respectively. The precision of gene expression programming is thus clearly superior to that of the multilayer perceptrons on both the training and test sets. QSAR is an efficient method for identifying carcinogenic compounds. PMID:27854309
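As a sketch of the multilayer-perceptron half of this workflow, the snippet below trains a from-scratch one-hidden-layer network on synthetic stand-ins for eight molecular descriptors and reports training and test accuracy; the descriptors, labels, and network size are all hypothetical (the study computed real descriptors with CODESSA and used Weka's MLP).

```python
import numpy as np

# Synthetic "descriptor" matrix and a binary carcinogenicity label
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 3] - 0.8 * X[:, 5] > 0).astype(float)
X_tr, X_te, y_tr, y_te = X[:225], X[225:], y[:225], y[225:]

# One-hidden-layer MLP trained by full-batch gradient descent
H = 8
W1 = rng.normal(0, 0.5, (8, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):
    h = np.tanh(X_tr @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)                 # output probability
    g = (p - y_tr) / len(y_tr)               # dLoss/dlogit (cross-entropy)
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)      # backprop through tanh
    gW1 = X_tr.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict(M):
    return (sigmoid(np.tanh(M @ W1 + b1) @ W2 + b2) > 0.5).astype(float)

train_acc = np.mean(predict(X_tr) == y_tr)
test_acc = np.mean(predict(X_te) == y_te)
```

As in the study, accuracy is simply the fraction of correctly classified compounds on each split.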
Speed accuracy trade-off under response deadlines
Karşılar, Hakan; Simen, Patrick; Papadakis, Samantha; Balcı, Fuat
2014-01-01
Perceptual decision making has been successfully modeled as a process of evidence accumulation up to a threshold. In order to maximize the rewards earned for correct responses in tasks with response deadlines, participants should collapse decision thresholds dynamically during each trial so that a decision is reached before the deadline. This strategy ensures on-time responding, though at the cost of reduced accuracy, since slower decisions are based on lower thresholds and less net evidence later in a trial (compared to a constant threshold). Frazier and Yu (2008) showed that the normative rate of threshold reduction depends on deadline delays and on participants' uncertainty about these delays. Participants should start collapsing decision thresholds earlier when making decisions under shorter deadlines (for a given level of timing uncertainty) or when timing uncertainty is higher (for a given deadline). We tested these predictions using human participants in a random dot motion discrimination task. Each participant was tested in free-response, short deadline (800 ms), and long deadline conditions (1000 ms). Contrary to optimal-performance predictions, the resulting empirical function relating accuracy to response time (RT) in deadline conditions did not decline to chance level near the deadline; nor did the slight decline we typically observed relate to measures of endogenous timing uncertainty. Further, although this function did decline slightly with increasing RT, the decline was explainable by the best-fitting parameterization of Ratcliff's diffusion model (Ratcliff, 1978), whose parameters are constant within trials. Our findings suggest that at the very least, typical decision durations are too short for participants to adapt decision parameters within trials. PMID:25177265
Jorge, Antônio José Lagoeiro; Ribeiro, Mario Luiz; Rosa, Maria Luiza Garcia; Licio, Fernanda Volponi; Fernandes, Luiz Cláudio Maluhy; Lanzieri, Pedro Gemal; Jorge, Bruno Afonso Lagoeiro; Brito, Flavia Oliveira Xavier; Mesquita, Evandro Tinoco
2012-02-01
The pathophysiological model of heart failure (HF) with preserved ejection fraction (HFPEF) focuses on the presence of diastolic dysfunction, which causes left atrial (LA) structural and functional changes. The LA size, an indicator of the chronic elevation of the left ventricular (LV) filling pressure, can be used as a marker of the presence of HFPEF, and it is easily obtained. To estimate the accuracy of measuring the LA size by using indexed LA volume and diameter (ILAV and ILAD, respectively) for diagnosing HFPEF in ambulatory patients. This study assessed 142 patients (mean age, 67.3 ± 11.4 years; 75% of the female sex) suspected of having HF, divided into two groups: with HFPEF (n = 35) and without HFPEF (n = 107). The diastolic function, assessed by use of Doppler echocardiography, showed a significant difference between the groups regarding the parameters assessing ventricular relaxation (E': 6.9 ± 2.0 cm/s vs. 9.3 ± 2.5 cm/s; p < 0.0001) and LV filling pressure (E/E' ratio: 15.2 ± 6.4 vs. 7.6 ± 2.2; p < 0.0001). The ILAV cutoff point of 35 mL/m² best correlated with the diagnosis of HFPEF, showing sensitivity, specificity, and accuracy of 83%. The ILAD cutoff point of 2.4 cm/m² showed sensitivity of 71%, specificity of 66%, and accuracy of 67%. For diagnosing HFPEF in ambulatory patients, the ILAV proved to be a more accurate parameter than ILAD. On echocardiographic assessment, ILAV, rather than ILAD, should be routinely measured.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2015-01-01
Variable-Domain Displacement Transfer Functions were formulated for shape predictions of complex wing structures, for which surface strain-sensing stations must be properly distributed to avoid jointed junctures, and must be increased in the high strain gradient region. Each embedded beam (depth-wise cross section of structure along a surface strain-sensing line) was discretized into small variable domains. Thus, the surface strain distribution can be described with a piecewise linear or a piecewise nonlinear function. Through discretization, the embedded beam curvature equation can be piecewise integrated to obtain the Variable-Domain Displacement Transfer Functions (for each embedded beam), which are expressed in terms of geometrical parameters of the embedded beam and the surface strains along the strain-sensing line. By inputting the surface strain data into the Displacement Transfer Functions, slopes and deflections along each embedded beam can be calculated for mapping out overall structural deformed shapes. A long tapered cantilever tubular beam was chosen for shape prediction analysis. The input surface strains were analytically generated from finite-element analysis. The shape prediction accuracies of the Variable-Domain Displacement Transfer Functions were then determined in light of the finite-element-generated slopes and deflections, and were found to be comparable to the accuracies of the constant-domain Displacement Transfer Functions.
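The strain-to-deflection chain described above (surface strain → curvature → slope → deflection, integrated piecewise from the clamped root) can be sketched numerically; this minimal version assumes a uniform-depth cantilever, so the taper-dependent terms of the actual transfer functions are omitted, and all dimensions are illustrative.

```python
import numpy as np

L, c = 1.0, 0.05                  # beam length (m), half-depth (m); illustrative
x = np.linspace(0.0, L, 101)      # strain-sensing stations along the beam

# Surface strain for a tip-loaded cantilever: linear, max at the root
eps0 = 1e-3
eps = eps0 * (1 - x / L)

kappa = eps / c                   # curvature from surface strain: kappa = eps/c

# Piecewise (trapezoidal) integration, clamped root: slope = deflection = 0
slope = np.concatenate(
    ([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
defl = np.concatenate(
    ([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))

# Closed-form tip deflection for this strain field: eps0 * L^2 / (3c)
tip_exact = eps0 * L**2 / (3 * c)
```

With 101 stations the piecewise result matches the closed form to a small fraction of a percent, mirroring how denser sensing stations improve accuracy in high strain gradient regions.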
A new ART code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code, based on the iterative refinement method for least squares solutions, for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
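The basic ART iteration that such codes build on can be sketched as a Kaczmarz sweep over the projection equations A x = b; this is the textbook form, not the paper's iterative-refinement least-squares variant, and the system below is a random toy problem rather than interferometric data.

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Kaczmarz-style ART: cyclically project x onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Toy consistent system standing in for the tomographic projection matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
x_true = rng.normal(size=10)
b = A @ x_true                    # noise-free "projections"
x_rec = art(A, b)
```

On a consistent, well-conditioned system the sweeps converge to the exact solution; with noisy data the relaxation factor trades convergence speed against noise amplification.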
Mitchell, Peter J; Klarskov, Niels; Telford, Karen J; Hosker, Gordon L; Lose, Gunnar; Kiff, Edward S
2012-02-01
Anal acoustic reflectometry is a new reproducible technique that allows a viscoelastic assessment of anal canal function. Five new variables reflecting anal canal function are measured: the opening and closing pressure, opening and closing elastance, and hysteresis. The aim of this study was to assess whether the parameters measured in anal acoustic reflectometry are clinically valid between continent and fecally incontinent subjects. This was an age- and sex-matched study of continent and incontinent women. The study was conducted at a university teaching hospital. One hundred women (50 with fecal incontinence and 50 with normal bowel control) were included in the study. Subjects were age matched to within 5 years. Parameters measured with anal acoustic reflectometry and manometry were compared between incontinent and continent groups using a paired t test. Diagnostic accuracy was assessed by the use of receiver operating characteristic curves. Four of the 5 anal acoustic reflectometry parameters at rest were significantly different between continent and incontinent women (eg, opening pressure in fecally incontinent subjects was 31.6 vs 51.5 cm H2O in continent subjects, p = 0.0001). Both anal acoustic reflectometry parameters of squeeze opening pressure and squeeze opening elastance were significantly reduced in the incontinent women compared with continent women (50 vs 99.1 cm H2O, p = 0.0001 and 1.48 vs 1.83 cm H2O/mm, p = 0.012). In terms of diagnostic accuracy, opening pressure at rest measured by reflectometry was significantly superior in discriminating between continent and incontinent women in comparison with resting pressure measured with manometry (p = 0.009). Anal acoustic reflectometry is a new, clinically valid technique in the assessment of continent and incontinent subjects. This technique, which assesses the response of the anal canal to distension and relaxation, provides a detailed viscoelastic assessment of anal canal function. This technique may not only aid the investigation of fecally incontinent subjects, but it may also improve our understanding of anal canal physiology during both the process of defecation and maintenance of continence.
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze Atmospheric InfraRed Sounder/Advanced Microwave Sounding Unit/Humidity Sounder for Brazil (AIRS/AMSU/HSB) data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small, and the rms accuracy of lower tropospheric temperature retrieved with 80 percent cloud cover is about 0.5 K poorer than for clear cases. HSB failed in February 2003, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC (Distributed Active Archive Center) in April 2003 and is being used to analyze near-real-time AIRS/AMSU data. Historical AIRS/AMSU data, going backward from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.
Implementation and application of an interactive user-friendly validation software for RADIANCE
NASA Astrophysics Data System (ADS)
Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.
2012-02-01
RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.
Rizzo, Gaia; Raffeiner, Bernd; Coran, Alessandro; Ciprian, Luca; Fiocco, Ugo; Botsios, Costantino; Stramare, Roberto; Grisan, Enrico
2015-07-01
Inflammatory rheumatic diseases are the leading causes of disability and constitute a frequent medical disorder, leading to inability to work, high comorbidity, and increased mortality. The standard for diagnosing and differentiating arthritis is based on clinical examination, laboratory exams, and imaging findings, such as synovitis, bone edema, or joint erosions. Contrast-enhanced ultrasound (CEUS) examination of the small joints is emerging as a sensitive tool for assessing vascularization and disease activity. Quantitative assessment is mostly performed at the region-of-interest level, where the mean intensity curve is fitted with an exponential function. We showed that by using a more physiologically motivated perfusion curve, and by estimating the kinetic parameters separately pixel by pixel, the quantitative information gathered is able to more effectively characterize the different perfusion patterns. In particular, we demonstrated that a random forest classifier based on pixelwise quantification of the kinetic contrast agent perfusion features can discriminate rheumatoid arthritis from different arthritis forms (psoriatic arthritis, spondyloarthritis, and arthritis in connective tissue disease) with an average accuracy of 97%. In contrast, clinical evaluation (DAS28), semiquantitative CEUS assessment, serological markers, and region-based parameters do not achieve such a high diagnostic accuracy.
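The pixel-by-pixel kinetic fitting idea can be sketched as follows: each pixel's time-intensity curve is fitted independently, and the fitted parameters form maps that could then feed a classifier. The mono-exponential wash-in model, image size, and parameter ranges below are illustrative stand-ins for the paper's physiologically motivated perfusion curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def wash_in(t, A, k):
    # simple mono-exponential wash-in: plateau A, rate k
    return A * (1 - np.exp(-k * t))

t = np.linspace(0, 30, 60)                       # seconds
rng = np.random.default_rng(7)
A_map = rng.uniform(0.5, 2.0, size=(4, 4))       # tiny synthetic "image"
k_map = rng.uniform(0.1, 0.6, size=(4, 4))
video = wash_in(t, A_map[..., None], k_map[..., None]) \
        + rng.normal(0, 0.01, (4, 4, t.size))    # per-pixel noisy curves

# Fit the kinetic parameters separately, pixel by pixel
A_fit = np.zeros((4, 4)); k_fit = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        (A_fit[i, j], k_fit[i, j]), _ = curve_fit(
            wash_in, t, video[i, j], p0=[1.0, 0.3])
```

The resulting `A_fit`/`k_fit` maps are the pixelwise features; summarizing them per joint region would give the inputs a random forest could classify.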
Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael
2011-01-01
With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
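A bias-corrected, transformed-linear rating curve can be sketched as a log-log least-squares fit whose retransformation bias is corrected with Duan's smearing estimator — one common correction, used here for illustration since the abstract does not specify which correction the study applied; the synthetic discharge and load data are likewise illustrative.

```python
import numpy as np

# Synthetic discharge Q and sediment load C following a log-log relation
rng = np.random.default_rng(3)
Q = rng.lognormal(2.0, 0.8, 500)
C = np.exp(0.5 + 1.4 * np.log(Q) + rng.normal(0, 0.3, Q.size))

# Transformed-linear fit: log(C) = b0 + b1*log(Q) by ordinary least squares
X = np.column_stack([np.ones_like(Q), np.log(Q)])
beta, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
resid = np.log(C) - X @ beta

# Duan (1983) smearing factor corrects the back-transformation bias
smear = np.mean(np.exp(resid))

def predict_load(q):
    return smear * np.exp(beta[0] + beta[1] * np.log(q))

mean_load_naive = np.mean(np.exp(X @ beta))   # biased low on back-transform
mean_load_bc = np.mean(predict_load(Q))       # bias-corrected
```

Pairing `predict_load` with a flow-duration curve of Q gives the flow-duration, rating-curve estimate of mean load that the study evaluates.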
Effect of Target Location on Dynamic Visual Acuity During Passive Horizontal Rotation
NASA Technical Reports Server (NTRS)
Appelbaum, Meghan; DeDios, Yiri; Kulecz, Walter; Peters, Brian; Wood, Scott
2010-01-01
The vestibulo-ocular reflex (VOR) generates eye rotation to compensate for potential retinal slip in the specific plane of head movement. Dynamic visual acuity (DVA) has been utilized as a functional measure of the VOR. The purpose of this study was to examine changes in accuracy and reaction time when performing a DVA task with targets offset from the plane of rotation, e.g. offset vertically during horizontal rotation. Visual acuity was measured in 12 healthy subjects as they moved a hand-held joystick to indicate the orientation of a computer-generated Landolt C "as quickly and accurately as possible." Acuity thresholds were established with optotypes presented centrally on a wall-mounted LCD screen at 1.3 m distance, first without motion (static condition) and then while oscillating at 0.8 Hz (DVA, peak velocity 60 deg/s). The effect of target location was then measured during horizontal rotation with the optotypes randomly presented in one of nine different locations on the screen (offset up to 10 deg). The optotype size (logMar 0, 0.2 or 0.4, corresponding to Snellen range 20/20 to 20/50) and presentation duration (150, 300 and 450 ms) were counter-balanced across five trials, each utilizing horizontal rotation at 0.8 Hz. Dynamic acuity was reduced relative to static acuity in 7 of 12 subjects by one step size. During the random target trials, both accuracy and reaction time improved proportional to optotype size. Accuracy and reaction time also improved between 150 ms and 300 ms presentation durations. The main finding was that both accuracy and reaction time varied as a function of target location, with greater performance decrements when acquiring vertical targets. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of motion. Both reaction time and accuracy are functionally relevant DVA parameters of VOR function.
Research on natural frequency based on modal test for high speed vehicles
NASA Astrophysics Data System (ADS)
Ma, Guangsong; He, Guanglin; Guo, Yachao
2018-04-01
As a vibration system, a high speed vehicle may experience harmful resonance in flight. Acquiring the natural frequencies of the vehicle makes it possible to take measures to avoid exciting them. In this paper, a modal test of a high speed vehicle was therefore carried out using the running hammer method together with the PolyMAX modal parameter identification method. First, the overall frequency response functions and coherence functions of the high speed vehicle were obtained from the running hammer excitation test, and the modal assurance criterion (MAC) was used to check the accuracy of the estimated parameters. Second, the first three natural frequencies and the pole stabilization diagram of the high speed vehicle were obtained with the PolyMAX modal parameter identification method. Finally, the natural frequencies of the vibration system were accurately obtained by the running hammer method.
Zhang, Z; Jewett, D L
1994-01-01
Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.
Reducing trial length in force platform posturographic sleep deprivation measurements
NASA Astrophysics Data System (ADS)
Forsman, P.; Hæggström, E.; Wallin, A.
2007-09-01
Sleepiness correlates with sleep-related accidents, but convenient tests for sleepiness monitoring are scarce. The posturographic test is a method to assess balance, and this paper describes one phase of the development of a posturographic sleepiness monitoring method. We investigated the relationship between trial length and accuracy of the posturographic time-awake (TA) estimate. Twenty-one healthy adults were kept awake for 32 h and their balance was recorded, 16 times with 30 s trials, as a function of TA. The balance was analysed with regards to fractal dimension, most common sway amplitude and time interval for open-loop stance control. While a 30 s trial allows estimating the TA of individual subjects with better than 5 h accuracy, repeating the analysis using shorter trial lengths showed that 18 s sufficed to achieve the targeted 5 h accuracy. Moreover, it was found that with increasing TA, the posturographic parameters estimated the subjects' TA more accurately.
NASA Astrophysics Data System (ADS)
Ali, Abebe Mohammed; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Duren, Iris van; Heiden, Uta; Heurich, Marco
2016-03-01
Assessments of ecosystem functioning rely heavily on quantification of vegetation properties. The search is on for methods that produce reliable and accurate baseline information on plant functional traits. In this study, the inversion of the PROSPECT radiative transfer model was used to estimate two functional leaf traits: leaf dry matter content (LDMC) and specific leaf area (SLA). Inversion of PROSPECT usually aims at quantifying its direct input parameters. This is the first time the technique has been used to indirectly model LDMC and SLA. Biophysical parameters of 137 leaf samples were measured in July 2013 in the Bavarian Forest National Park, Germany. Spectra of the leaf samples were measured using an ASD FieldSpec3 equipped with an integrating sphere. PROSPECT was inverted using a look-up table (LUT) approach. The LUTs were generated with and without using prior information. The effect of incorporating prior information on the retrieval accuracy was studied before and after stratifying the samples into broadleaf and conifer categories. The estimated values were evaluated using R2 and normalized root mean square error (nRMSE). Among the retrieved variables the lowest nRMSE (0.0899) was observed for LDMC. For both traits higher R2 values (0.83 for LDMC and 0.89 for SLA) were discovered in the pooled samples. The use of prior information improved accuracy of the retrieved traits. The strong correlation between the estimated traits and the NIR/SWIR region of the electromagnetic spectrum suggests that these leaf traits could be assessed at canopy level by using remotely sensed data.
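The look-up-table inversion used here follows a generic pattern: simulate spectra over a grid of parameter values with a forward model, then match a measured spectrum to its nearest LUT entry. The sketch below uses a made-up two-parameter forward model in place of PROSPECT (which is not reimplemented), so the parameter names, grids, and spectra are purely illustrative.

```python
import numpy as np

wav = np.linspace(400, 2500, 211)        # nm, 10 nm sampling

def forward(cab, lma):
    # Hypothetical stand-in for a leaf radiative transfer model:
    # "cab" darkens the visible band, "lma" darkens the SWIR band.
    vis = (wav < 750)
    swir = (wav > 1300)
    return 0.5 * np.exp(-3e-3 * cab * vis) * np.exp(-20.0 * lma * swir) + 0.05

# Build the LUT over a grid of parameter combinations
cab_grid = np.linspace(10, 80, 36)       # spacing 2.0
lma_grid = np.linspace(0.002, 0.02, 37)  # spacing 0.0005
CAB, LMA = np.meshgrid(cab_grid, lma_grid, indexing="ij")
lut = np.array([forward(c, m) for c, m in zip(CAB.ravel(), LMA.ravel())])

# Invert a noisy "measured" spectrum by nearest-RMSE LUT entry
cab_true, lma_true = 42.0, 0.011
meas = forward(cab_true, lma_true) \
     + np.random.default_rng(5).normal(0, 0.001, wav.size)
best = np.argmin(np.sqrt(np.mean((lut - meas) ** 2, axis=1)))
cab_hat, lma_hat = CAB.ravel()[best], LMA.ravel()[best]
```

Prior information, as used in the study, would correspond to restricting (or reweighting) the parameter grid before the match.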
NASA Astrophysics Data System (ADS)
Zhu, Timothy C.; Lu, Amy; Ong, Yi-Hong
2016-03-01
Accurate determination of in-vivo light fluence rate is critical for preclinical and clinical studies involving photodynamic therapy (PDT). This study compares the longitudinal light fluence distribution inside biological tissue in the central axis of a 1 cm diameter circular uniform light field for a range of in-vivo tissue optical properties (absorption coefficients (μa) between 0.01 and 1 cm-1 and reduced scattering coefficients (μs') between 2 and 40 cm-1). This was done using Monte-Carlo simulations for a semi-infinite turbid medium in an air-tissue interface. The end goal is to develop an analytical expression that would fit the results from the Monte Carlo simulation for both the 1 cm diameter circular beam and the broad beam. Each of these parameters is expressed as a function of tissue optical properties. These results can then be compared against the existing expressions in the literature for broad beam for analysis in both accuracy and applicable range. Using the 6-parameter model, the range and accuracy for light transport through biological tissue is improved and may be used in the future as a guide in PDT for light fluence distribution for known tissue optical properties.
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential step before image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
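The shape of such a cost function — data terms plus a penalty pulling the fused image's mean gradient magnitude toward a target value — can be sketched as below; the quadratic data terms, the specific GS definition, and the omission of the registration warp are all simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_strength(img):
    # mean gradient magnitude of the image
    gx, gy = np.gradient(img)
    return np.mean(np.hypot(gx, gy))

def fusion_cost(fused, src_a, src_b, g_target, lam=10.0):
    # data terms pull the fused image toward each (registered) source
    data = np.mean((fused - src_a) ** 2) + np.mean((fused - src_b) ** 2)
    # GS regularizer penalizes deviation from the target gradient strength
    reg = lam * (gradient_strength(fused) - g_target) ** 2
    return data + reg

# Tiny demo: the average of the sources minimizes the data terms
rng = np.random.default_rng(2)
a = rng.random((16, 16))
b = rng.random((16, 16))
avg = 0.5 * (a + b)
g_t = gradient_strength(avg)     # target GS chosen at the average's GS
```

Raising `g_target` favors sharper fused images; lowering it favors smoother, noise-suppressed ones, which is the control knob the abstract describes.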
Parametric motion control of robotic arms: A biologically based approach using neural networks
NASA Technical Reports Server (NTRS)
Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.
1993-01-01
A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
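The parametric idea, one stored prototype torque profile per joint scaled in magnitude and duration by network outputs, can be sketched as follows (the bell-shaped prototype and its two parameters are illustrative assumptions, not the paper's actual profiles):

```python
import numpy as np

def prototype_torque(s):
    # Unit-amplitude, unit-duration bell-shaped torque profile on s in [0, 1],
    # a stand-in for a stored "motor program"
    return np.sin(np.pi * np.clip(s, 0.0, 1.0)) ** 2

def scaled_torque(t, amplitude, duration):
    # Magnitude and time scaling of the stored prototype, as a central
    # pattern generator would apply its per-movement parameters
    return amplitude * prototype_torque(np.asarray(t, dtype=float) / duration)
```

Only the scalar pair (amplitude, duration) per joint needs to be produced by the network for each movement, rather than the entire torque-time trajectory.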
Modeling Brain Dynamics in Brain Tumor Patients Using the Virtual Brain.
Aerts, Hannelore; Schirner, Michael; Jeurissen, Ben; Van Roost, Dirk; Achten, Eric; Ritter, Petra; Marinazzo, Daniele
2018-01-01
Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. To this end, noninvasive neuroimaging techniques such as functional MRI and diffusion-weighted imaging fiber tracking are currently employed. However, taking into account this information is often still insufficient, as the complex nonlinear dynamics of the brain impede straightforward prediction of functional outcome after surgical intervention. Large-scale brain network modeling carries the potential to bridge this gap by integrating neuroimaging data with biophysically based models to predict collective brain dynamics. As a first step in this direction, an appropriate computational model has to be selected, after which suitable model parameter values have to be determined. To this end, we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong-Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance.
Atmospheric stellar parameters from cross-correlation functions
NASA Astrophysics Data System (ADS)
Malavolta, L.; Lovis, C.; Pepe, F.; Sneden, C.; Udry, S.
2017-08-01
The increasing number of spectra gathered by spectroscopic sky surveys and transiting exoplanet follow-up has pushed the community to develop automated tools for atmospheric stellar parameter determination. Here we present a novel approach that allows the measurement of temperature (Teff), metallicity ([Fe/H]), and gravity (log g) within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, our technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. We use literature stellar parameters of high signal-to-noise (SNR), high-resolution HARPS spectra of FGK main-sequence stars to calibrate Teff, [Fe/H], and log g as a function of CCF parameters. Our technique is validated using low-SNR spectra obtained with the same instrument. For FGK stars we achieve a precision of σ(Teff) = 50 K, σ(log g) = 0.09 dex, and σ([Fe/H]) = 0.035 dex at SNR = 50, while the precision for observations with SNR ≳ 100 and the overall accuracy are constrained by the literature values used to calibrate the CCFs. Our approach can easily be extended to other instruments with similar spectral range and resolution, or to other spectral ranges and to stars other than FGK dwarfs, if a large sample of reference stars is available for the calibration. Additionally, we provide the mathematical formulation to convert synthetic equivalent widths to CCF parameters as an alternative to direct calibration. We have made our tool publicly available.
Madi, Mahmoud K; Karameh, Fadi N
2017-01-01
Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF.
Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating a variety of biological phenomena models where the underlying process dynamics unfold at time scales faster than those seen in collected measurements. PMID:28727850
NASA Astrophysics Data System (ADS)
Li, Y.; Rong, Z.
2017-12-01
The surface Bidirectional Reflectance Distribution Function (BRDF) is a key parameter affecting the vicarious calibration accuracy of visible-channel remote sensing instruments. Over the past 30 years, many studies have been made and a variety of models have been established. Among them, the Ross-Li model is widely approved and used. Unfortunately, the model does not suit desert and Gobi surfaces well, because the scattering kernel it contains requires factors such as plant height and plant spacing. A new BRDF model for surfaces without vegetation, intended mainly for remote sensing vicarious calibration, is established here: the Equivalent Mirror Plane (EMP) BRDF. It is used to characterize the bidirectional reflectance of near-Lambertian surfaces. The accuracy of the EMP BRDF model is validated against directional reflectance data measured on the Dunhuang Gobi and compared with the Ross-Li model. Results show that the regression accuracy of the new model is 0.828, similar to that of the Ross-Li model (0.825). Because of its simple form (only four polynomials) and simple principle (derived from the Fresnel reflection principle, with no vegetation parameters), it is better suited to near-Lambertian surfaces such as Gobi, desert, the lunar surface, and reference panels. Results also show that the new model maintains high accuracy and stability under sparse observation, which is very important for the retrieval requirements of daily updated BRDF remote sensing products.
Optical properties reconstruction using the adjoint method based on the radiative transfer equation
NASA Astrophysics Data System (ADS)
Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir
2018-01-01
An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media like biological tissues. The light transport through such media is accurately described by the radiative transfer equation in the frequency domain. The adjoint method is used to efficiently compute the objective function gradient with respect to the optical parameters. Numerical tests show that the algorithm is accurate and robust in simultaneously retrieving the absorption μa and scattering μs coefficients for both weakly and highly absorbing media. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with reasonable accuracy. The main novelty in this work is the reconstruction of g, which might open the possibility of imaging this parameter in tissues as an additional contrast agent in optical tomography.
NASA Astrophysics Data System (ADS)
Yashima, Kenta; Ito, Kana; Nakamura, Kazuyuki
2013-03-01
When an infectious disease prevails throughout a population, epidemic parameters such as the basic reproduction ratio and the initial point of infection are estimated from time series data of the infected population. However, it is unclear how the structure of the host population affects this estimation accuracy. In other words, for what kind of city is it difficult to estimate epidemic parameters? To answer this question, epidemic data are simulated by constructing commuting networks with different structures and running the infection process over each network. From the resulting time series data for each network structure, we analyze the estimation accuracy of the epidemic parameters.
NASA Astrophysics Data System (ADS)
Hemmat Esfe, Mohammad; Tatar, Afshin; Ahangar, Mohammad Reza Hassani; Rostamian, Hossein
2018-02-01
Since conventional thermal fluids such as water, oil, and ethylene glycol have poor thermal properties, tiny solid particles are added to these fluids to improve their heat transfer. As viscosity determines the rheological behavior of a fluid, studying the parameters affecting viscosity is crucial. Since the experimental measurement of viscosity is expensive and time-consuming, predictive modeling is an attractive alternative. In this work, three artificial intelligence methods, Genetic Algorithm-Radial Basis Function Neural Networks (GA-RBF), Least Squares Support Vector Machine (LS-SVM), and Gene Expression Programming (GEP), were applied to predict the viscosity of TiO2/SAE 50 nano-lubricant with non-Newtonian power-law behavior using experimental data. The correlation factor (R2), Average Absolute Relative Deviation (AARD), Root Mean Square Error (RMSE), and margin of deviation were employed to investigate the accuracy of the proposed models. RMSE values of 0.58, 1.28, and 6.59 and R2 values of 0.99998, 0.99991, and 0.99777 reveal the accuracy of the proposed models for the respective GA-RBF, CSA-LSSVM, and GEP methods. Among the developed models, the GA-RBF shows the best accuracy.
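As an illustration of the radial-basis building block underlying the GA-RBF model, a plain least-squares Gaussian RBF fit might look like this (a sketch only; the study's network architecture, inputs, and genetic-algorithm training are not reproduced):

```python
import numpy as np

def design_matrix(x, centers, width):
    # Gaussian basis evaluated at each sample for each center
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def rbf_fit(x, y, centers, width):
    # Least-squares output weights for a fixed set of Gaussian units
    w, *_ = np.linalg.lstsq(design_matrix(x, centers, width), y, rcond=None)
    return w

def rbf_predict(x, centers, width, w):
    return design_matrix(x, centers, width) @ w
```

In a GA-RBF scheme, the genetic algorithm would additionally search over the centers and widths rather than fixing them as here.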
Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze
2018-04-01
Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with complementary prognostic value with respect to SUV and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method was evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for that derived using FLAB, which when combined with volume showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even accurate segmentation can lead to an inaccurate sphericity value, and vice versa.
Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method for which there was a small improvement in stratification when the parameters were combined.
NASA Astrophysics Data System (ADS)
Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.
2012-04-01
The ionosphere is a very important part of space weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters used to investigate ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source of global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere along the midlatitude regions can be modeled as a constant, linear or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proved itself a fast-converging and effective optimization tool in diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in the extraction of geophysical parameters such as the model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces that represent the trend and small-scale variability of various ionospheric states are necessary to compare the performance of mPSO over the number of iterations, accuracy in parameter estimation, and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated and the performance of mPSO is tested with respect to these bounds. For global models, the sample points that are used in optimization are obtained using the IGS receiver network.
For regional TEC models, regional networks such as the Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces which are functions of latitude and longitude. Global models require seven parameters for a single-centered Gaussian and 13 parameters for a double-centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction of various regional and global models. The normalized reconstruction error varies from 10^-4 for constant surfaces to 10^-3 for quadratic surfaces in regional models, sampled with regional networks. Even for the cases of a severe geomagnetic storm that affects measurements globally, with the IGS network, the reconstruction error is on the order of 10^-1 even though individual parameters have higher normalized errors. The modified PSO technique has proved itself a useful tool for parameter extraction of more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.
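A bare-bones PSO of the kind such a study would modify can be sketched as follows (textbook inertia/cognitive/social update with assumed coefficients; the mPSO modifications themselves are not described in the abstract):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=300, seed=0):
    """Minimize f over R^dim with a textbook particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(f(gbest))
```

For a regional model, `f` would be the misfit between a candidate constant, linear, or quadratic surface in latitude and longitude and the TEC values sampled at the receiver sites.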
High-accuracy phase-field models for brittle fracture based on a new family of degradation functions
NASA Astrophysics Data System (ADS)
Sargado, Juan Michael; Keilegavlen, Eirik; Berre, Inga; Nordbotten, Jan Martin
2018-02-01
Phase-field approaches to fracture based on energy minimization principles have been rapidly gaining popularity in recent years, and are particularly well-suited for simulating crack initiation and growth in complex fracture networks. In the phase-field framework, the surface energy associated with crack formation is calculated by evaluating a functional defined in terms of a scalar order parameter and its gradients. These in turn describe the fractures in a diffuse sense following a prescribed regularization length scale. Imposing stationarity of the total energy leads to a coupled system of partial differential equations that enforce stress equilibrium and govern phase-field evolution. These equations are coupled through an energy degradation function that models the loss of stiffness in the bulk material as it undergoes damage. In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic degradation function that is used most often in the literature.
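The role of a degradation function can be illustrated by the classical quadratic choice alongside a hypothetical one-parameter alternative (the exponential form below is an illustrative stand-in, not the family proposed in the paper); both satisfy g(0) = 1 (intact material) and g(1) = 0 (fully broken):

```python
import math

def g_quadratic(d):
    # Classical quadratic degradation: full stiffness at d = 0, none at d = 1
    return (1.0 - d) ** 2

def g_exponential(d, k=8.0):
    # Illustrative one-parameter family (hypothetical form): normalized so
    # g(0) = 1 and g(1) = 0, with k shaping how stiffness is lost over d
    return (math.exp(-k * d) - math.exp(-k)) / (1.0 - math.exp(-k))
```

Tuning a shape parameter like `k` is the kind of freedom a parametric family offers over the fixed quadratic, e.g. for controlling the load at which damage effectively begins.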
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
Efficient calculation of general Voigt profiles
NASA Astrophysics Data System (ADS)
Cope, D.; Khoury, R.; Lovett, R. J.
1988-02-01
An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
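For reference, the ordinary Voigt profile that these OIL profiles generalize is commonly evaluated via the Faddeeva function (standard formula; `scipy.special.wofz` is assumed available):

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    # Voigt profile via the Faddeeva function w(z):
    # V(x) = Re[w((x + i*gamma) / (sigma*sqrt(2)))] / (sigma*sqrt(2*pi)),
    # where sigma is the Gaussian width and gamma the Lorentzian half-width
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))
```

Unlike the OIL generalization, this profile is symmetric and has no speed-dependent shift or width, which is why a specialized Voigt routine can be faster than the general program.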
NASA Technical Reports Server (NTRS)
Scoggins, J. R.; Clark, T. L.; Possiel, N. C.
1975-01-01
Procedures for forecasting clear air turbulence in the stratosphere over the western United States from rawinsonde data are described and results presented. Approaches taken to relate meteorological parameters to regions of turbulence and nonturbulence encountered by the XB-70 during 46 flights at altitudes between 12-20 km include: empirical probabilities, discriminant function analysis, and mountainwave theory. Results from these techniques were combined into a procedure to forecast regions of clear air turbulence with an accuracy of 70-80 percent. A computer program was developed to provide an objective forecast directly from the rawinsonde sounding data.
Modeling multivariate time series on manifolds with skew radial basis functions.
Jamshidi, Arta A; Kirby, Michael J
2011-01-01
We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
GOCI image enhancement using an MTF compensation technique for coastal water applications.
Oh, Eunsong; Choi, Jong-Kuk
2014-11-03
The Geostationary Ocean Color Imager (GOCI) is the first optical sensor in geostationary orbit for monitoring the ocean environment around the Korean Peninsula. This paper discusses on-orbit modulation transfer function (MTF) estimation with the pulse-source method and its compensation results for the GOCI. Additionally, by analyzing the relationship between the MTF compensation effect and the accuracy of the secondary ocean product, we confirmed the optimal MTF compensation parameter for enhancing image quality without variation in the accuracy. In this study, MTF assessment was performed using a natural target because the GOCI system has a spatial resolution of 500 m. For MTF compensation with the Wiener filter, we fitted a point spread function with a Gaussian curve controlled by a standard deviation value (σ). After a parametric analysis for finding the optimal degradation model, the σ value of 0.4 was determined to be an optimal indicator. Finally, the MTF value was enhanced from 0.1645 to 0.2152 without degradation of the accuracy of the ocean color product. Enhanced GOCI images by MTF compensation are expected to recognize small-scale ocean products in coastal areas with sharpened geometric performance.
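The Wiener-filter step with a Gaussian point spread function described above can be sketched in the frequency domain as follows (illustrative parameters only; the actual GOCI processing chain and its degradation model are not reproduced):

```python
import numpy as np

def gaussian_psf(shape, sigma):
    # Centered, normalized Gaussian PSF; sigma in pixels plays the role
    # of the degradation-model parameter (0.4 was found optimal above)
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    yy, xx = np.meshgrid(y, x, indexing="ij")
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_restore(blurred, psf, nsr=1e-3):
    # Frequency-domain Wiener filter: conj(H) / (|H|^2 + NSR), where NSR
    # bounds noise amplification at frequencies the MTF has suppressed
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```

The noise-to-signal parameter `nsr` governs the same sharpening-versus-noise trade-off that motivates tuning σ against ocean-product accuracy.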
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.
NASA Astrophysics Data System (ADS)
Hart, Vern; Burrow, Damon; Li, X. Allen
2017-08-01
A systematic method is presented for determining optimal parameters in variable-kernel deformable image registration of cone beam CT and CT images, in order to improve accuracy and convergence for potential use in online adaptive radiotherapy. Assessed conditions included the noise constant (symmetric force demons), the kernel reduction rate, the kernel reduction percentage, and the kernel adjustment criteria. Four such parameters were tested in conjunction with reductions of 5, 10, 15, 20, 30, and 40%. Noise constants ranged from 1.0 to 1.9 for pelvic images in ten prostate cancer patients. A total of 516 tests were performed and assessed using the structural similarity index. Registration accuracy was plotted as a function of iteration number and a least-squares regression line was calculated, which implied an average improvement of 0.0236% per iteration. This baseline was used to determine if a given set of parameters under- or over-performed. The most accurate parameters within this range were applied to contoured images. The mean Dice similarity coefficient was calculated for bladder, prostate, and rectum with mean values of 98.26%, 97.58%, and 96.73%, respectively; corresponding to improvements of 2.3%, 9.8%, and 1.2% over previously reported values for the same organ contours. This graphical approach to registration analysis could aid in determining optimal parameters for Demons-based algorithms. It also establishes expectation values for convergence rates and could serve as an indicator of non-physical warping, which often occurred in cases >0.6% from the regression line.
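The graphical baseline described, a least-squares line of similarity versus iteration with runs flagged when they fall too far below it, can be sketched as follows (the 0.6% threshold is taken from the abstract; computation of the structural similarity index itself is assumed external):

```python
import numpy as np

def convergence_baseline(iterations, similarity):
    # Least-squares regression line through similarity-vs-iteration points;
    # its slope is the expected per-iteration accuracy gain
    slope, intercept = np.polyfit(iterations, similarity, 1)
    return float(slope), float(intercept)

def deviates(iteration, value, slope, intercept, tol=0.006):
    # True if a run falls more than tol below the baseline prediction,
    # which the study associates with under-performing parameters or
    # non-physical warping
    return (slope * iteration + intercept) - value > tol
```

A per-iteration slope of roughly 0.000236 in similarity units would correspond to the 0.0236% average improvement reported above.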
Thermodynamics and proton activities of protic ionic liquids with quantum cluster equilibrium theory
NASA Astrophysics Data System (ADS)
Ingenmey, Johannes; von Domaros, Michael; Perlt, Eva; Verevkin, Sergey P.; Kirchner, Barbara
2018-05-01
We applied the binary Quantum Cluster Equilibrium (bQCE) method to a number of alkylammonium-based protic ionic liquids in order to predict boiling points, vaporization enthalpies, and proton activities. The theory combines statistical thermodynamics of van-der-Waals-type clusters with ab initio quantum chemistry and yields the partition functions (and associated thermodynamic potentials) of binary mixtures over a wide range of thermodynamic phase points. Unlike conventional cluster approaches that are limited to the prediction of thermodynamic properties, dissociation reactions can be effortlessly included into the bQCE formalism, giving access to ionicities as well. The method is open to quantum chemical methods at any level of theory, but combination with low-cost composite density functional theory methods and the proposed systematic approach to generate cluster sets provides a computationally inexpensive and mostly parameter-free way to predict such properties with good-to-excellent accuracy. Boiling points can be predicted within an accuracy of 50 K, reaching excellent accuracy for ethylammonium nitrate. Vaporization enthalpies are predicted within an accuracy of 20 kJ mol-1 and can be systematically interpreted on a molecular level. We present the first theoretical approach to predict proton activities in protic ionic liquids, with results fitting well into the experimentally observed correlation. Furthermore, enthalpies of vaporization were measured experimentally for some alkylammonium nitrates, and an excellent linear correlation with the vaporization enthalpies of their respective parent amines is observed.
A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters that affect the quality of pulsed lidar data. The traditional time interval measurement technique has the disadvantages of low measurement accuracy, complicated circuit structure and large error. A high-precision time interval data cannot be obtained in these traditional methods. In order to obtain higher quality of remote sensing cloud images based on the time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging the capacitance and sampling the change of capacitor voltage at the same time. Firstly, the approximate model of the capacitance voltage curve in the time of flight of pulse is fitted based on the sampled data. Then, the whole charging time is obtained with the fitting function. In this method, only a high-speed A/D sampler and capacitor are required in a single receiving channel, and the collected data is processed directly in the main control unit. The experimental results show that the proposed method can get error less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20 %.
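The fitting step can be illustrated with a minimal sketch. Assuming the usual RC charging law V(t) = V0·(1 − exp(−t/τ)) (the abstract does not state the exact model, so this is our assumption), the samples can be linearized via ln(1 − V/V0) = −t/τ and the time constant recovered by a through-origin least-squares fit; the charging time to any threshold then follows from the fitted function:

```python
import math

def estimate_tau(samples, v0):
    # linearize: ln(1 - v/v0) = -t/tau, then fit a through-origin line;
    # slope = -1/tau, so tau = -den/num
    pts = [(t, math.log(1.0 - v / v0)) for t, v in samples if v < v0]
    num = sum(t * y for t, y in pts)
    den = sum(t * t for t, _ in pts)
    return -den / num

def time_to_threshold(tau, v0, v_th):
    # invert the fitted charging curve to get the time at which V reaches v_th
    return -tau * math.log(1.0 - v_th / v0)
```

In the actual instrument the same idea is applied to high-speed A/D samples taken during the pulse's time of flight.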
NASA Astrophysics Data System (ADS)
Gao, Wei; Li, Xiang-ru
2017-07-01
Multi-task learning analyzes multiple tasks jointly in order to exploit the correlations among them and thereby improve the accuracy of the results. Methods of this kind have been widely applied in machine learning, pattern recognition, computer vision, and related fields. This paper investigates the application of multi-task learning to estimating the stellar atmospheric parameters, including the surface temperature (Teff), surface gravitational acceleration (lg g), and chemical abundance ([Fe/H]). Firstly, the spectral features of the three stellar atmospheric parameters are extracted using the multi-task sparse group Lasso algorithm; then a support vector machine is used to estimate the atmospheric physical parameters. The proposed scheme is evaluated on both the Sloan stellar spectra and the theoretical spectra computed from Kurucz's New Opacity Distribution Function (NEWODF) model. The mean absolute errors (MAEs) on the Sloan spectra are: 0.0064 for lg (Teff /K), 0.1622 for lg (g/(cm · s-2)), and 0.1221 dex for [Fe/H]; the MAEs on the synthetic spectra are 0.0006 for lg (Teff /K), 0.0098 for lg (g/(cm · s-2)), and 0.0082 dex for [Fe/H]. Experimental results show that the proposed scheme achieves high accuracy in the estimation of stellar atmospheric parameters.
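The shared-feature idea behind the multi-task selection step can be sketched in a toy form. This is not the paper's sparse group Lasso; it only illustrates the underlying intuition that a spectral feature is valuable when it correlates with *all* of the parameter-estimation tasks jointly, scored here by the l2-norm of its per-task correlations:

```python
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def shared_feature_scores(X, tasks):
    # score each feature column by the l2-norm of its correlations across
    # all tasks, mimicking the "shared sparsity" of multi-task group selection
    d = len(X[0])
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        scores.append(sum(pearson(col, y) ** 2 for y in tasks) ** 0.5)
    return scores
```

In the paper proper, the selected features then feed a support vector machine fitted per parameter.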
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function, and the 'Tang-Toennies' and Scoles-Aziz 'HFD', exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling, or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. 
It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
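The "sequential rounding and refitting" idea generalizes beyond dPotFit and can be sketched on a toy two-parameter linear model (our illustration, not dPotFit's Fortran implementation): round one parameter to few digits, then refit the remaining parameter with the rounded value held fixed, so the refit absorbs the rounding error instead of letting it degrade the predictions.

```python
def lstsq_line(xs, ys):
    # full-precision least-squares fit of y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def round_and_refit(xs, ys, digits):
    # round the slope to a few digits, then refit the intercept with the
    # slope held fixed (the refit intercept is just the mean residual)
    a, b = lstsq_line(xs, ys)
    b_r = round(b, digits)
    a_r = sum(y - b_r * x for x, y in zip(xs, ys)) / len(xs)
    return a_r, b_r
```

Because the refit step is optimal given the rounded slope, the resulting residual can never exceed that of naively rounding both parameters, which is the property the procedure exploits parameter by parameter.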
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacaze, Guilhem; Oefelein, Joseph
Large-eddy-simulation (LES) is quickly becoming a method of choice for studying complex thermo-physics in a wide range of propulsion and power systems. It provides a means to study coupled turbulent combustion and flow processes in parameter spaces that are unattainable using direct-numerical-simulation (DNS), with a degree of fidelity that can be far more accurate than conventional engineering methods such as the Reynolds-averaged Navier-Stokes (RANS) approximation. However, development of predictive LES is complicated by the complex interdependence of different types of errors coming from numerical methods, algorithms, models and boundary conditions. On the other hand, control of accuracy has become a critical aspect in the development of predictive LES for design. The objective of this project is to create a framework of metrics aimed at quantifying the quality and accuracy of state-of-the-art LES in a manner that addresses the myriad of competing interdependencies. In a typical simulation cycle, only 20% of the computational time is actually usable; the rest is spent in case preparation, assessment, and validation, because of the lack of guidelines. This work increases confidence in the accuracy of a given solution while minimizing the time spent obtaining the solution. The approach facilitates control of the tradeoffs between cost, accuracy, and uncertainties as a function of fidelity and methods employed. The analysis is coupled with advanced uncertainty quantification techniques employed to estimate confidence in model predictions and calibrate model parameters. This work has provided positive consequences on the accuracy of the results delivered by LES and will soon have a broad impact on research supported both by the DOE and elsewhere.
NASA Astrophysics Data System (ADS)
Kim, M. S.; Onda, Y.; Kim, J. K.
2015-01-01
The SHALSTAB model was applied to shallow landslides induced by rainfall in order to evaluate soil properties related to the effect of soil depth for a granite area in the Jinbu region, Republic of Korea. Soil depth measured by a knocking pole test, two soil parameters from a direct shear test (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I), and soil data representing the effect of the measured soil depth and of the average soil depth, derived from the Case I data, were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine the accuracy of prediction. The ROC results for the first simulation showed low values, under 0.75, which may be due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. ROC analysis of Case II and Case III showed higher accuracy values than the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when soil parameters represent the effect of soil thickness.
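The ROC accuracy measure used to compare the three cases can be sketched with the standard rank-based AUC (a generic statistic, not the specific ROC software used in the study): the probability that a randomly chosen truly unstable cell receives a higher instability score than a randomly chosen stable one.

```python
def roc_auc(scores, labels):
    # rank-sum AUC: fraction of (positive, negative) pairs in which the
    # positive outranks the negative; ties count half
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance; the study's threshold of 0.75 marks the boundary below which the Case I predictions were judged inadequate.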
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
Moussaoui, Ahmed; Bouziane, Touria
2016-01-01
The LRPIM method is a meshless method that allows simple implementation of the essential boundary conditions and is less costly than moving least squares (MLS) methods. It overcomes the singularity associated with a polynomial basis by using radial basis functions. In this paper, we present a study of a 2D problem of an elastic homogeneous rectangular plate using the LRPIM method. Our numerical investigations concern the influence of different shape parameters on the domain of convergence and on accuracy, using the thin plate spline (TPS) radial basis function. We also present a comparison between numerical results for different materials and the convergence domain, specifying maximum and minimum values as a function of the number of distributed nodes. The analytical solution of the deflection confirms the numerical results. The essential points of the method are: •The LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate. •The convergence of the LRPIM method depends on a number of parameters derived from the local weak form and the sub-domains. •The effect of the number of distributed nodes is studied by varying the nature of the material and the radial basis function (TPS).
High-Resolution Rotational Spectrum, Dunham Coefficients, and Potential Energy Function of NaCl.
Cabezas, C; Cernicharo, J; Quintana-Lacaci, G; Peña, I; Agundez, M; Prieto, L Velilla; Castro-Carrizo, A; Zuñiga, J; Bastida, A; Alonso, J L; Requena, A
2016-07-13
We report laboratory spectroscopy for the first time of the J = 1-0 and J = 2-1 lines of Na35Cl and Na37Cl in several vibrational states. The hyperfine structure has been resolved in both transitions for all vibrational levels, which permits us to predict with high accuracy the hyperfine splitting of the rotational transitions of the two isotopologues at higher frequencies. The new data have been merged with all previous works at microwave, millimeter, and infrared wavelengths and fitted to a series of mass-independent Dunham parameters and to a potential energy function. The obtained parameters have been used to compute a new dipole moment function, from which the dipole moments for infrared transitions up to Δv = 8 have been derived. Frequency and intensity predictions are provided for all rovibrational transitions up to J = 150 and v = 8, from which the ALMA data of evolved stars can be modeled and interpreted.
Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties
NASA Astrophysics Data System (ADS)
Lazzaro, D.; Loli Piccolomini, E.; Zama, F.
2016-10-01
This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We prove that the fast iterative algorithm converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the nonconvex l0 approximation and of the penalization parameters, by means of a continuation technique, allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
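The reweighting strategy for handling a nonconvex (l0-like) penalty can be illustrated in one dimension. This sketch is ours, not the FNCR algorithm itself: the weights 1/(|c|+eps) turn a weighted l1 shrinkage into an approximation of l0, so large gradient coefficients are barely penalized while small ones are driven exactly to zero, and shrinking eps over iterations plays the role of the paper's continuation technique.

```python
import math

def soft_threshold(x, t):
    # proximal operator of the (weighted) l1 penalty
    return math.copysign(max(abs(x) - t, 0.0), x)

def reweighted_l1_step(coeffs, lam, eps):
    # one reweighting pass: weights from the current iterate, then
    # weighted shrinkage -- large entries survive, small ones vanish
    weights = [1.0 / (abs(c) + eps) for c in coeffs]
    return [soft_threshold(c, lam * w) for c, w in zip(coeffs, weights)]
```

In the full algorithm this step alternates with a data-fidelity projection onto the sub-sampled Fourier measurements.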
Information Filtering via a Scaling-Based Function
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, we introduce for the first time a scaling-based algorithm (SCL), independent of recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter and the object average degree. The optimal value of the tunable parameter can be extracted from the scaling function, which is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other aspects: resolving the accuracy-diversity dilemma, achieving high novelty, and addressing the key challenge of the cold-start problem. PMID:23696829
Adeyekun, A A; Orji, M O
2014-04-01
To compare the predictive accuracy of foetal trans-cerebellar diameter (TCD) with those of other biometric parameters in the estimation of gestational age (GA). A cross-sectional study. The University of Benin Teaching Hospital, Nigeria. Four hundred and fifty healthy singleton pregnant women, between 14-42 weeks gestation. Trans-cerebellar diameter (TCD), biparietal diameter (BPD), femur length (FL), abdominal circumference (AC) values across the gestational age range studied. Correlation and predictive values of TCD compared to those of other biometric parameters. The range of values for TCD was 11.9-59.7 mm (mean = 34.2 ± 14.1 mm). TCD correlated more significantly with menstrual age than the other biometric parameters (r = 0.984, p = 0.000). TCD had a higher predictive accuracy (96.9%, ±12 days) than BPD (93.8%, ±14.1 days) and AC (92.7%, ±15.3 days). TCD has a stronger predictive accuracy for gestational age compared to other routinely used foetal biometric parameters among Nigerian Africans.
NASA Astrophysics Data System (ADS)
Kroonblawd, Matthew; Goldman, Nir
2017-06-01
First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for reactions that are fast relative to DFT simulation times (<10 ps), but the effects on slow reactions and the free energy surface are not well-known. We present a force matching approach to improve the chemical accuracy of DFTB. Accelerated sampling techniques are combined with path collective variables to generate the reference DFT data set and validate fitted DFTB potentials. Accuracy of force-matched DFTB free energy surfaces is assessed for slow peptide-forming reactions by direct comparison to DFT for particular paths. Extensions to model prebiotic chemistry under shock conditions are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Kroonblawd, Matthew; Goldman, Nir
First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for chemistry that is fast relative to DFT simulation times (<10 ps), but the effects on slow chemistry and the free energy surface are not well-known. We present a force matching approach to increase the accuracy of DFTB predictions for free energy surfaces. Accelerated sampling techniques are combined with path collective variables to generate the reference DFT data set and validate fitted DFTB potentials without a priori knowledge of transition states. Accuracy of force-matched DFTB free energy surfaces is assessed for slow peptide-forming reactions by direct comparison to DFT results for particular paths. Extensions to model prebiotic chemistry under shock conditions are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
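The output-error idea mentioned above can be sketched on the simplest possible dynamic system. This toy (our construction, not the monograph's algorithms) simulates a scalar linear system for a candidate parameter, accumulates the squared output error against measurements, and picks the parameter minimizing that cost; real output-error estimators replace the grid search with Gauss-Newton iterations on the maximum-likelihood cost.

```python
def simulate(a, x0, inputs):
    # scalar linear system x[k+1] = a*x[k] + u[k]; outputs are the states
    xs, x = [], x0
    for u in inputs:
        x = a * x + u
        xs.append(x)
    return xs

def output_error(a, x0, inputs, measured):
    # sum of squared differences between measured and simulated outputs
    return sum((y - x) ** 2 for y, x in zip(measured, simulate(a, x0, inputs)))

def estimate(x0, inputs, measured, grid):
    # minimize the output-error cost over a grid of candidate parameters
    return min(grid, key=lambda a: output_error(a, x0, inputs, measured))
```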
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
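The goal-seeking recursion can be sketched with the classic Robbins-Monro form of stochastic approximation (a standard scheme consistent with, but not copied from, the paper): adjust the controllable input in proportion to the gap between the measured performance and its target, with a 1/n decaying gain.

```python
def stochastic_approximation(measure, target, v0, steps, gain=1.0):
    # Robbins-Monro recursion: v <- v - (gain/n) * (measured(v) - target);
    # `measure` stands for one (possibly noisy) simulation run at input v
    v = v0
    for n in range(1, steps + 1):
        v -= (gain / n) * (measure(v) - target)
    return v
```

With noisy simulation outputs the 1/n gain is what guarantees convergence in the Robbins-Monro sense; the deterministic case below just makes the behavior easy to check.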
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2005-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20%, in cases with up to 80% effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small. HSB failed in February 2005, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC in April 2005 and is being used to analyze near-real time AIRS/AMSU data. Historical AIRS/AMSU data, going backward from March 2005 through September 2002, is also being analyzed by the DAAC using the Version 4 algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.
New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.
Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...
2017-10-17
New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.
Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz
NASA Astrophysics Data System (ADS)
Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao
2018-05-01
In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives a better fitting accuracy than with the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value achieves a significant improvement after the optimization.
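The extraction loop can be sketched with a generic real-coded genetic algorithm (our generic sketch; the paper's GA operators, nine circuit parameters, and S-parameter cost function are not specified in the abstract, so a two-parameter curve fit stands in for them): each candidate is a parameter vector, fitness is the model-vs-measurement squared error, and elitist selection with blend crossover and bounded mutation drives the error down.

```python
import random

def ga_fit(model, xs, ys, bounds, pop_size=40, gens=150, seed=1):
    # minimize sum-of-squares fitting error with a simple real-coded GA:
    # elitist selection, blend crossover, bounded gaussian mutation
    rng = random.Random(seed)
    def err(p):
        return sum((model(p, x) - y) ** 2 for x, y in zip(xs, ys))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=err)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * u + (1 - w) * v for u, v in zip(a, b)]
            if rng.random() < 0.3:            # mutate one gene, clipped to bounds
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(max(child[i] + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=err)
```

For the inductor model, `model` would evaluate the equivalent circuit's response at each frequency point and `bounds` would hold physically plausible ranges for the nine parameters.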
Research of three level match method about semantic web service based on ontology
NASA Astrophysics Data System (ADS)
Xiao, Jie; Cai, Fang
2011-10-01
An important step in Web service application is the discovery of useful services. Keywords are used for service discovery in traditional technologies like UDDI and WSDL, with the disadvantages of requiring user intervention, lacking semantic description, and giving low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm can match Web services by utilizing the service profile and QoS parameters together with the input and output of the service. Simulation results show that it greatly enhances the speed of service matching while high accuracy is also guaranteed.
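The semantic core of such matching — comparing advertised and requested concepts on an is-a ontology — can be sketched with the classic capability-matching degrees (a standard scheme used here for illustration; the paper's actual three levels additionally weigh the service profile and QoS parameters):

```python
def ancestors(concept, ontology):
    # ontology maps each concept to its parent; walk up the is-a chain
    seen = []
    while concept in ontology:
        concept = ontology[concept]
        seen.append(concept)
    return seen

def match_degree(advertised, requested, ontology):
    # classic degrees of semantic match on an is-a hierarchy
    if advertised == requested:
        return "exact"
    if advertised in ancestors(requested, ontology):
        return "plugin"      # advertised concept generalizes the request
    if requested in ancestors(advertised, ontology):
        return "subsumes"    # request generalizes the advertised concept
    return "fail"
```

A ranked candidate list then orders services by degree (exact > plugin > subsumes), with QoS values breaking ties within a degree.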
The calculations of small molecular conformation energy differences by density functional method
NASA Astrophysics Data System (ADS)
Topol, I. A.; Burt, S. K.
1993-03-01
The differences in the conformational energies for the gauche (G) and trans (T) conformers of 1,2-difluoroethane and for the myo- and scyllo-conformers of inositol have been calculated by the local density functional method (LDF approximation) with geometry optimization, using different sets of calculation parameters. It is shown that, in contrast to Hartree-Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and the energy difference between the two conformers of inositol. The results of a normal vibrational analysis for 1,2-difluoroethane showed that harmonic frequencies calculated in the LDF approximation agree with experimental data with the accuracy typical of scaled large-basis-set Hartree-Fock calculations.
Desired Accuracy Estimation of Noise Function from ECG Signal by Fuzzy Approach
Vahabi, Zahra; Kermani, Saeed
2012-01-01
Unknown noise and artifacts present in medical signals can be estimated with a non-linear fuzzy filter and then removed. An adaptive neuro-fuzzy inference system, which has a non-linear structure, is presented for predicting the noise function from previous samples. This paper describes a neuro-fuzzy method to estimate the unknown noise of an electrocardiogram signal: an adaptive neural network is combined with a fuzzy system to construct a fuzzy predictor. For this system, settings such as the number of membership functions for each input and output, the number of training epochs, the type of membership functions for each input and output, and the learning algorithm are determined from the training data. Finally, simulated experimental results are presented for validation. PMID:23717810
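The predict-from-previous-samples idea can be illustrated with a much simpler adaptive predictor. This LMS filter is our stand-in for the neuro-fuzzy predictor (ANFIS itself is nonlinear and fuzzy-rule based); it shows only the shared principle that an adaptive model learns, online, to predict the next sample from a window of past samples, so the unpredictable residual can be treated as noise.

```python
def lms_predict(signal, order=4, mu=0.05):
    # adaptive linear predictor (LMS): predict each sample from the previous
    # `order` samples and adapt the weights from the prediction error
    w = [0.0] * order
    preds = []
    for i in range(order, len(signal)):
        window = signal[i - order:i]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = signal[i] - y
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        preds.append(y)
    return preds, w
```

On a predictable (e.g. quasi-periodic) signal the prediction error shrinks as the weights adapt, which is the property the fuzzy predictor exploits at higher fidelity.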
Gutierrez-Villalobos, Jose M.; Rodriguez-Resendiz, Juvenal; Rivas-Araiza, Edgar A.; Martínez-Hernández, Moisés A.
2015-01-01
Three-phase induction motor drives require high accuracy in high-performance processes in industrial applications. Field oriented control, one of the most widely employed control schemes for induction motors, bases its function on the estimation of the electrical parameters of the motor. Inaccurate values of these parameters make an electrical machine driver work improperly, since the parameter values change at low speeds, with temperature, and especially with load and duty changes. The focus of this paper is real-time, on-line electrical parameter estimation with a CMAC-ADALINE block added to the standard FOC scheme, to improve the performance of the IM driver and extend the lifetime of the driver and the induction motor. Two kinds of neural network structures are used: one to estimate the rotor speed and the other to estimate the rotor resistance of an induction motor. PMID:26131677
Retrospective Detection of Interleaved Slice Acquisition Parameters from fMRI Data
Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.
2015-01-01
To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244
Bae, Youngoh; Yoo, Byeong Wook; Lee, Jung Chan; Kim, Hee Chan
2017-05-01
Detection and diagnosis based on extracting features and classification using electroencephalography (EEG) signals are being studied vigorously. A network analysis of time series EEG signal data is one of many techniques that could help study brain functions. In this study, we analyze EEG to diagnose alcoholism. We propose a novel methodology to estimate the differences in the status of the brain based on EEG data of normal subjects and data from alcoholics by computing many parameters stemming from effective network using Granger causality. Among many parameters, only ten parameters were chosen as final candidates. By the combination of ten graph-based parameters, our results demonstrate predictable differences between alcoholics and normal subjects. A support vector machine classifier with best performance had 90% accuracy with sensitivity of 95.3%, and specificity of 82.4% for differentiating between the two groups.
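The effective networks in this study are built from Granger causality. As a hedged illustration (synthetic series with an assumed one-way coupling, not EEG data), the pairwise test reduces to asking whether adding lagged terms of one channel reduces the residual error of an autoregressive model of the other:

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """F-like statistic for 'y Granger-causes x': does adding lagged y
    terms reduce the residual error of an AR(lag) model of x?"""
    n = len(x)
    target = x[lag:]
    lags = lambda z: [z[lag - 1 - k: n - 1 - k] for k in range(lag)]
    X_r = np.column_stack(lags(x))              # restricted: own lags only
    X_f = np.column_stack(lags(x) + lags(y))    # full: plus lags of y

    def rss(D):
        A = np.column_stack([np.ones(len(target)), D])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return r @ r

    rss_r, rss_f = rss(X_r), rss(X_f)
    dof = len(target) - 2 * lag - 1
    return ((rss_r - rss_f) / lag) / (rss_f / dof)

# Synthetic pair: y drives x with a one-step delay, not vice versa.
rng = np.random.default_rng(11)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + 0.8 * y[t - 1] + 0.5 * rng.standard_normal()

stat_xy = granger_stat(x, y)   # influence of y on x: large
stat_yx = granger_stat(y, x)   # influence of x on y: near 1
```

In a network analysis like the paper's, a statistic of this kind would be computed for every ordered channel pair and thresholded to form the directed adjacency matrix from which the graph-based parameters are derived.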
Small Mercury Relativity Orbiter
NASA Technical Reports Server (NTRS)
Bender, Peter L.; Vincent, Mark A.
1989-01-01
The accuracy of solar system tests of gravitational theory could be very much improved by range and Doppler measurements to a Small Mercury Relativity Orbiter. A nearly circular orbit at roughly 2400 km altitude is assumed in order to minimize problems with orbit determination and thermal radiation from the surface. The spacecraft is spin-stabilized and has a 30 cm diameter de-spun antenna. With K-band and X-band ranging systems using a 50 MHz offset sidetone at K-band, a range accuracy of 3 cm appears to be realistically achievable. The estimated spacecraft mass is 50 kg. A consider-covariance analysis was performed to determine how well the Earth-Mercury distance as a function of time could be determined with such a Relativity Orbiter. The minimum data set is assumed to be 40 independent 8-hour arcs of tracking data at selected times during a two year period. The gravity field of Mercury up through degree and order 10 is solved for, along with the initial conditions for each arc and the Earth-Mercury distance at the center of each arc. The considered parameters include the gravity field parameters of degree 11 and 12 plus the tracking station coordinates, the tropospheric delay, and two parameters in a crude radiation pressure model. The conclusion is that the Earth-Mercury distance can be determined to 6 cm accuracy or better. From a modified worst-case analysis, this would lead to roughly 2 orders of magnitude improvement in the knowledge of the precession of perihelion, the relativistic time delay, and the possible change in the gravitational constant with time.
A new CFD based non-invasive method for functional diagnosis of coronary stenosis.
Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun
2018-03-22
Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFR CT) for functional measurement of stenosis. However, the accuracy of FFR CT is limited due to the approximate modeling approach of maximal hyperemia conditions. To overcome this problem, a new CFD-based non-invasive method is proposed. Instead of modeling the maximal hyperemia condition, a series of boundary conditions are specified and the simulated results are combined to provide a pressure-flow curve for a stenosis. Then, functional diagnosis of the stenosis is assessed based on parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated against invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistance of a stenosis is provided, which cannot be directly obtained from anatomical information. Parameters derived from the simulated pressure-flow curve show linear and significant correlations with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistance using the pressure-flow-curve-derived parameters without modeling of the maximal hyperemia condition, which is a new and promising approach for non-invasive functional assessment of coronary stenosis.
An Empirical Mass Function Distribution
NASA Astrophysics Data System (ADS)
Murray, S. G.; Robotham, A. S. G.; Power, C.
2018-03-01
The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L⋆ galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10^10–10^13 h^-1 M⊙. In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as L⋆. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.
Wang, Hubiao; Wu, Lin; Chai, Hua; Xiao, Yaofei; Hsu, Houtse; Wang, Yong
2017-08-10
The variation of a marine gravity anomaly reference map is one of the important factors that affect the location accuracy of INS/Gravity integrated navigation systems in underwater navigation. In this study, based on marine gravity anomaly reference maps, new characteristic parameters of the gravity anomaly were constructed. Those characteristic values were calculated for 13 zones (105°-145° E, 0°-40° N) in the Western Pacific area, and simulation experiments of gravity matching-aided navigation were run. The influence of gravity variations on the accuracy of gravity matching-aided navigation was analyzed, and location accuracy of gravity matching in different zones was determined. Studies indicate that the new parameters may better characterize the marine gravity anomaly. Given the precision of current gravimeters and the resolution and accuracy of reference maps, the location accuracy of gravity matching in China's Western Pacific area is ~1.0-4.0 nautical miles (n miles). In particular, accuracy in regions around the South China Sea and Sulu Sea was the highest, better than 1.5 n miles. The gravity characteristic parameters identified herein and characteristic values calculated in various zones provide a reference for the selection of navigation area and planning of sailing routes under conditions requiring certain navigational accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Diwanji, T; Zhang, B
2015-06-15
Purpose: To determine the ability of pharmacokinetic parameters derived from dynamic contrast-enhanced MRI (DCE-MRI) acquired before and during concurrent chemotherapy and radiation therapy to predict clinical response in patients with head and neck cancer. Methods: Eleven patients underwent a DCE-MRI scan at three time points: 1–2 weeks before treatment, 4–5 weeks after treatment initiation, and 3–4 months after treatment completion. Post-processing of MRI data included correction to reduce motion artifacts. The arterial input function was obtained by measuring the dynamic tracer concentration in the jugular veins. The volume transfer constant (Ktrans), extracellular extravascular volume fraction (ve), rate constant (Kep; Kep = Ktrans/ve), and plasma volume fraction (vp) were computed for primary tumors and cervical nodal masses. Patients were categorized into two groups based on response to therapy at 3–4 months: responders (no evidence of disease) and partial responders (regression of disease). Responses of the primary tumor and nodes were evaluated separately. A linear classifier and receiver operating characteristic curve analyses were used to determine the best model for discrimination of responders from partial responders. Results: When the above pharmacokinetic parameters of the primary tumor measured before and during treatment were incorporated into the linear classifier, a discriminative accuracy of 88.9%, with sensitivity = 100% and specificity = 66.7%, was observed between responders (n=6) and partial responders (n=3) for the primary tumor, with the corresponding accuracy = 44.4%, sensitivity = 66.7%, and specificity = 0% for nodal masses. When only pre-treatment parameters were used, the accuracy decreased to 66.7%, with sensitivity = 66.7% and specificity = 66.7%, for the primary tumor, and decreased to 33.3%, with sensitivity of 50% and specificity of 0%, for nodal masses. 
Conclusion: Higher accuracy, sensitivity, and specificity were obtained using DCE-MRI-derived pharmacokinetic parameters acquired before and during treatment as compared with those derived from the pre-treatment time-point, exclusively.
NASA Astrophysics Data System (ADS)
Kiamehr, Ramin
2016-04-01
A one arc-second high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used in different disciplines and applications by geoscientists. A DEM is essential data in the geoid computation procedure, e.g., to determine the topographic, downward continuation (DWC), and atmospheric corrections. It can also be used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors. The most important part of the error may come from bias in the height datum. On the other hand, the accuracy of a DEM is usually published in a global sense, and it is important to estimate its accuracy in the area of interest before using it. One of the best ways to obtain a reasonable indication of the accuracy of a DEM is to compare its heights against precise national GPS/levelling data, by determining the Root-Mean-Square (RMS) of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least-squares adjustment. In the current study, several models, ranging from a simple linear regression to a seven-parameter similarity transformation model, are used in the fitting procedure. The seven-parameter model gives the best fit, with minimum standard deviation, for all selected DEMs in the study area. Based on 35 precise GPS/levelling points, we obtain an RMS of the seven-parameter fit for the SRTM DEM of 5.5 m. A corrective surface model is generated based on the transformation parameters and applied to the original SRTM model. The fit of the combined model is evaluated again with independent GPS/levelling data. The result shows a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
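The abstract does not spell out the seven-parameter similarity transformation, so the sketch below illustrates the same least-squares corrective-surface idea with a simpler three-parameter model (a datum bias plus north and east tilts) fitted to synthetic GPS/levelling control points; all coordinates, biases, and noise levels are illustrative assumptions.

```python
import numpy as np

# Hypothetical control points: GPS/levelling heights vs DEM heights with a
# datum bias and gentle tilts baked in (all values illustrative).
rng = np.random.default_rng(1)
n = 35
lat = rng.uniform(35.0, 37.0, n)
lon = rng.uniform(48.0, 50.0, n)
true_bias, tilt_n, tilt_e = 4.0, 2.0, -1.5       # m, m/deg, m/deg
h_gps = rng.uniform(100.0, 2000.0, n)
h_dem = (h_gps - true_bias
         - tilt_n * (lat - lat.mean())
         - tilt_e * (lon - lon.mean())
         + rng.normal(0.0, 0.5, n))              # 0.5 m measurement noise

# Least-squares corrective surface: dh = a0 + a1*dlat + a2*dlon.
dh = h_gps - h_dem
A = np.column_stack([np.ones(n), lat - lat.mean(), lon - lon.mean()])
coef, *_ = np.linalg.lstsq(A, dh, rcond=None)

h_corrected = h_dem + A @ coef
rms_before = float(np.sqrt(np.mean(dh ** 2)))
rms_after = float(np.sqrt(np.mean((h_gps - h_corrected) ** 2)))
```

Adding the fitted surface back to the DEM plays the role of the corrective surface model described above: the RMS of the fit drops from the raw datum-biased value to roughly the residual noise level.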
NASA Astrophysics Data System (ADS)
Sridhar, Srivatsan; Maurogordato, Sophie; Benoist, Christophe; Cappi, Alberto; Marulli, Federico
2017-04-01
Context. The next generation of galaxy surveys will provide cluster catalogues probing an unprecedented range of scales, redshifts, and masses with large statistics. Their analysis should therefore enable us to probe the spatial distribution of clusters with high accuracy and derive tighter constraints on the cosmological parameters and the dark energy equation of state. However, for the majority of these surveys, redshifts of individual galaxies will be mostly estimated by multiband photometry which implies non-negligible errors in redshift resulting in potential difficulties in recovering the real-space clustering. Aims: We investigate to which accuracy it is possible to recover the real-space two-point correlation function of galaxy clusters from cluster catalogues based on photometric redshifts, and test our ability to detect and measure the redshift and mass evolution of the correlation length r0 and of the bias parameter b(M,z) as a function of the uncertainty on the cluster redshift estimate. Methods: We calculate the correlation function for cluster sub-samples covering various mass and redshift bins selected from a 500 deg2 light-cone limited to H < 24. In order to simulate the distribution of clusters in photometric redshift space, we assign to each cluster a redshift randomly extracted from a Gaussian distribution having a mean equal to the cluster cosmological redshift and a dispersion equal to σz. The dispersion is varied in the range σ(z=0) = σz/(1+zc) = 0.005, 0.010, 0.030, and 0.050, in order to cover the typical values expected in forthcoming surveys. The correlation function in real-space is then computed through estimation and deprojection of wp(rp). Four mass ranges (from Mhalo > 2 × 10^13 h^-1 M⊙ to Mhalo > 2 × 10^14 h^-1 M⊙) and six redshift slices covering the redshift range [0, 2] are investigated, first using cosmological redshifts and then for the four photometric redshift configurations. 
Results: From the analysis of the light-cone in cosmological redshifts we find a clear increase of the correlation amplitude as a function of redshift and mass. The evolution of the derived bias parameter b(M,z) is in fair agreement with theoretical expectations. We calculate the r0-d relation up to our highest mass, highest redshift sample tested (z = 2, Mhalo > 2 × 10^14 h^-1 M⊙). From our pilot sample limited to Mhalo > 5 × 10^13 h^-1 M⊙ (0.4 < z < 0.7), we find that the real-space correlation function can be recovered by deprojection of wp(rp) within an accuracy of 5% for σz = 0.001 × (1 + zc) and within 10% for σz = 0.03 × (1 + zc). For higher dispersions (beyond σz > 0.05 × (1 + zc)), the recovery becomes noisy and difficult. The evolution of the correlation in redshift and mass is clearly detected for all σz tested, but requires a large binning in redshift to be detected significantly between individual redshift slices when increasing σz. The best-fit parameters (r0 and γ), as well as the bias obtained from the deprojection method for all σz, are within the 1σ uncertainty of the zc sample.
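The photometric-redshift simulation step described in the Methods — drawing each cluster's observed redshift from a Gaussian centred on its cosmological redshift with dispersion σz = σ(z=0) × (1 + zc) — is simple to reproduce; the redshift range and sample size below are illustrative, not the light-cone values.

```python
import numpy as np

rng = np.random.default_rng(42)
z_cosmo = rng.uniform(0.4, 0.7, 10_000)   # cluster cosmological redshifts

def add_photoz_scatter(z, sigma0, rng):
    """Gaussian photometric-redshift errors with dispersion sigma0*(1+z)."""
    return z + rng.normal(0.0, sigma0 * (1.0 + z))

z_photo = add_photoz_scatter(z_cosmo, 0.03, rng)
# Normalised residuals scatter with the input dispersion sigma0.
resid = (z_photo - z_cosmo) / (1.0 + z_cosmo)
```

Scatter of this size smears cluster positions along the line of sight, which is why the real-space correlation function must then be recovered by deprojection of wp(rp) rather than measured directly.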
High accuracy operon prediction method based on STRING database scores.
Taboada, Blanca; Verde, Cristina; Merino, Enrique
2010-07-01
We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the prediction accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
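The classifier here takes just two features per gene pair: intergenic distance and a STRING functional-association score. As a hedged sketch of that setup, the example below trains a plain logistic regression (a stand-in for the paper's neural network) on synthetic gene pairs whose feature distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
# Synthetic gene pairs: pairs within one operon tend to have short
# intergenic distances and high functional-association (STRING-like) scores.
same_operon = rng.integers(0, 2, n)
dist = np.where(same_operon == 1, rng.normal(20, 30, n), rng.normal(150, 80, n))
score = np.where(same_operon == 1, rng.normal(0.8, 0.15, n), rng.normal(0.3, 0.2, n))
X = np.column_stack([dist, score])
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise both features

# Logistic-regression stand-in for the paper's neural network, trained
# by plain gradient descent on the cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - same_operon
    w -= 0.1 * X.T @ g / n
    b -= 0.1 * g.mean()

acc = float(np.mean((p > 0.5).astype(int) == same_operon))
```

With only these two features the classes separate well even on synthetic data; the paper's neural network plays the same role using experimentally characterized operon pairs.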
Daul, Claude
2014-09-01
Despite the important growth of ab initio and computational techniques, ligand field theory in molecular science, or crystal field theory in condensed matter, offers the most intuitive way to calculate multiplet energy levels arising from systems with open d and/or f shells. Over the past decade we have developed a ligand field treatment of inorganic molecular modelling that takes advantage of the dominant localization of the frontier orbitals within the metal sphere. This feature, which is observed in any inorganic coordination compound, especially if treated by Density Functional Theory calculations, allows the determination of the electronic structure and properties with surprisingly good accuracy. In ligand field theory, the theoretical concepts consider only a single atom center and treat its interaction with the chemical environment essentially as a perturbation. The success of simple ligand field theory is therefore no longer questionable, while the more accurate molecular orbital theory in general over-estimates the metal-ligand covalence and thus yields wave functions that are too delocalized. Although LF theory has always been popular as a semi-empirical method when dealing with molecules of high symmetry, e.g. cubic symmetry, where the number of parameters needed is reasonably small (3 or 5), this is no longer the case for molecules without symmetry that involve both an open d- and f-shell (∼90 parameters). However, the combination of LF theory and Density Functional (DF) theory that we introduced twenty years ago can easily deal with complex molecules of any symmetry with two or more open shells. These predictions from first principles achieve quite high accuracy (<5%) in terms of state energies. Hence, this approach is well suited to predicting the magnetic and photo-physical properties of arbitrary molecules and materials prior to their synthesis, which is the ultimate goal of every computational chemist. 
We will illustrate the performance of LFDFT for the design of phosphors that produce light similar to our sun and predict the magnetic anisotropy energy of single-ion magnets.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
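A minimal sketch of the first-order (cut-)HDMR idea — approximating a multivariate response by a constant plus one univariate component function per variable, so the number of model evaluations grows polynomially rather than exponentially — is given below; the fuzzy α-cut machinery and finite element coupling of the paper are omitted, and the anchor point and test function are illustrative.

```python
import numpy as np

def cut_hdmr_first_order(f, c):
    """First-order cut-HDMR surrogate anchored at the cut point c:
    f(x) ~ f(c) + sum_i [ f(c with x_i substituted) - f(c) ].
    Requires only one-dimensional sweeps instead of a full N-D grid."""
    c = np.asarray(c, dtype=float)
    f0 = f(c)

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        total = f0
        for i in range(c.size):
            ci = c.copy()
            ci[i] = x[i]                # vary one coordinate at a time
            total += f(ci) - f0
        return total

    return surrogate

# Interaction-free test function: the first-order expansion is exact here.
f = lambda v: np.sin(v[0]) + v[1] ** 2 + 3.0 * v[2]
s = cut_hdmr_first_order(f, np.zeros(3))
x = np.array([0.3, -1.2, 0.7])
```

When variable interactions are weak, as the HDMR assumption requires, the first-order surrogate is already accurate; for the interaction-free test function above it is exact.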
Cisler, Josh M.; Bush, Keith; James, G. Andrew; Smitherman, Sonet; Kilts, Clinton D.
2015-01-01
Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. 
These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD. PMID:26241958
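The cross-validated MVPA logic described above — train a multivariate decoder on most trials, test it on held-out trials, and ask whether accuracy beats chance — can be sketched on synthetic data. The nearest-centroid decoder below is a simple stand-in for the classifiers used in the study, and the trial counts, voxel counts, and effect size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 200
labels = np.tile([0, 1], n_trials // 2)      # e.g. trauma vs neutral recall
pattern = rng.normal(0, 1, n_voxels)         # class-discriminating pattern
X = rng.normal(0, 1, (n_trials, n_voxels)) + np.outer(labels - 0.5, pattern)

def cv_accuracy(X, y, k=5):
    """k-fold cross-validated accuracy of a nearest-centroid decoder."""
    idx = np.arange(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = ((X[fold] - c0) ** 2).sum(axis=1)
        d1 = ((X[fold] - c1) ** 2).sum(axis=1)
        correct += int(np.sum((d1 < d0).astype(int) == y[fold]))
    return correct / len(y)

acc = cv_accuracy(X, labels)
```

Against the 50% chance level, the cross-validated accuracy is the quantity the paper tests for significance across participants and methodological permutations.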
Oddone, Francesco; Lucenteforte, Ersilia; Michelessi, Manuele; Rizzo, Stanislao; Donati, Simone; Parravano, Mariacristina; Virgili, Gianni
2016-05-01
Macular parameters have been proposed as an alternative to retinal nerve fiber layer (RNFL) parameters to diagnose glaucoma. Comparing the diagnostic accuracy of macular parameters, specifically the ganglion cell complex (GCC) and ganglion cell inner plexiform layer (GCIPL), with the accuracy of RNFL parameters for detecting manifest glaucoma is important to guide clinical practice and future research. Studies using spectral domain optical coherence tomography (SD OCT) and reporting macular parameters were included if they allowed the extraction of accuracy data for diagnosing manifest glaucoma, as confirmed with automated perimetry or a clinician's optic nerve head (ONH) assessment. Cross-sectional cohort studies and case-control studies were included. The QUADAS 2 tool was used to assess methodological quality. Only direct comparisons of macular versus RNFL parameters (i.e., in the same study) were conducted. Summary sensitivity and specificity of each macular or RNFL parameter were reported, and the relative diagnostic odds ratio (DOR) was calculated in hierarchical summary receiver operating characteristic (HSROC) models to compare them. Thirty-four studies investigated macular parameters using RTVue OCT (Optovue Inc., Fremont, CA) (19 studies, 3094 subjects), Cirrus OCT (Carl Zeiss Meditec Inc., Dublin, CA) (14 studies, 2164 subjects), or 3D Topcon OCT (Topcon, Inc., Tokyo, Japan) (4 studies, 522 subjects). Thirty-two of these studies allowed comparisons between macular and RNFL parameters. Studies generally reported sensitivities at fixed specificities, more commonly 0.90 or 0.95, with sensitivities of most best-performing parameters between 0.65 and 0.75. For all OCT devices, compared with RNFL parameters, macular parameters were similarly or slightly less accurate for detecting glaucoma at the highest reported specificity, which was confirmed in analyses at the lowest specificity. 
Included studies suffered from limitations, especially the case-control study design, which is known to overestimate accuracy. However, this flaw is less relevant as a source of bias in direct comparisons conducted within studies. With the use of OCT, RNFL parameters are still preferable to macular parameters for diagnosing manifest glaucoma, but the differences are small. Because of high heterogeneity, direct comparative or randomized studies of OCT devices or OCT parameters and diagnostic strategies are essential. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Khan, Mair; Hussain, Arif; Malik, M. Y.; Salahuddin, T.; Khan, Farzana
This article presents the two-dimensional flow of MHD hyperbolic tangent fluid with nanoparticles towards a stretching surface. The mathematical modelling of the current flow analysis yields a nonlinear set of partial differential equations, which are then reduced to ordinary differential equations using suitable scaling transforms. The resulting equations are solved using the shooting technique. The behaviour of the involved physical parameters (Weissenberg number We, Hartmann number M, Prandtl number Pr, Brownian motion parameter Nb, Lewis number Le and thermophoresis number Nt) on velocity, temperature and concentration is interpreted in detail. Additionally, the local skin friction, local Nusselt number and local Sherwood number are computed and analyzed. It is found that the Weissenberg number and Hartmann number decelerate the fluid motion. Brownian motion and thermophoresis both enhance the fluid temperature. The local Sherwood number is an increasing function, whereas the Nusselt number is a decreasing function, of increasing values of the Brownian motion parameter Nb, Prandtl number Pr, thermophoresis parameter Nt and Lewis number Le. Additionally, the computed results are compared with the existing literature to validate the accuracy of the solution; the present results closely resemble the reported data.
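The shooting technique mentioned here reduces a boundary value problem to repeated initial value problems: guess the unknown initial slope, integrate, and adjust until the far boundary condition is met. The sketch below applies it to a simple linear BVP for illustration, not to the paper's tangent-fluid equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Linear test BVP: y'' = -y with y(0) = 0, y(1) = 1.
# True solution y = sin(x)/sin(1), so the exact initial slope is 1/sin(1).
def boundary_mismatch(slope):
    """Integrate the IVP with a guessed initial slope and return the
    error in the far boundary condition y(1) = 1."""
    sol = solve_ivp(lambda x, s: [s[1], -s[0]], (0.0, 1.0), [0.0, slope],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# 'Shoot' until the far boundary condition is satisfied.
slope = brentq(boundary_mismatch, 0.1, 5.0)
```

For the nonlinear coupled momentum, energy, and concentration equations of the paper, the same loop applies with a vector of unknown initial slopes and a multidimensional root finder in place of the scalar bracketing search.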
A lumped parameter mathematical model for simulation of subsonic wind tunnels
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Cole, G. L.; Bruton, W. M.; Szuch, J. R.
1986-01-01
Equations for a lumped parameter mathematical model of a subsonic wind tunnel circuit are presented. The equation state variables are internal energy, density, and mass flow rate. The circuit model is structured to allow for integration and analysis of tunnel subsystem models which provide functions such as control of altitude pressure and temperature. Thus the model provides a useful tool for investigating the transient behavior of the tunnel and control requirements. The model was applied to the proposed NASA Lewis Altitude Wind Tunnel (AWT) circuit and included transfer function representations of the tunnel supply/exhaust air and refrigeration subsystems. Both steady state and frequency response data are presented for the circuit model indicating the type of results and accuracy that can be expected from the model. Transient data for closed loop control of the tunnel and its subsystems are also presented, demonstrating the model's use as a control analysis tool.
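As a sketch of the transfer-function subsystem representations the abstract mentions, the frequency response of an assumed first-order lag G(s) = 1/(τs + 1) can be computed with scipy.signal; the time constant below is illustrative, not a tunnel parameter.

```python
# Frequency response of an assumed first-order transfer function,
# illustrating the kind of subsystem model the circuit model integrates.
import numpy as np
from scipy import signal

tau = 2.0  # illustrative time constant, s
system = signal.TransferFunction([1.0], [tau, 1.0])  # G(s) = 1/(tau*s + 1)
w, mag, phase = signal.bode(system, w=np.logspace(-2, 1, 100))
# magnitude is ~0 dB at low frequency and rolls off past w = 1/tau
```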
Methods for the behavioral, educational, and social sciences: an R package.
Kelley, Ken
2007-11-01
Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
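The confidence intervals for noncentral parameters that MBESS provides can be illustrated in Python by inverting the noncentral t CDF; this sketch follows the standard construction and is not MBESS code, and the observed t and degrees of freedom are made up.

```python
# Sketch of a 95% CI for the noncentrality parameter of a t statistic,
# obtained by inverting the noncentral t CDF (the idea behind CIs for
# standardized effect sizes). Inputs are illustrative.
from scipy.stats import nct
from scipy.optimize import brentq

def nc_t_ci(t_obs, df, level=0.95):
    alpha = 1.0 - level
    # lower limit: delta such that P(T <= t_obs | delta) = 1 - alpha/2
    lo = brentq(lambda d: nct.cdf(t_obs, df, d) - (1 - alpha / 2), -50, 50)
    # upper limit: delta such that P(T <= t_obs | delta) = alpha/2
    hi = brentq(lambda d: nct.cdf(t_obs, df, d) - alpha / 2, -50, 50)
    return lo, hi

lo, hi = nc_t_ci(t_obs=2.5, df=30)
```

Because the CDF is monotone in the noncentrality parameter, each limit is a one-dimensional root find.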
Nishino, Ko; Lombardi, Stephen
2011-01-01
We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.
Accuracy Assessment of Professional Grade Unmanned Systems for High Precision Airborne Mapping
NASA Astrophysics Data System (ADS)
Mostafa, M. M. R.
2017-08-01
Recently, sophisticated multi-sensor systems have been implemented on-board modern Unmanned Aerial Systems. This allows a variety of mapping products to be produced for different mapping applications, with accuracies that match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor systems for direct georeferencing purposes. A number of parameters either individually or collectively affect the quality and accuracy of a final airborne mapping product. This paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. The final ground object positioning accuracy is assessed through eight real-world flight missions flown in Quebec, Canada. The achievable precision of map production is addressed in some detail.
NASA Astrophysics Data System (ADS)
Mahvash Mohammadi, Neda; Hezarkhani, Ardeshir
2018-07-01
Classification of mineralised zones is an important factor in the analysis of economic deposits. In this paper, the support vector machine (SVM), a supervised learning algorithm, is proposed for classifying mineralised zones in the Takht-e-Gonbad porphyry Cu-deposit (SE Iran) on the basis of subsurface data. The effects of the input features on SVM performance are evaluated by calculating accuracy rates. Ultimately, the SVM model is developed with the input features lithology, alteration, mineralisation and depth level, using the radial basis function (RBF) as the kernel function. The optimal values of the parameters λ and C, obtained by the n-fold cross-validation method, are 0.001 and 0.01, respectively. The accuracy of this model is 0.931 for classification of mineralised zones in the Takht-e-Gonbad porphyry deposit. The results of the study confirm the efficiency of the SVM method for classifying mineralised zones.
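A minimal sketch of the cross-validated tuning step described above, using scikit-learn on synthetic data (not the Takht-e-Gonbad features); scikit-learn names the RBF kernel width `gamma`, standing in for the abstract's λ, and the grid values below are illustrative.

```python
# Tune an RBF-kernel SVM's C and gamma by 5-fold cross-validation on
# synthetic data, mirroring the tuning procedure in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=5,  # n-fold cross-validation with n = 5
)
grid.fit(X, y)
best = grid.best_params_   # best (C, gamma) pair found
acc = grid.best_score_     # mean cross-validated accuracy
```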
An algorithm of improving speech emotional perception for hearing aid
NASA Astrophysics Data System (ADS)
Xi, Ji; Liang, Ruiyu; Fei, Xianju
2017-07-01
In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm uses multiple kernel technology to overcome a drawback of the SVM: slow training speed. First, to improve the adaptive performance of the Gaussian radial basis function (RBF), the parameter determining the nonlinear mapping is optimized on the basis of kernel target alignment. The obtained kernel function is then used as the basis kernel of multiple kernel learning (MKL) with a slack variable that addresses the over-fitting problem. However, the slack variable also introduces error into the result, so a soft-margin MKL is proposed to balance the margin against the error. An iterative algorithm is used to solve for the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five emotions: happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
Tran, Vi Do; Dario, Paolo; Mazzoleni, Stefano
2018-03-01
This review classifies the kinematic measures used to evaluate post-stroke motor impairment following upper limb robot-assisted rehabilitation and investigates their correlations with clinical outcome measures. An online literature search was carried out in PubMed, MEDLINE, Scopus and IEEE-Xplore databases. Kinematic parameters mentioned in the studies included were categorized into the International Classification of Functioning, Disability and Health (ICF) domains. The correlations between these parameters and the clinical scales were summarized. Forty-nine kinematic parameters were identified from 67 articles involving 1750 patients. The most frequently used parameters were: movement speed, movement accuracy, peak speed, number of speed peaks, and movement distance and duration. According to the ICF domains, 44 kinematic parameters were categorized into Body Functions and Structure, 5 into Activities and no parameters were categorized into Participation and Personal and Environmental Factors. Thirteen articles investigated the correlations between kinematic parameters and clinical outcome measures. Some kinematic measures showed a significant correlation coefficient with clinical scores, but most were weak or moderate. The proposed classification of kinematic measures into ICF domains and their correlations with clinical scales could contribute to identifying the most relevant ones for an integrated assessment of upper limb robot-assisted rehabilitation treatments following stroke. Increasing the assessment frequency by means of kinematic parameters could optimize clinical assessment procedures and enhance the effectiveness of rehabilitation treatments.
Modelling Accuracy of a Car Steering Mechanism with Rack and Pinion and McPherson Suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-08-01
Modelling accuracy of a car steering mechanism with a rack and pinion and McPherson suspension is analyzed. The geometrical parameters of the model are described using the coordinates of the centers of the spherical joints, directional unit vectors, and axis points of the revolute, cylindrical and prismatic joints. Modelling accuracy is taken as the difference between the wheel knuckle position and orientation coordinates obtained from the simulation model and the corresponding measured values. The sensitivity of the model accuracy to these parameters is illustrated by two numerical examples.
Tuning to optimize SVM approach for assisting ovarian cancer diagnosis with photoacoustic imaging.
Wang, Rui; Li, Rui; Lei, Yanyan; Zhu, Quing
2015-01-01
Support vector machine (SVM) is one of the most effective classification methods for cancer detection. The efficiency and quality of an SVM classifier depend strongly on several important features and a set of proper parameters. Here, a series of classification analyses, with one set of photoacoustic data from ovarian tissues ex vivo and a widely used breast cancer dataset, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, revealed how the accuracy of an SVM classification varies with the number of features used and the parameters selected. A pattern recognition system is proposed by means of SVM-recursive feature elimination (RFE) with the radial basis function (RBF) kernel. To improve the effectiveness and robustness of the system, an optimized tuning ensemble algorithm called SVM-RFE(C), with a correlation filter, was implemented to quantify feature and parameter information based on cross validation. The proposed algorithm is first shown to outperform SVM-RFE on WDBC. The best accuracy of 94.643% and sensitivity of 94.595% were then achieved when using SVM-RFE(C) to test 57 new PAT data from 19 patients. The experimental results show that the classifier constructed with the SVM-RFE(C) algorithm is able to learn additional information from new data and has significant potential in ovarian cancer diagnosis.
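The SVM-RFE step can be sketched with scikit-learn on the WDBC dataset named in the abstract. A linear kernel is used here because scikit-learn's RFE requires per-feature weights, so this is a simplified stand-in for the paper's RBF-kernel variant with correlation filter, not its implementation.

```python
# Recursive feature elimination wrapped around an SVM on WDBC:
# repeatedly fit, rank features by weight, and drop the weakest.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # the WDBC dataset (bundled)
selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=1)
selector.fit(X, y)
kept = int(selector.support_.sum())  # number of surviving features
```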
Viscosity Prediction for Petroleum Fluids Using Free Volume Theory and PC-SAFT
NASA Astrophysics Data System (ADS)
Khoshnamvand, Younes; Assareh, Mehdi
2018-04-01
In this study, free volume theory (FVT) in combination with perturbed-chain statistical associating fluid theory is implemented for viscosity prediction of petroleum reservoir fluids containing ill-defined components such as cuts and plus fractions. FVT has three adjustable parameters per component for calculating viscosity, and these parameters are not available for petroleum cuts (especially plus fractions). In this work, these parameters are determined for different petroleum fractions: a model as a function of molecular weight and specific gravity is developed using 22 real reservoir fluid samples with API gravities in the range of 22 to 45. The accuracy of the proposed model is then compared, with reference to experimental data, against that of De la Porte et al. The model is applied to six real samples in an evaluation step, and the results are compared with the available experimental data and the method of De la Porte et al. Finally, the methods of Lohrenz et al. and Pedersen et al., two common industrial methods for viscosity calculation, are compared with the proposed approach. The absolute average deviation was 9.7 % for the free volume theory method, 15.4 % for Lohrenz et al., and 22.16 % for Pedersen et al.
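The fitting step, regressing an FVT parameter on molecular weight and specific gravity, can be sketched with ordinary least squares. The data, the linear functional form, and the resulting coefficients below are all illustrative, not the paper's correlation.

```python
# Least-squares fit of a hypothetical FVT parameter to molecular weight (MW)
# and specific gravity (SG); all numbers are made up for illustration.
import numpy as np

mw = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
sg = np.array([0.75, 0.80, 0.84, 0.87, 0.90])
param = np.array([1.2, 1.5, 1.75, 1.95, 2.1])  # made-up parameter values

# assumed model: param ~ a + b*MW + c*SG, solved via lstsq
A = np.column_stack([np.ones_like(mw), mw, sg])
coef, *_ = np.linalg.lstsq(A, param, rcond=None)
pred = A @ coef
aad = np.mean(np.abs((pred - param) / param)) * 100  # absolute avg deviation, %
```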
A Boussinesq-scaled, pressure-Poisson water wave model
NASA Astrophysics Data System (ADS)
Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint
2015-02-01
Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ2) and weakly nonlinear O(μN) are presented, and the analytical and numerical properties of the O(μ2) and O(μ4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise from the Boussinesq scaling. The optimal O(μ2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ4) model attains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters, which can be used to optimize shoaling or nonlinear properties. The O(μ4) model shows excellent agreement with experimental data.
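The dispersion claim can be checked numerically: the Padé [2,2] approximation of the exact linear dispersion factor tanh(kh)/kh stays within roughly 1% of the exact value up to kh = 2. This is a generic check of the approximation itself (kh is the dimensionless depth parameter), not a run of the paper's model.

```python
# Compare the exact linear dispersion factor tanh(kh)/kh with its
# Pade [2,2] approximation (1 + (kh)^2/15) / (1 + 2(kh)^2/5).
import numpy as np

kh = np.linspace(0.01, 2.0, 200)
exact = np.tanh(kh) / kh
pade22 = (1 + kh**2 / 15) / (1 + 2 * kh**2 / 5)
max_rel_err = np.max(np.abs(pade22 - exact) / exact)
```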
NASA Astrophysics Data System (ADS)
Medina, H.; Romano, N.; Chirico, G. B.
2014-07-01
This study presents a dual Kalman filter (DSUKF - dual standard-unscented Kalman filter) for retrieving states and parameters controlling the soil water dynamics in a homogeneous soil column, by assimilating near-surface state observations. The DSUKF couples a standard Kalman filter for retrieving the states of a linear solver of the Richards equation, and an unscented Kalman filter for retrieving the parameters of the soil hydraulic functions, which are defined according to the van Genuchten-Mualem closed-form model. The accuracy and the computational expense of the DSUKF are compared with those of the dual ensemble Kalman filter (DEnKF) implemented with a nonlinear solver of the Richards equation. Both the DSUKF and the DEnKF are applied with two alternative state-space formulations of the Richards equation, respectively differentiated by the type of variable employed for representing the states: either the soil water content (θ) or the soil water matric pressure head (h). The comparison analyses are conducted with reference to synthetic time series of the true states, noise corrupted observations, and synthetic time series of the meteorological forcing. The performance of the retrieval algorithms is examined, accounting for the effects exerted on the output by the input parameters, the observation depth and assimilation frequency, as well as by the relationship between retrieved states and assimilated variables. The uncertainty of the states retrieved with the DSUKF is considerably reduced, for any initial wrong parameterization, with similar accuracy but less computational effort than the DEnKF, when the latter is implemented with ensembles of 25 members. For ensemble sizes of the same order as those involved in the DSUKF, the DEnKF fails to provide reliable posterior estimates of states and parameters.
The retrieval performance of the soil hydraulic parameters is strongly affected by several factors, such as the initial guess of the unknown parameters, the wet or dry range of the retrieved states, the boundary conditions, as well as the form (h-based or θ-based) of the state-space formulation. Several analyses are reported to show that the identifiability of the saturated hydraulic conductivity is hindered by the strong correlation with other parameters of the soil hydraulic functions defined according to the van Genuchten-Mualem closed-form model.
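The predict/update cycle underlying the dual filters above can be illustrated with a minimal scalar Kalman filter assimilating noisy observations of a decaying state. This is a toy model (arbitrary transition and noise parameters), not the DSUKF or a Richards equation solver.

```python
# Scalar Kalman filter: predict with the state model, then correct with
# each noisy observation, weighting by the Kalman gain.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.01, 0.1      # transition coeff, process var, observation var
truth, x, p = 1.0, 0.0, 1.0    # true state, estimate, estimate variance
errs = []
for _ in range(200):
    truth = a * truth + rng.normal(0, np.sqrt(q))   # true state evolves
    z = truth + rng.normal(0, np.sqrt(r))           # noisy observation
    x, p = a * x, a * a * p + q                     # predict step
    k = p / (p + r)                                 # Kalman gain
    x, p = x + k * (z - x), (1 - k) * p             # update step
    errs.append(abs(x - truth))
final_err = np.mean(errs[-50:])  # mean absolute error after convergence
```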
Determining Kinetic Parameters for Isothermal Crystallization of Glasses
NASA Technical Reports Server (NTRS)
Ray, C. S.; Zhang, T.; Reis, S. T.; Brow, R. K.
2006-01-01
Non-isothermal crystallization techniques are frequently used to determine the kinetic parameters for crystallization in glasses. These techniques are experimentally simple and quick compared to the isothermal techniques. However, the analytical models used for non-isothermal data analysis, originally developed for describing isothermal transformation kinetics, are fundamentally flawed. The present paper describes a technique for determining the kinetic parameters for isothermal crystallization in glasses, which eliminates most of the common problems that generally make the studies of isothermal crystallization laborious and time consuming. In this technique, the volume fraction of glass that is crystallized as a function of time during an isothermal hold was determined using differential thermal analysis (DTA). The crystallization parameters for the lithium-disilicate (Li2O.2SiO2) model glass were first determined and compared to the same parameters determined by other techniques to establish the accuracy and usefulness of the present technique. This technique was then used to describe the crystallization kinetics of a complex Ca-Sr-Zn-silicate glass developed for sealing solid oxide fuel cells.
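Isothermal crystallization kinetics of the kind measured above are commonly described by the JMAK (Avrami) equation x(t) = 1 - exp(-(kt)^n). The sketch below fits synthetic data, not the Li2O.2SiO2 measurements, and the rate constant and exponent are assumed values.

```python
# Fit the JMAK (Avrami) equation to a synthetic crystallized-fraction curve.
import numpy as np
from scipy.optimize import curve_fit

def jmak(t, k, n):
    # crystallized volume fraction at time t
    return 1.0 - np.exp(-((k * t) ** n))

t = np.linspace(1, 60, 30)                 # time, arbitrary units
x_true = jmak(t, 0.05, 3.0)                # assumed "true" kinetics
x_obs = x_true + np.random.default_rng(1).normal(0, 0.01, t.size)

(k_fit, n_fit), _ = curve_fit(
    jmak, t, x_obs, p0=[0.1, 2.0],
    bounds=([1e-4, 0.5], [1.0, 6.0]),      # keep k, n physical
)
```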
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Investigation into discretization methods of the six-parameter Iwan model
NASA Astrophysics Data System (ADS)
Li, Yikun; Hao, Zhiming; Feng, Jiaquan; Zhang, Dingguo
2017-02-01
The Iwan model is widely applied to describe nonlinear mechanisms of jointed structures. In this paper, parameter identification procedures for the six-parameter Iwan model, based on joint experiments with different preload techniques, are performed. Four discretization methods deduced from the stiffness equation of the six-parameter Iwan model are provided, which can be used to discretize the integral-form Iwan model into a sum of finitely many Jenkins elements. In finite element simulation, the influences of the discretization method and the number of Jenkins elements on computing accuracy are discussed. Simulation results indicate that higher accuracy is obtained with larger numbers of Jenkins elements, and that, compared with the other three discretization methods, the geometric series discretization based on stiffness provides the highest computing accuracy.
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modelling of machine tool geometric errors. Geometric errors, introduced during the manufacturing process and the assembly phase, are an important cause of CNC machine inaccuracy, and accounting for them is essential for building high-accuracy machines. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three squareness error parameters. The mathematical model relates the alignment and angular errors in the components supporting the machine motion, namely the linear guideways and linear drives. The purpose of this modelling approach is the identification of the geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modelling the geometric errors of a CNC machine tool illustrates the relationship between alignment error, position and angle along the linear guideways of a three-axis vertical milling machine.
NASA Astrophysics Data System (ADS)
Zhang, Zhongya; Pan, Bing; Grédiac, Michel; Song, Weidong
2018-04-01
The virtual fields method (VFM) is generally used with two-dimensional digital image correlation (2D-DIC) or grid method (GM) for identifying constitutive parameters. However, when small out-of-plane translation/rotation occurs to the test specimen, 2D-DIC and GM are prone to yield inaccurate measurements, which further lessen the accuracy of the parameter identification using VFM. In this work, an easy-to-implement but effective "special" stereo-DIC (SS-DIC) method is proposed for accuracy-enhanced VFM identification. The SS-DIC can not only deliver accurate deformation measurement without being affected by unavoidable out-of-plane movement/rotation of a test specimen, but can also ensure evenly distributed calculation data in space, which leads to simple data processing. Based on the accurate kinematics fields with evenly distributed measured points determined by SS-DIC method, constitutive parameters can be identified by VFM with enhanced accuracy. Uniaxial tensile tests of a perforated aluminum plate and pure shear tests of a prismatic aluminum specimen verified the effectiveness and accuracy of the proposed method. Experimental results show that the constitutive parameters identified by VFM using SS-DIC are more accurate and stable than those identified by VFM using 2D-DIC. It is suggested that the proposed SS-DIC can be used as a standard measuring tool for mechanical identification using VFM.
Testing General Relativity with the Radio Science Experiment of the BepiColombo mission to Mercury
NASA Astrophysics Data System (ADS)
Schettino, Giulia; Tommei, Giacomo
2016-09-01
The relativity experiment is part of the Mercury Orbiter Radio science Experiment (MORE) on-board the ESA/JAXA BepiColombo mission to Mercury. Thanks to very precise radio tracking from the Earth and on-board accelerometer measurements, it will be possible to perform an accurate test of General Relativity by constraining a number of post-Newtonian and related parameters with an unprecedented level of accuracy. The Celestial Mechanics Group of the University of Pisa developed a new dedicated software package, ORBIT14, to perform the simulations and to determine all the parameters of interest simultaneously within a global least squares fit. After highlighting some critical issues, we report on the results of a full set of simulations, carried out in the most up-to-date mission scenario. For each parameter we discuss the achievable accuracy, both in terms of a formal analysis through the covariance matrix and through an alternative, more representative, estimation of the errors. We show that, for example, an accuracy of some parts in 10^-6 for the Eddington parameter β and of 10^-5 for the Nordtvedt parameter η can be attained, while accuracies at the level of 5×10^-7 and 1×10^-7 can be achieved for the preferred frames parameters α1 and α2, respectively.
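The formal analysis through the covariance matrix mentioned above can be sketched for a generic linear least-squares fit: the 1-sigma formal accuracies are the square roots of the diagonal of sigma^2 (J^T J)^-1. The two-parameter straight-line model below is a toy, not the MORE observables.

```python
# Formal parameter accuracies from the least-squares covariance matrix,
# for an assumed linear model y = b0 + b1*t with Gaussian noise.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
J = np.column_stack([np.ones_like(t), t])   # design (Jacobian) matrix
sigma = 0.05                                # observation noise std
y = J @ np.array([1.0, 2.0]) + rng.normal(0, sigma, t.size)

beta, *_ = np.linalg.lstsq(J, y, rcond=None)
cov = sigma**2 * np.linalg.inv(J.T @ J)     # parameter covariance matrix
formal_err = np.sqrt(np.diag(cov))          # 1-sigma formal accuracies
```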
Sensitivity analysis of pulse pileup model parameter in photon counting detectors
NASA Astrophysics Data System (ADS)
Shunhavanich, Picha; Pelc, Norbert J.
2017-03-01
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the required accuracy of the parameter values becomes more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse matter more than most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
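Why the deadtime parameter matters more at high flux can be seen even in the classic nonparalyzable deadtime model m = n/(1 + nτ), which is far simpler than the pileup model studied above; the deadtime value below is assumed for illustration.

```python
# Sensitivity of the recorded count rate to a deadtime error grows with
# the true count rate, in the nonparalyzable model m = n / (1 + n*tau).
import numpy as np

tau = 100e-9                          # assumed deadtime: 100 ns
n = np.array([1e5, 1e6, 1e7])         # true count rates (counts/s)
m = n / (1 + n * tau)                 # recorded rates with correct tau
m_bad = n / (1 + n * 1.1 * tau)       # recorded rates with tau off by 10%
rel_change = np.abs(m_bad - m) / m    # relative impact of the tau error
```

At the lowest rate the 10% deadtime error barely changes the recorded rate; at the highest it shifts it by several percent, mirroring the abstract's finding that parameter accuracy matters more with increasing count rate.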
Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sales, J. S.; Silva, L. F. da; Almeida, N. G. de
2011-03-15
We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.
NASA Astrophysics Data System (ADS)
Schneider, Wilfried; Bortfeld, Thomas; Schlegel, Wolfgang
2000-02-01
We describe a new method to convert CT numbers into the mass density and elemental weights of tissues, required as input for dose calculations with Monte Carlo codes such as EGS4. As a first step, we calculate the CT numbers for 71 human tissues. To reduce the effort for the necessary fits of the CT numbers to mass density and elemental weights, we establish four sections on the CT number scale, each confined by selected tissues. Within each section, the mass density and elemental weights of the selected tissues are interpolated. For this purpose, functional relationships between the CT number and each of the tissue parameters, valid for media composed of only two components in varying proportions, are derived. The interpolation functions incur no loss of accuracy compared with conventional data fits. Assuming plausible values for the deviations between calculated and measured CT numbers, the mass density can be determined with an accuracy better than 0.04 g cm^-3. The weights of phosphorus and calcium can be determined with maximum uncertainties of 1 and 2.3 percentage points (pp), respectively. Similar values can be achieved for hydrogen (0.8 pp) and nitrogen (3 pp). For carbon and oxygen weights, errors of up to 14 pp can occur. The influence of the elemental weights on the results of Monte Carlo dose calculations is investigated and discussed.
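The piecewise interpolation idea can be sketched as a mapping from CT number to mass density between selected tissues. The anchor values below are rough illustrative numbers (air, adipose-like, water, muscle-like, bone-like), not the paper's 71-tissue fit.

```python
# Piecewise-linear CT-number-to-density lookup between selected tissues;
# anchor HU and density values are illustrative, not the paper's data.
import numpy as np

hu_anchors = np.array([-1000.0, -100.0, 0.0, 100.0, 1500.0])   # selected tissues (HU)
rho_anchors = np.array([0.00121, 0.93, 1.00, 1.07, 1.85])      # density (g/cm^3)

def hu_to_density(hu):
    # linear interpolation within each section of the CT number scale
    return np.interp(hu, hu_anchors, rho_anchors)

rho = hu_to_density(np.array([-500.0, 50.0, 800.0]))
```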
NASA Astrophysics Data System (ADS)
Cucchetti, E.; Eckart, M. E.; Peille, P.; Porter, F. S.; Pajot, F.; Pointecouteau, E.
2018-04-01
With its array of 3840 Transition Edge Sensors (TESs), the Athena X-ray Integral Field Unit (X-IFU) will provide spatially resolved high-resolution spectroscopy (2.5 eV up to 7 keV) from 0.2 to 12 keV, with an absolute energy scale accuracy of 0.4 eV. Slight changes in the TES operating environment can cause significant variations in its energy response function, which may result in systematic errors in the absolute energy scale. We plan to monitor such changes at pixel level via onboard X-ray calibration sources and correct the energy scale accordingly using a linear or quadratic interpolation of gain curves obtained during ground calibration. However, this may not be sufficient to meet the 0.4 eV accuracy required for the X-IFU. In this contribution, we introduce a new two-parameter gain correction technique, based on both the pulse-height estimate of a fiducial line and the baseline value of the pixels. Using gain functions that simulate ground calibration data, we show that this technique can accurately correct deviations in detector gain due to changes in TES operating conditions such as heat sink temperature, bias voltage, thermal radiation loading and linear amplifier gain. We also address potential optimisations of the onboard calibration source and compare the performance of this new technique with those previously used.
Program Package for the Analysis of High Resolution High Signal-To-Noise Stellar Spectra
NASA Astrophysics Data System (ADS)
Piskunov, N.; Ryabchikova, T.; Pakhomov, Yu.; Sitnova, T.; Alekseeva, S.; Mashonkina, L.; Nordlander, T.
2017-06-01
The program package SME (Spectroscopy Made Easy), designed to analyze stellar spectra using spectral fitting techniques, was updated to support new functionality in VALD (isotopic and hyperfine splitting) and to include grids of NLTE calculations for the energy levels of several chemical elements. SME automatically derives the stellar atmospheric parameters: effective temperature, surface gravity, chemical abundances, radial and rotational velocities, and turbulent velocities, taking into account all the effects that define spectral line formation. The SME package uses the best available grids of stellar atmospheres, which allows spectral analysis of similar accuracy over a wide range of stellar parameters and metallicities, from dwarfs to giants of the B, A, F, G and K spectral classes.
Nanofluid slip flow over a stretching cylinder with Schmidt and Péclet number effects
NASA Astrophysics Data System (ADS)
Md Basir, Md Faisal; Uddin, M. J.; Md. Ismail, A. I.; Bég, O. Anwar
2016-05-01
A mathematical model is presented for three-dimensional unsteady boundary layer slip flow of Newtonian nanofluids containing gyrotactic microorganisms over a stretching cylinder. Both hydrodynamic and thermal slips are included. By applying suitable similarity transformations, the governing equations are transformed into a set of nonlinear ordinary differential equations with appropriate boundary conditions. The transformed nonlinear ordinary differential boundary value problem is then solved using the Runge-Kutta-Fehlberg fourth-fifth order numerical method in Maple 18 symbolic software. The effects of the controlling parameters on the dimensionless velocity, temperature, nanoparticle volume fraction and microorganism motile density functions are illustrated graphically. Comparisons with existing published results indicate good agreement and support the validity and accuracy of our numerical computations. Increasing the bioconvection Schmidt number is observed to depress the motile micro-organism density function. Increasing the thermal slip parameter leads to a decrease in temperature; thermal slip also exerts a strong influence on nano-particle concentration. The flow is accelerated with a positive unsteadiness parameter (accelerating cylinder), and the temperature and micro-organism density function are also increased; however, nano-particle concentration is reduced. Increasing hydrodynamic slip is observed to boost temperatures and micro-organism density, whereas it decelerates the flow and reduces nano-particle concentrations. The study is relevant to nano-biopolymer manufacturing processes.
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri
Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). 
A method of identification that models the FDG blood input in the brain as a function of time and several parameters (IFM) is also analyzed. Nonuniform-sampling SLS (NSLS) is developed to handle the rapid change of the FDG concentration in the blood during the early post-injection stage. Comparisons of the accuracy of the EVAM, SLS, NSLS and IFM identification techniques are made.
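The forward model underlying these blind-identification methods can be sketched in a few lines: the tissue concentration is the convolution of a two-compartment (bi-exponential) impulse response with the blood input. The parameter values and input curve below are purely illustrative assumptions, not taken from the study.

```python
import math

def tissue_response(t, a1, b1, a2, b2):
    # Bi-exponential impulse response of a two-compartment model
    # (a1, b1, a2, b2 are hypothetical amplitudes and rate constants).
    return a1 * math.exp(-b1 * t) + a2 * math.exp(-b2 * t)

def convolve(blood, response, dt):
    # Discrete convolution: tissue concentration = response (*) blood input.
    out = []
    for i in range(len(blood)):
        acc = 0.0
        for j in range(i + 1):
            acc += response[j] * blood[i - j]
        out.append(acc * dt)
    return out

dt = 0.5                                   # sampling interval (min)
times = [k * dt for k in range(20)]
blood = [t * math.exp(-t) for t in times]  # gamma-variate-like blood input
resp = [tissue_response(t, 1.0, 0.3, 0.5, 0.05) for t in times]
tissue = convolve(blood, resp, dt)
```

Blind identification then amounts to estimating the response parameters from several such tissue curves without measuring `blood` directly.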
NASA Astrophysics Data System (ADS)
Singh, Ram Chandra; Ram, Jokhan
2011-11-01
The effects of quadrupole moments on the isotropic-nematic (IN) phase transitions are studied using density-functional theory (DFT) for a Gay-Berne (GB) fluid over a range of length-to-breadth parameters and reduced temperatures. The pair-correlation functions of the isotropic phase, which enter the DFT as input parameters, are found by solving the Percus-Yevick integral equation theory. The method involves expanding the angle-dependent functions appearing in the integral equations in terms of spherical harmonics; the harmonic coefficients are obtained by an iterative algorithm. All harmonic coefficients with l indices up to 6 are considered. The numerical accuracy of the results depends on the number of spherical harmonic coefficients retained for each orientation-dependent function. As the length-to-breadth ratio of quadrupolar GB molecules is increased, the IN transition moves to lower density (and pressure) at a given temperature. The DFT is found to be well suited to studying IN transitions in such fluids. The theoretical results are also compared with computer simulation results wherever they are available.
Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots
NASA Astrophysics Data System (ADS)
WANG, Wei; WANG, Lei; YUN, Chao
2017-03-01
Serial robots are used to handle workpieces with large dimensions, and calibrating the kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified subject to the minimality principle, but the base frame and the kinematic parameters are calibrated indistinctly in a one-step approach. A two-step method of calibrating the kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step, the robot's base coordinate frame is calibrated using the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established in second-power explicit expressions, and the identification model is completed by the least-squares method, requiring only measured position coordinates. The complete subtasks of calibrating the robot's 39 kinematic parameters are finished in the second step. A group of calibration experiments proves that the proposed two-step calibration method improves the average absolute accuracy of industrial robots to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade the robot's absolute positioning accuracy.
NASA Astrophysics Data System (ADS)
Ma, B.; Li, J.; Fan, W.; Ren, H.; Xu, X.
2017-12-01
Leaf area index (LAI) is one of the important parameters of vegetation canopy structure and can effectively represent the growth condition of vegetation. Obtaining LAI by remote sensing greatly improves the accuracy, availability and timeliness of LAI data, which is of great importance to vegetation-related research, such as studies of atmospheric, land surface and hydrological processes. The Heihe River Basin is an inland river basin in northwest China. The basin contains various vegetation types and all kinds of terrain conditions, so studying LAI in this area helps test the accuracy of the model under a complex surface and evaluate its correctness. Moreover, located in the arid west of China, the ecological environment of the Heihe Basin is fragile; LAI is an important parameter representing vegetation growth condition and can help us understand the status of vegetation in the basin. Unlike previous LAI inversion models, the BRDF (bidirectional reflectance distribution function) unified model can be applied to both continuous and discrete vegetation, making it appropriate for complex vegetation distributions. LAI is the key input parameter of the model. We establish an inversion algorithm that retrieves LAI from remote sensing images based on the unified model. First, we determine the vegetation type from a vegetation classification map to obtain the corresponding G function, leaf reflectivity and surface reflectivity. Then we set the ranges and interval values of the leaf area index (LAI), the aggregation index (ζ) and the sky scattered light ratio (β), enter all the parameters into the model to calculate the corresponding reflectivity ρ, and establish a lookup table for each vegetation type. Finally, we invert LAI on the basis of the established lookup table, using the least-squares principle.
We have produced 1 km LAI products from 2000 to 2014, once every 8 days. The results show that the algorithm is stable and can effectively invert LAI in areas with very complex vegetation and terrain conditions.
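The lookup-table inversion described above can be illustrated with a minimal sketch. The reflectance function here is a hypothetical saturating curve standing in for the BRDF unified model, and the grid spacing is an arbitrary choice; neither is taken from the study.

```python
import math

def toy_reflectance(lai):
    # Hypothetical stand-in for the BRDF unified model: reflectance
    # saturates with increasing LAI (not the authors' actual model).
    return 0.05 + 0.4 * (1.0 - math.exp(-0.6 * lai))

# Build the lookup table over the LAI range at a fixed interval.
lai_grid = [0.1 * k for k in range(0, 81)]            # LAI from 0.0 to 8.0
table = [(lai, toy_reflectance(lai)) for lai in lai_grid]

def invert_lai(observed, table):
    # Least-squares inversion: pick the table entry with the
    # minimum squared residual against the observed reflectance.
    best_lai, best_err = None, float("inf")
    for lai, rho in table:
        err = (rho - observed) ** 2
        if err < best_err:
            best_lai, best_err = lai, err
    return best_lai

obs = toy_reflectance(3.0)        # simulated observation for LAI = 3.0
lai_hat = invert_lai(obs, table)  # recovers a LAI close to 3.0
```

In practice the residual would be summed over several bands or view angles per pixel, but the search structure is the same.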
Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters
NASA Astrophysics Data System (ADS)
Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai
2016-04-01
Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to carry out lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images; the lunar terrain can then be reconstructed by photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for calculating the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied, and the observation program and specific solution methods for the installation parameters are introduced. The parametric solution accuracy is analyzed using observations from the PCAM scientific validation experiment, which tests the soundness of the PCAM detection process, ground data processing methods, product quality and so on. The analysis shows that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images by less than 1 pixel, so the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Nikpour, Ahmad
2013-09-01
In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative of a function at a point is directly approximated by a linear combination of all functional values in the global domain, so the principal work is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method, while the latter belongs to the set of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the RBF expansion of the function into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix; to avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function, and the results are compared with those for a constant shape parameter.
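As a rough illustration of the RBF machinery involved, substituting an RBF expansion and solving the resulting linear system, here is a minimal 1-D inverse-multiquadric interpolation with a constant shape parameter. The centers, shape parameter, and test function are arbitrary choices for the sketch, not those used in the paper, and no PDE is being solved here.

```python
import math

def imq(r, c):
    # Inverse multiquadric radial basis function with shape parameter c.
    return 1.0 / math.sqrt(r * r + c * c)

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Interpolate f(x) = sin(x) on a few centers with a constant shape parameter.
centers = [0.0, 0.5, 1.0, 1.5, 2.0]
c = 1.0
A = [[imq(abs(xi - xj), c) for xj in centers] for xi in centers]
w = solve(A, [math.sin(x) for x in centers])

def interp(x):
    # Evaluate the RBF expansion sum_i w_i * phi(|x - x_i|).
    return sum(wi * imq(abs(x - xi), c) for wi, xi in zip(w, centers))
```

The ill-conditioning issue the paper addresses appears here as the matrix `A` becoming nearly singular when centers cluster or `c` grows large.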
State Space Modeling of Time-Varying Contemporaneous and Lagged Relations in Connectivity Maps
Molenaar, Peter C. M.; Beltz, Adriene M.; Gates, Kathleen M.; Wilson, Stephen J.
2017-01-01
Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. PMID:26546863
Tcherniavski, Iouri; Kahrizi, Mojtaba
2008-11-20
Using a gradient optimization method with objective functions formulated in terms of a signal-to-noise ratio (SNR) calculated at given values of the prescribed spatial ground resolution, optimization problems for the geometrical parameters of a distributed optical system and a charge-coupled device of a space-based optical-electronic system are solved for sample optical systems consisting of two and three annular subapertures. The modulation transfer function (MTF) of the distributed aperture is expressed in terms of an average MTF that takes residual image alignment (IA) and optical path difference (OPD) errors into account. The results show optimal solutions of the optimization problems for diverse variable parameters. The information on the magnitudes of the SNR can be used to determine the number of subapertures and their sizes, while the information on the SNR decrease caused by the IA and OPD errors can be useful in designing a beam combination control system, setting its accuracy requirements on the basis of the permissible deterioration in image quality.
Prampolini, Giacomo; Campetella, Marco; De Mitri, Nicola; Livotto, Paolo Roberto; Cacelli, Ivo
2016-11-08
A robust and automated protocol for the derivation of sound force field parameters, suitable for condensed-phase classical simulations, is here tested and validated on several halogenated hydrocarbons, a class of compounds for which standard force fields have often been reported to deliver rather inaccurate performances. The major strength of the proposed protocol is that all of the parameters are derived only from first principles because all of the information required is retrieved from quantum mechanical data, purposely computed for the investigated molecule. This a priori parametrization is carried out separately for the intra- and intermolecular contributions to the force fields, respectively exploiting the Joyce and Picky programs, previously developed in our group. To avoid high computational costs, all quantum mechanical calculations were performed exploiting the density functional theory. Because the choice of the functional is known to be crucial for the description of the intermolecular interactions, a specific procedure is proposed, which allows for a reliable benchmark of different functionals against higher-level data. The intramolecular and intermolecular contributions are eventually joined together, and the resulting quantum mechanically derived force field is thereafter employed in lengthy molecular dynamics simulations to compute several thermodynamic properties that characterize the resulting bulk phase. The accuracy of the proposed parametrization protocol is finally validated by comparing the computed macroscopic observables with the available experimental counterparts. It is found that, on average, the proposed approach is capable of yielding a consistent description of the investigated set, often outperforming the literature standard force fields, or at least delivering results of similar accuracy.
Tong, Yingna; Liu, Xiaobin; Guan, Mingxiu; Wang, Meng; Zhang, Lufang; Dong, Dong; Niu, Ruifang; Zhang, Fei; Zhou, Yunli
2017-01-01
Background: The performance of estimated glomerular filtration rate (eGFR) equations has been shown to vary with the race of the target population, and the eGFR equations have not been validated in Chinese cancer patients receiving chemotherapy. Serum cystatin C (CysC), urea, β2-microglobulin (β2-MG), and creatinine (SCr) were also evaluated in a cohort of Chinese cancer patients. Material/Methods: A total of 1000 cancer patients undergoing combination chemotherapy and 108 healthy volunteers were included in this study, and their renal function parameters were evaluated. The eGFR values were compared with the reference GFR (rGFR) in terms of correlation, consistency, precision, and accuracy. Receiver operating characteristic (ROC) curves were used to evaluate the discriminating ability of the GFR equations and of the serological indicators of renal function. Results: (1) The equations containing CysC had the same varying tendency as rGFR in relation to the chemotherapeutic cycle. (2) eGFRscr+cysc and eGFRChinese scr+cysc worked better than the other equations, as indicated by a stronger correlation, less bias, improved precision, higher accuracy, and greater AUC. (3) CysC was more sensitive than the other serological indicators for identifying early renal injury. (4) Each parameter showed different characteristics in subgroups of Chinese cancer patients. Conclusions: CysC was the most sensitive marker of early renal injury. Among the 8 most commonly used eGFR equations, the combination equations eGFRscr+cysc and eGFRChinese scr+cysc exhibited the best performance in assessing the renal function of Chinese cancer patients. PMID:28623247
Bayesian Regression of Thermodynamic Models of Redox Active Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Katherine
Finding a suitable functional redox material is a critical challenge to achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes fitting the model parameters to the data, comparing models with different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting and one demonstrating a material for thermal storage. Using Bayesian inference and Markov chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, except for some deviations at the edges of the data input ranges. The evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary; comparing evidence values demonstrated that the parameter was needed on one data set and not significantly helpful on another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).
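The Metropolis step at the heart of MCMC parameter estimation can be sketched in pure Python. This toy sampler fits a two-parameter linear model under a flat prior; the model, noise level, step size, and chain length are illustrative assumptions and do not reproduce the UQTk workflow or the thermodynamic models used in the project.

```python
import math
import random

random.seed(1)

# Synthetic data from a hypothetical model y = a*x + b with Gaussian noise.
true_a, true_b, sigma = 2.0, 1.0, 0.1
xs = [0.1 * k for k in range(20)]
ys = [true_a * x + true_b + random.gauss(0.0, sigma) for x in xs]

def log_likelihood(a, b):
    # Gaussian log-likelihood of the data under parameters (a, b).
    return -sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

def metropolis(n_steps, step=0.05):
    a, b = 0.0, 0.0                        # arbitrary starting point
    ll = log_likelihood(a, b)
    samples = []
    for _ in range(n_steps):
        a_new = a + random.gauss(0.0, step)
        b_new = b + random.gauss(0.0, step)
        ll_new = log_likelihood(a_new, b_new)
        # Accept with probability min(1, exp(ll_new - ll)) (flat prior).
        if math.log(random.random()) < ll_new - ll:
            a, b, ll = a_new, b_new, ll_new
        samples.append((a, b))
    return samples

samples = metropolis(5000)
burn = samples[2500:]                      # discard burn-in
a_hat = sum(s[0] for s in burn) / len(burn)
b_hat = sum(s[1] for s in burn) / len(burn)
```

Posterior means and uncertainties follow directly from the retained samples, which is the same mechanism used above to propagate uncertainty into the entropy.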
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A
2009-01-05
Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform, gender-specific density function. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the models currently used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. The new model was compared to the accepted models by determining the error between each model's trunk inertial estimates and those from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy of the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as the elderly or individuals with a distinct morphology (e.g. obese). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.
Shayegh, Farzaneh; Sadri, Saeed; Amirfattahi, Rassoul; Ansari-Asl, Karim; Bellanger, Jean-Jacques; Senhadji, Lotfi
2014-01-01
In this paper, a model-based approach is presented to quantify the effective synchrony between hippocampal areas from depth-EEG signals. This approach is based on the parameter identification procedure of a realistic Multi-Source/Multi-Channel (MSMC) hippocampal model that simulates the function of different areas of the hippocampus. In the model it is supposed that the observed signals, recorded using intracranial electrodes, are generated by some hidden neuronal sources according to some parameters. An algorithm is proposed to extract the intrinsic (relative solely to one hippocampal area) and extrinsic (coupling coefficients between two areas) model parameters simultaneously by a Maximum Likelihood (ML) method. The coupling coefficients are considered as the measure of effective synchronization. This work can be considered an application of Dynamic Causal Modeling (DCM) that enables us to understand effective synchronization changes during the transition from the inter-ictal to the pre-ictal state. The algorithm is first validated using synthetic datasets. Then, by extracting the coupling coefficients of real depth-EEG signals with the proposed approach, it is observed that the coupling values show no significant difference between ictal, pre-ictal and inter-ictal states, i.e., both increases and decreases of the coupling coefficients are observed in all states. However, taking the values of the intrinsic parameters into account, the pre-seizure state can be distinguished from the inter-ictal state. It is claimed that seizures start to appear when there are seizure-related physiological parameters on the onset channel and its coupling coefficient toward other channels increases simultaneously. By considering both intrinsic and extrinsic parameters as the feature vector, inter-ictal, pre-ictal and ictal activities are discriminated from each other with an accuracy of 91.33%. PMID:25061815
NASA Astrophysics Data System (ADS)
Weck, Philippe F.; Kim, Eunja; Greathouse, Jeffery A.; Gordon, Margaret E.; Bryan, Charles R.
2018-04-01
Elastic and thermodynamic properties of negative thermal expansion (NTE) α -ZrW2O8 have been calculated using PBEsol and PBE exchange-correlation functionals within the framework of density functional perturbation theory (DFPT). Measured elastic constants are reproduced within ∼ 2 % with PBEsol and ∼ 6 % with PBE. The thermal evolution of the Grüneisen parameter computed within the quasi-harmonic approximation exhibits negative values below the Debye temperature, consistent with observation. The standard molar heat capacity is predicted to be CP0 = 192.2 and 193.8 J mol-1K-1 with PBEsol and PBE, respectively. These results suggest superior accuracy of DFPT/PBEsol for studying the lattice dynamics, elasticity and thermodynamics of NTE materials.
Boareto, Marcelo; Yamagishi, Michel E B; Caticha, Nestor; Leite, Vitor B P
2012-10-01
In protein databases there is a substantial number of proteins structurally determined but without function annotation. Understanding the relationship between function and structure can be useful to predict function on a large scale. We have analyzed the similarities in global physicochemical parameters for a set of enzymes which were classified according to the four Enzyme Commission (EC) hierarchical levels. Using relevance theory we introduced a distance between proteins in the space of physicochemical characteristics. This was done by minimizing a cost function of the metric tensor built to reflect the EC classification system. Using an unsupervised clustering method on a set of 1025 enzymes, we obtained no relevant cluster formation compatible with the EC classification. The distributions of distances between enzymes from the same EC group and from different EC groups were compared by histograms. Such analysis was also performed using sequence alignment similarity as a distance. Our results suggest that global structure parameters are not sufficient to segregate enzymes according to the EC hierarchy, indicating that the features essential for function are local rather than global. Consequently, methods for predicting function based on global attributes should not obtain high accuracy in predicting main EC classes without relying on similarities between enzymes from the training and validation datasets. Furthermore, these results are consistent with a substantial number of studies suggesting that function evolves fundamentally by recruitment, i.e., the same protein motif or fold can be used to perform different enzymatic functions, and a few specific amino acids (AAs) are actually responsible for enzyme activity. These essential amino acids should belong to active sites, and an effective method for predicting function should be able to recognize them. Copyright © 2012 Elsevier Ltd. All rights reserved.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated. To effectively simulate real-world processes, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. Performing these analyses via a Python library also allows users to combine techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters within 2% accuracy of their known values.
Our initial tests indicate that the developed interface to the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
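The calibration loop described above, tuning input parameters until the misfit between simulated and observed outputs is minimized, can be sketched with a toy forward model and a finite-difference gradient descent. The forward model and both conductivity-like parameters here are hypothetical stand-ins; this is not the actual permafrost heat-flow model or the Dakota interface.

```python
def model(k1, k2):
    # Stand-in linear forward model mapping two conductivity-like
    # parameters to three "measured" outputs (purely illustrative).
    return [k1 * 0.5 + k2 * 0.1, k1 * 0.2 + k2 * 0.7, k1 + k2]

true_params = (1.8, 0.9)
observed = model(*true_params)          # synthetic "observations"

def objective(k1, k2):
    # Sum of squared differences between simulation and observations.
    sim = model(k1, k2)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

def gradient_descent(k1, k2, lr=0.1, h=1e-6, iters=500):
    for _ in range(iters):
        f0 = objective(k1, k2)
        # Forward finite-difference gradient of the objective.
        g1 = (objective(k1 + h, k2) - f0) / h
        g2 = (objective(k1, k2 + h) - f0) / h
        k1 -= lr * g1
        k2 -= lr * g2
    return k1, k2

k1, k2 = gradient_descent(1.0, 1.0)     # recovers roughly (1.8, 0.9)
```

As the abstract notes, this gradient-based strategy is only reliable when the objective has a single minimum; multimodal objectives call for global methods such as genetic optimization.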
Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary
1998-01-01
The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with the residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ∼ 0, rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
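The van Genuchten retention curve with the Mualem restriction, and the Mualem-based conductivity prediction it feeds, can be written down directly. The parameter values below are illustrative textbook-style values for a coarse sand, not those fitted in the study, and the units are nominal.

```python
def theta_vg(h, theta_r, theta_s, alpha, n):
    # van Genuchten (1980) retention curve with the Mualem restriction
    # m = 1 - 1/n; h is the (positive) suction head.
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

def k_mualem(theta, theta_r, theta_s, n, k_sat):
    # Mualem (1976) conductivity predicted from the fitted retention
    # curve, scaled by the saturated conductivity k_sat.
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)   # effective saturation
    return k_sat * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Illustrative parameters for a coarse sand (hypothetical, not fitted here):
# theta_r, theta_s [-], alpha [1/cm], n [-], k_sat [cm/day].
theta_r, theta_s, alpha, n, k_sat = 0.045, 0.43, 0.145, 2.68, 29.7

theta = theta_vg(50.0, theta_r, theta_s, alpha, n)   # theta at h = 50 cm
k = k_mualem(theta, theta_r, theta_s, n, k_sat)      # predicted K(theta)
```

Scaling with one measured unsaturated K value, as the abstract recommends, would replace `k_sat` with a constant chosen so the curve passes through that measurement.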
[Design of Portable Spirometer Based on Internet of Things of Medicine].
He, Yichen; Yang, Bo; Xiong, Shiqi; Li, Qing
2018-02-08
A portable device for measuring common lung function parameters is presented in this paper. A single-chip microcomputer serves as the master control block to collect and process data from a high-accuracy gas pressure sensor; parametric calibration and linear interpolation are used to compute the Forced Vital Capacity (FVC), Peak Expiratory Flow (PEF), Forced Expiratory Volume in one second (FEV1), and the FEV1/FVC ratio. The measured parameters can also be uploaded to an intelligent mobile terminal through a wireless transmission module. The device clearly displays the expiratory volume-time curve and the final parameters, with a measurement error of less than 5%. In addition, the device is small and convenient, suitable both for clinical application and for home use.
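The FEV1/FVC computation by linear interpolation on a sampled volume-time curve can be sketched as follows (the sampling layout and function names are illustrative, not the device firmware):

```python
def fev1_fvc(times_s, volumes_l):
    """FVC, FEV1 and FEV1/FVC from a sampled expiratory volume-time curve.
    FEV1 is read at t = 1 s by linear interpolation between samples."""
    fvc = max(volumes_l)
    fev1 = volumes_l[-1]  # fallback if the recording is shorter than 1 s
    pairs = list(zip(times_s, volumes_l))
    for (t0, v0), (t1, v1) in zip(pairs, pairs[1:]):
        if t0 <= 1.0 <= t1:
            fev1 = v0 + (v1 - v0) * (1.0 - t0) / (t1 - t0)
            break
    return fvc, fev1, fev1 / fvc
```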
A Procedure for High Resolution Satellite Imagery Quality Assessment
Crespi, Mattia; De Vendictis, Laura
2009-01-01
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify whether their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality also at the final user level. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Suttles, J. T.
1977-01-01
One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
The performance and relationship among range-separated schemes for density functional theory
NASA Astrophysics Data System (ADS)
Nguyen, Kiet A.; Day, Paul N.; Pachter, Ruth
2011-08-01
The performance and relationship among different range-separated (RS) hybrid functional schemes are examined using the Coulomb-attenuating method (CAM) with different values for the fractions of exact Hartree-Fock (HF) exchange (α), long-range HF (β), and a range-separation parameter (μ), where the cases of α + β = 1 and α + β = 0 were designated as CA and CA0, respectively. Attenuated PBE exchange-correlation functionals with α = 0.20 and μ = 0.20 (CA-PBE) and α = 0.25 and μ = 0.11 (CA0-PBE) are closely related to the LRC-ωPBEh and HSE functionals, respectively. Time-dependent density functional theory calculations were carried out for a number of classes of molecules with varying degrees of charge-transfer (CT) character to provide an assessment of the accuracy of excitation energies from the CA functionals and a number of other functionals with different exchange hole models. Functionals that provided reasonable estimates for local and short-range CT transitions were found to give large errors for long-range CT excitations. In contrast, functionals that afforded accurate long-range CT excitation energies significantly overestimated energies for short-range CT and local transitions. The effects of exchange hole models and parameters developed for RS functionals for CT excitations were analyzed in detail. The comparative analysis across compound classes provides a useful benchmark for CT excitations.
Improvement on Timing Accuracy of LIDAR for Remote Sensing
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.
2018-05-01
The traditional timing discrimination technique for laser rangefinding in remote sensing suffers from low measurement performance and large error, and cannot meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by repeatedly amplifying the received signal. The timing information is then sampled, and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. In this way, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by amplifying the received signal and fitting the parameters, achieving a timing accuracy of 4.63 ps.
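The leading-edge discrimination step, i.e. timing the first threshold crossing of the sampled return pulse, can be sketched in a few lines (the sampling format is an assumption; the paper's MATLAB fitting of timing points across amplification stages is not reproduced here):

```python
def leading_edge_time(times, samples, threshold):
    """Leading-edge discrimination: time of the first threshold crossing of
    the sampled return pulse, refined by linear interpolation between samples."""
    pairs = list(zip(times, samples))
    for (t0, s0), (t1, s1) in zip(pairs, pairs[1:]):
        if s0 < threshold <= s1:
            return t0 + (threshold - s0) * (t1 - t0) / (s1 - s0)
    return None  # pulse never reached the threshold
```

Note that doubling the sample amplitudes moves the crossing time forward, which is the effect the method exploits.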
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is determining the value of each network input parameter, i.e. the number of hidden layers, learning rate, momentum, and RMS. This study tested the capability of artificial neural networks in the prediction of erosion risk with several input parameters through multiple simulations to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, obtained in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.
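The learning-rate and momentum parameters tuned in these simulations correspond to the standard gradient-descent-with-momentum update, sketched here in a generic textbook form (not the specific software used in the study):

```python
def momentum_update(w, grad, velocity, lr=0.01, momentum=0.5):
    """One gradient-descent-with-momentum step: the velocity accumulates a
    decayed history of gradients, and the weights move along the velocity."""
    v_new = [momentum * v - lr * g for v, g in zip(velocity, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new
```

The defaults mirror the best ANN 14 combination reported above (LR 0.01, M 0.5).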
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
NASA Astrophysics Data System (ADS)
Xie, Fei; Tang, Jinyuan; Wang, Ailun; Shuai, Cijun; Wang, Qingshan
2018-05-01
In this paper, a unified solution for vibration analysis of functionally graded carbon nanotube reinforced composite (FG-CNTRC) cylindrical panels with general elastic supports is carried out using the Ritz method. The excellent accuracy and reliability of the present method are verified by comparison with results for the classical boundary cases found in the literature. New results are given for the vibration characteristics of FG-CNTRC cylindrical panels with various boundary conditions. The effects of the elastic restraint parameters, thickness, subtended angle and volume fraction of carbon nanotubes on the free vibration characteristics of the cylindrical panels are also reported.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
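The linearized parameter-uncertainty prediction rests on the approximate covariance (JᵀWJ)⁻¹ of a weighted least-squares fit. A minimal sketch for a two-parameter linear model (the respiratory impedance models themselves are nonlinear, so this stands in for the linearized case):

```python
def wls_fit(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x with per-point errors sigma.
    Parameter uncertainties come from the linearized covariance (J^T W J)^-1,
    the same approximation used for joint confidence regions."""
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx                 # det(J^T W J)
    a = (Sxx * Sy - Sx * Sxy) / det         # intercept estimate
    b = (S * Sxy - Sx * Sy) / det           # slope estimate
    sa = (Sxx / det) ** 0.5                 # std. uncertainty of a
    sb = (S / det) ** 0.5                   # std. uncertainty of b
    return a, b, sa, sb
```

The choice of weights here plays the same role as the criterion-function weighting discussed in the abstract: it determines how each data representation contributes to the parameter uncertainties.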
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. 
The authors investigated the difference between the double and triple Gaussian kernel models. They found that the difference between the two studied kernel models appeared at mid-depths, and that the accuracy of prediction with the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the double Gaussian kernel model was employed, the accuracy of calculations of the absolute dose at the center of the SOBP varied with irradiation conditions, and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements, within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high-energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend the influence of the double and triple Gaussian kernel models on the accuracy of dose calculations.
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to restore the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of gamma-quanta families independently of the degree of overlap of the gamma quanta, and yields the maximum admissible accuracy in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds by an order of magnitude the accuracy obtained by integral methods.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
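When parameter measurement errors are independent, influence coefficients combine by root-sum-square into an overall accuracy estimate; a minimal sketch (the coefficient values in the test are illustrative, not the F404 figures):

```python
def net_thrust_uncertainty(influence, accuracy):
    """Root-sum-square estimate of overall net-thrust accuracy (percent),
    given influence coefficients (% thrust change per 1% parameter change)
    and estimated parameter measurement accuracies (percent)."""
    return sum((ic * da) ** 2 for ic, da in zip(influence, accuracy)) ** 0.5
```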
Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela
2016-01-01
In this study, bio-inspired computing is exploited for solving systems of nonlinear equations using variants of genetic algorithms (GAs) as a tool for global search, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the system of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic GA-SQP approach are designed by taking different sets of reproduction routines in the optimization process. Performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. The accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through statistics based on different performance indices for accuracy and complexity.
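The global-search/local-refinement pattern can be sketched in a few lines. Here a toy GA minimizes the mean-square residual of a small nonlinear system, with a simple coordinate-descent local search standing in for SQP (the system, population settings and refinement scheme are illustrative only):

```python
import random

def mse(sol):
    """Mean-square residual of the toy system x + y = 3, x*y = 2."""
    x, y = sol
    return ((x + y - 3.0) ** 2 + (x * y - 2.0) ** 2) / 2.0

def ga_solve(pop_size=60, gens=200, seed=1):
    """Toy memetic scheme: GA global search, then coordinate-descent
    refinement (a simple stand-in for the SQP step)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)
        parents = pop[: pop_size // 2]                  # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append([(ai + bi) / 2.0 + rng.gauss(0.0, 0.1)
                             for ai, bi in zip(a, b)])  # crossover + mutation
        pop = parents + children
    best = min(pop, key=mse)
    step = 0.05
    while step > 1e-9:                                  # local refinement
        improved = False
        for i in (0, 1):
            for d in (step, -step):
                cand = list(best)
                cand[i] += d
                if mse(cand) < mse(best):
                    best, improved = cand, True
        if not improved:
            step *= 0.5
    return best
```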
Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds
Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang
2015-01-01
Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm with a cirrus optical depth (COD) at 0.55 μm of 0.4 and in nadir view. A correction term in the COD linear function was added to the GSW algorithm to extend the GSW algorithm to cirrus cloudy conditions. The COD was acquired by a look up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear function was expressed as a multiple linear model of the top of the atmospheric brightness temperatures of MODIS channels 31–34 and as the difference between split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total errors from all the uncertainties of input parameters, extension algorithm accuracy, and GSW algorithm accuracy were less than 2.5 K in nadir view. Finally, the Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K using the proposed extension algorithm for cirrus skies. PMID:25928059
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consist of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in a Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in the computer program Scout Trajectory Error Propagation (STEP), described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
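The core of such Monte Carlo error statistics is a sample covariance over the observed error sets; a minimal sketch (not the STEP program itself):

```python
def covariance_matrix(samples):
    """Sample covariance of trajectory-parameter error vectors, e.g. ~50 sets
    of (altitude error, velocity error) observed at stage burnout."""
    n = len(samples)
    dim = len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(dim)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples) / (n - 1)
             for j in range(dim)]
            for i in range(dim)]
```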
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircrafts to construct linear parameter-varying (LPV) reduced order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis in broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and heuristics requirement to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that X-56A ROM with less than one-seventh the number of states relative to the original model is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of the grid points in the parameter space where flight dynamics varies dramatically to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory efforts downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
Dielectric elastomer for stretchable sensors: influence of the design and material properties
NASA Astrophysics Data System (ADS)
Jean-Mistral, C.; Iglesias, S.; Pruvost, S.; Duchet-Rumeau, J.; Chesné, S.
2016-04-01
Dielectric elastomers exhibit extended capabilities as flexible sensors for the detection of load distributions, pressure or large deformations. Tracking human movements of the fingers or arms could be useful for the reconstruction of sporting gestures, or to control a human-like robot. New measurement methods have been proposed in a number of publications to improve the sensitivity and accuracy of the sensing method. Generally, the associated modelling remains simple (RC or RC transmission line). The material parameters are considered constant or assumed to have a negligible effect, which can lead to a serious reduction in accuracy. Comparisons between measurements and modelling require care and skill, and can be tricky. Thus, we propose here a comprehensive model, taking into account the influence of the material properties on the performance of the dielectric elastomer sensor (DES). Various parameters influencing the characteristics of the sensors have been identified: dielectric constant, hyper-elasticity. The variations of these parameters as a function of the strain affect the linearity and sensitivity of the sensor by a few percent. The sensitivity of the DES is also evaluated by changing geometrical parameters (initial thickness) and its design (rectangular and dog-bone shapes). We discuss the impact of the shape regarding stress. Finally, DESs comprising a silicone elastomer sandwiched between two highly conductive stretchable electrodes were manufactured and investigated. Classic and reliable LCR measurements are detailed. Experimental results validate our numerical model of a large-strain sensor (>50%).
On the parametrization of lateral dose profiles in proton radiation therapy.
Bellinzona, V E; Ciocca, M; Embriaco, A; Fontana, A; Mairani, A; Mori, M; Parodi, K
2015-07-01
The accurate evaluation of the lateral dose profile is an important issue in the field of proton radiation therapy. The beam spread, due to Multiple Coulomb Scattering (MCS), is described by Molière's theory. To take into account also the contribution of nuclear interactions, modern Treatment Planning Systems (TPSs) generally approximate the dose profiles by a sum of Gaussian functions. In this paper we have compared different parametrizations of the lateral dose profile of protons in water at therapeutic energies, with the goal of improving the performance of current treatment planning. We simulated typical dose profiles at the CNAO (Centro Nazionale di Adroterapia Oncologica) beamline with the FLUKA code and validated them with data taken at CNAO at different energies and depths. We then performed best fits of the lateral dose profiles for different functions using ROOT and MINUIT. The accuracy of the best fits was analyzed by evaluating the reduced χ², the number of free parameters of the functions and the calculation time. The best results were obtained with the triple Gaussian and double Gaussian Lorentz-Cauchy functions, which have 6 parameters, but good results were also obtained with the so-called Gauss-Rutherford function, which has only 4 parameters. The comparison of the studied functions with accurate and validated Monte Carlo calculations and with experimental data from CNAO led us to propose an original parametrization, the Gauss-Rutherford function, to describe the lateral dose profiles of proton beams. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
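The sum-of-Gaussians approximation used by treatment planning systems can be sketched as a weighted double Gaussian (weights and widths in the test are illustrative; the fitted triple Gaussian and Gauss-Rutherford forms discussed above add further or different terms):

```python
import math

def double_gaussian(x, w, sigma1, sigma2):
    """Normalized lateral dose profile as a weighted sum of a narrow core
    Gaussian (sigma1) and a broad halo Gaussian (sigma2), weight w on the halo."""
    g1 = math.exp(-x * x / (2.0 * sigma1 ** 2)) / (sigma1 * math.sqrt(2.0 * math.pi))
    g2 = math.exp(-x * x / (2.0 * sigma2 ** 2)) / (sigma2 * math.sqrt(2.0 * math.pi))
    return (1.0 - w) * g1 + w * g2
```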
Schmidt, Robert L; Factor, Rachel E; Affolter, Kajsa E; Cook, Joshua B; Hall, Brian J; Narra, Krishna K; Witt, Benjamin L; Wilson, Andrew R; Layfield, Lester J
2012-01-01
Diagnostic test accuracy (DTA) studies on fine-needle aspiration cytology (FNAC) often show considerable variability in diagnostic accuracy between study centers. Many factors affect the accuracy of FNAC. A complete description of the testing parameters would help make valid comparisons between studies and determine causes of performance variation. We investigated the manner in which test conditions are specified in FNAC DTA studies to determine which parameters are most commonly specified and the frequency with which they are specified and to see whether there is significant variability in reporting practice. We identified 17 frequently reported test parameters and found significant variation in the reporting of these test specifications across studies. On average, studies reported 5 of the 17 items that would be required to specify the test conditions completely. A more complete and standardized reporting of methods, perhaps by means of a checklist, would improve the interpretation of FNAC DTA studies.
RICO: A NEW APPROACH FOR FAST AND ACCURATE REPRESENTATION OF THE COSMOLOGICAL RECOMBINATION HISTORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fendt, W. A.; Wandelt, B. D.; Chluba, J.
2009-04-15
We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in ≈10 ms, a speedup of at least 10⁶ over the full recombination code that was used here. Also RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.
Johnson, M L; Halvorson, H R; Ackers, G K
1976-11-30
Resolution of the linkage functions between oxygenation and subunit association-dissociation equilibria in human hemoglobin into the constituent microscopic terms has been explored by numerical simulation and least-squares analysis. The correlation properties between parameters have been studied using several choices of parameter sets in order to optimize resolution. It is found that, with currently available levels of experimental precision and ranges of variables, neither linkage function can provide sufficient resolution of all the desired energy terms. The most difficult quantities to resolve always include the dimer-tetramer association constant for unliganded hemoglobin and the oxygen binding constants to alphabeta dimers. A feasible experimental strategy for overcoming these difficulties lies in independent determination of the dimer-tetramer association constants for unliganded and fully oxygenated hemoglobin. These constants, in combination with the median ligand concentration, provide an estimate of the energy for total oxygenation of tetramers which is essentially independent of the other constituent energies. It is shown that if these separately determinable parameters are fixed, the remaining terms may be estimated to good accuracy using data which represent either linkage function. In general it is desirable to combine information from both types of experimental quantities. A previous paper (Mills, F.C., Johnson, M.L., and Ackers, G.K. (1976), Biochemistry, 15, the preceding paper in this issue) describes the experimental implementation of this strategy.
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
Rosewater, David; Ferreira, Summer; Schoenwald, David; ...
2018-01-25
Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts enabling better control over BESS charge/discharge schedules.
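A minimal bucket-style SoC forecasting model of the kind whose parameters (here, charge and discharge efficiencies) would be selected from operational data can be sketched as follows (the model form and parameter names are assumptions, not the paper's formulations):

```python
def forecast_soc(soc0, power_kw, dt_h, capacity_kwh, eff_chg=0.95, eff_dis=0.95):
    """Forecast state of charge over a power schedule (positive = charging).
    Returns the SoC trace, clipped to the physical limits [0, 1]."""
    soc = soc0
    trace = [soc]
    for p in power_kw:
        if p >= 0:
            soc += eff_chg * p * dt_h / capacity_kwh       # charging losses
        else:
            soc += p * dt_h / (eff_dis * capacity_kwh)     # discharging losses
        soc = min(1.0, max(0.0, soc))
        trace.append(soc)
    return trace
```

A controller would check such a trace against SoC limits before committing a charge/discharge schedule to grid services.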
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor for why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
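A sequential sampling trial with a Threshold parameter and a Criterion implemented as a starting-point bias can be simulated in a few lines (the mapping of Criterion to a start-point offset is an assumption for illustration; the paper's model may parametrize bias differently):

```python
import random

def simulate_trial(drift, threshold, criterion, noise=1.0, dt=0.01, seed=None):
    """One sequential-sampling trial: evidence accumulates from a biased
    start point until +threshold or -threshold is crossed.
    Returns (response, decision_time)."""
    rng = random.Random(seed)
    x = criterion          # response bias as a starting-point offset
    t = 0.0
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
        t += dt
    return ("conflict" if x > 0 else "no_conflict"), t
```

Raising `threshold` trades speed for accuracy (longer decision times, fewer noise-driven errors), which is exactly the manipulation the speed-accuracy instructions are expected to load on.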
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosewater, David; Ferreira, Summer; Schoenwald, David
Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
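The role of an SoC forecast in checking schedule feasibility can be sketched with a simple power-integration model. The efficiency values and the clamping to [0, 1] below are illustrative assumptions, not either of the paper's two model formulations:

```python
def forecast_soc(soc0, power_schedule, capacity_kwh, dt_h=1.0,
                 eta_chg=0.95, eta_dis=0.95):
    """Forecast SoC over a charge(+)/discharge(-) power schedule in kW.
    Hypothetical single-efficiency model: charging stores eta_chg * energy,
    discharging draws delivered_energy / eta_dis from the battery."""
    soc, trace = soc0, [soc0]
    for p in power_schedule:
        if p >= 0:                                  # charging
            soc += eta_chg * p * dt_h / capacity_kwh
        else:                                       # discharging
            soc += p * dt_h / (eta_dis * capacity_kwh)
        soc = min(1.0, max(0.0, soc))               # physical SoC limits
        trace.append(soc)
    return trace

# two hours of 10 kW charging, one hour of 20 kW discharge, one idle hour
trace = forecast_soc(0.5, [10, 10, -20, 0], capacity_kwh=100)
```

A controller would reject any schedule whose forecast trace hits the 0 or 1 clamp, since the battery could not actually deliver the scheduled power there.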
ERP-Variations on Time Scales Between Hours and Months Derived From GNSS Observations
NASA Astrophysics Data System (ADS)
Weber, R.; Englich, S.; Mendes Cerveira, P.
2007-05-01
Current observations gained by the space geodetic techniques, especially VLBI, GPS and SLR, allow for the determination of Earth Rotation Parameters (ERPs - polar motion, UT1/LOD) with unprecedented accuracy and temporal resolution. This presentation focuses on contributions to the ERP recovery provided by satellite navigation systems (primarily GPS). The IGS (International GNSS Service), for example, currently provides daily polar motion with an accuracy of less than 0.1 mas and LOD estimates with an accuracy of a few microseconds. To study more rapid variations in polar motion and LOD, we first established a high-resolution (hourly) ERP time series from GPS observation data of the IGS network covering the year 2005. The calculations were carried out by means of the Bernese GPS Software V5.0 considering observations from a subset of 113 fairly stable stations out of the IGS05 reference frame sites. From these ERP time series the amplitudes of the major diurnal and semidiurnal variations caused by ocean tides are estimated. After correcting the series for ocean tides, the remaining geodetically observed excitation is compared with variations of atmospheric excitation (AAM). To study the sensitivity of the estimates with respect to the applied mapping function we applied both the widely used NMF (Niell Mapping Function) and the VMF1 (Vienna Mapping Function 1). In addition, based on computations covering two months in 2005, the potential improvement due to the use of additional GLONASS data will be discussed.
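Estimating diurnal and semidiurnal amplitudes from an hourly ERP series, as described above, amounts to a least-squares fit of sinusoids at the tidal periods. A self-contained sketch on synthetic data, fitting only a mean plus the 24 h and 12 h constituents (the real analysis fits many tidal terms):

```python
import math

def fit_tidal_amplitudes(t_hours, y):
    """Least-squares fit of mean + diurnal (24 h) + semidiurnal (12 h)
    cosine/sine terms; returns the 5 coefficients via normal equations."""
    def basis(t):
        w1, w2 = 2 * math.pi / 24.0, 2 * math.pi / 12.0
        return [1.0, math.cos(w1 * t), math.sin(w1 * t),
                math.cos(w2 * t), math.sin(w2 * t)]
    X = [basis(t) for t in t_hours]
    n = 5
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the normal equations
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

# synthetic 48 h half-hourly series: mean 0.2 plus a semidiurnal cosine
t = [i * 0.5 for i in range(96)]
y = [0.2 + 1.5 * math.cos(2 * math.pi * ti / 12.0) for ti in t]
coef = fit_tidal_amplitudes(t, y)
```

Because the synthetic signal lies exactly in the span of the basis, the semidiurnal cosine coefficient `coef[3]` is recovered to machine precision.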
SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Iyengar, P
2016-06-15
Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early-stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary computation algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance can be obtained by combining all features.
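The multi-objective idea above, judging a model by sensitivity and specificity together rather than by overall accuracy, reduces to keeping the nondominated (Pareto-optimal) set, which NSGA-II maintains during its search. A minimal sketch with made-up (sensitivity, specificity) pairs, not the study's reported values:

```python
def nondominated(points):
    """Return the Pareto-optimal subset when maximizing both objectives
    (here: sensitivity, specificity). Assumes no duplicate points."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# four hypothetical candidate models: only the first is nondominated
models = [(0.58, 0.975), (0.42, 0.925), (0.50, 0.90), (0.58, 0.925)]
front = nondominated(models)
```

A single-accuracy criterion could rank a high-specificity, low-sensitivity model first on imbalanced data; the Pareto front keeps every trade-off that is not strictly beaten on both axes.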
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. 
The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
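The two-step scheme above, a cheap interpolated surrogate for the expensive forward model followed by quasi-Monte Carlo sampling of the surrogate posterior, can be sketched in one dimension (where a sparse grid reduces to an ordinary grid, and piecewise-linear interpolation stands in for the polynomial approximation). The forward model, observation, and noise level below are invented for illustration:

```python
import math

def halton(i, base=2):
    """i-th element of the base-2 van der Corput low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def expensive_forward(theta):
    """Stand-in for a costly simulation (here just theta**2)."""
    return theta ** 2

# 1) build a cheap surrogate of the forward model on a coarse grid
nodes = [k / 8 for k in range(9)]
values = [expensive_forward(x) for x in nodes]
def surrogate(theta):                 # piecewise-linear interpolation
    k = min(int(theta * 8), 7)
    w = theta * 8 - k
    return (1 - w) * values[k] + w * values[k + 1]

# 2) evaluate an (unnormalized) Gaussian-likelihood posterior on
#    quasi-Monte Carlo samples of the unit parameter interval
obs, sigma = 0.25, 0.1
def post(theta):
    return math.exp(-0.5 * ((surrogate(theta) - obs) / sigma) ** 2)

samples = [halton(i + 1) for i in range(1000)]
weights = [post(th) for th in samples]
mean = sum(t * w for t, w in zip(samples, weights)) / sum(weights)
```

Every posterior evaluation touches only the surrogate, so the expensive model is called just nine times regardless of how many quasi-random samples are drawn.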
Towards soil property retrieval from space: Proof of concept using in situ observations
NASA Astrophysics Data System (ADS)
Bandara, Ranmalee; Walker, Jeffrey P.; Rüdiger, Christoph
2014-05-01
Soil moisture is a key variable that controls the exchange of water and energy fluxes between the land surface and the atmosphere. However, the temporal evolution of soil moisture is neither easy to measure nor monitor at large scales because of its high spatial variability. This is mainly a result of the local variation in soil properties and vegetation cover. Thus, land surface models are normally used to predict the evolution of soil moisture and yet, despite their importance, these models are based on low-resolution soil property information or typical values. Therefore, the availability of more accurate and detailed soil parameter data than are currently available is vital, if regional or global soil moisture predictions are to be made with the accuracy required for environmental applications. The proposed solution is to estimate the soil hydraulic properties via model calibration to remotely sensed soil moisture observations, with in situ observations used as a proxy in this proof of concept study. Consequently, the feasibility is assessed, and the achievable level of accuracy determined, for soil hydraulic property estimation of duplex soil profiles in a semi-arid environment using near-surface soil moisture observations under naturally occurring conditions. The retrieved soil hydraulic parameters were then assessed by their reliability to predict the root zone soil moisture using the Joint UK Land Environment Simulator model. When using parameters that were retrieved using soil moisture observations, the root zone soil moisture was predicted to within an accuracy of 0.04 m³/m³, which is an improvement of ∼0.025 m³/m³ on predictions that used published values or pedo-transfer functions.
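The calibration step, retrieving a soil hydraulic parameter by matching modeled to observed near-surface soil moisture, can be sketched with a toy bucket model standing in for JULES. The drainage parameterization and all numbers below are invented:

```python
def bucket_model(k_drain, rain, sm0=0.30, sm_max=0.45):
    """Toy daily soil-moisture model: gain from rain (m3/m3 equivalent),
    then lose a fraction k_drain by drainage. A stand-in for JULES."""
    sm, out = sm0, []
    for r in rain:
        sm = min(sm_max, sm + r) * (1.0 - k_drain)
        out.append(sm)
    return out

def calibrate(rain, observed, candidates):
    """Pick the drainage parameter minimizing RMSE against observations."""
    def rmse(k):
        sim = bucket_model(k, rain)
        return (sum((s - o) ** 2 for s, o in zip(sim, observed))
                / len(sim)) ** 0.5
    return min(candidates, key=rmse)

rain = [0.02, 0.0, 0.05, 0.0, 0.0, 0.01]
truth = bucket_model(0.10, rain)          # synthetic "observations"
k_hat = calibrate(rain, truth, [i / 100 for i in range(1, 30)])
```

With noise-free synthetic observations the grid search recovers the generating parameter exactly; real retrievals face observation noise and parameter interactions, which is why the paper assesses the retrieved values by their root-zone predictive skill.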
Lohmann, Philipp; Stoffels, Gabriele; Ceccon, Garry; Rapp, Marion; Sabel, Michael; Filss, Christian P; Kamp, Marcel A; Stegmayr, Carina; Neumaier, Bernd; Shah, Nadim J; Langen, Karl-Josef; Galldiks, Norbert
2017-07-01
We investigated the potential of textural feature analysis of O-(2-[18F]fluoroethyl)-L-tyrosine (18F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic 18F-FET PET. Tumour-to-brain ratios (TBRs) of 18F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC), were evaluated on dynamic PET data from 0-50 min post-injection. Diagnostic accuracy of investigated parameters and combinations thereof to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81 % for TBRmean alone to 85 % when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83 % and increased to 85 % after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70 % for kinetic pattern alone and increased to 83 % when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic 18F-FET PET scans. • Textural feature analysis provides quantitative information about tumour heterogeneity • Textural features help improve discrimination between brain metastasis recurrence and radiation injury • Textural features might be helpful to further understand tumour heterogeneity • Analysis does not require a more time-consuming dynamic PET acquisition.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (i.e., when in expectation each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
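Maximum-likelihood estimation for top-1 lists under a Luce-type Thurstone model (the estimator whose mean squared error the paper characterizes) can be sketched with plain gradient ascent on the log-likelihood. The tiny data set below is invented:

```python
import math

def luce_mle(choices, n_items, iters=2000, lr=0.1):
    """Maximum-likelihood log-strengths for the Luce choice model.
    `choices` is a list of (winner, comparison_set) top-1 observations."""
    theta = [0.0] * n_items
    for _ in range(iters):
        grad = [0.0] * n_items
        for win, cset in choices:
            z = sum(math.exp(theta[j]) for j in cset)
            for j in cset:                       # gradient: 1{j=win} - p_j
                grad[j] -= math.exp(theta[j]) / z
            grad[win] += 1.0
        for i in range(n_items):
            theta[i] += lr * grad[i]
        m = sum(theta) / n_items                 # fix the scale
        theta = [t - m for t in theta]           # (identifiability)
    return theta

# item 0 wins three of its four appearances, so it gets the top strength
choices = [(0, (0, 1)), (0, (0, 2)), (1, (1, 2)),
           (0, (0, 1, 2)), (2, (0, 2))]
theta = luce_mle(choices, 3)
```

Note the comparison sets may mix pairs and larger sets, exactly the setting that distinguishes top-1 lists from pure Bradley-Terry pair comparisons.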
Feasibility study of robotic neural controllers
NASA Technical Reports Server (NTRS)
Magana, Mario E.
1990-01-01
The results are given of a feasibility study performed to establish if an artificial neural controller could be used to achieve joint space trajectory tracking of a two-link robot manipulator. The study is based on the results obtained by Hecht-Nielsen, who claims that a functional map can be implemented to a desired degree of accuracy with a three layer feedforward artificial neural network. Central to this study is the assumption that the robot model as well as its parameter values are known.
Conde-Agudelo, A; Papageorghiou, A T; Kennedy, S H; Villar, J
2013-05-01
Several biomarkers for predicting intrauterine growth restriction (IUGR) have been proposed in recent years. However, the predictive performance of these biomarkers has not been systematically evaluated. To determine the predictive accuracy of novel biomarkers for IUGR in women with singleton gestations. Electronic databases, reference list checking and conference proceedings. Observational studies that evaluated the accuracy of novel biomarkers proposed for predicting IUGR. Data were extracted on characteristics, quality and predictive accuracy from each study to construct 2×2 tables. Summary receiver operating characteristic curves, sensitivities, specificities and likelihood ratios (LRs) were generated. A total of 53 studies, including 39,974 women and evaluating 37 novel biomarkers, fulfilled the inclusion criteria. Overall, the predictive accuracy of angiogenic factors for IUGR was minimal (median pooled positive and negative LRs of 1.7, range 1.0-19.8; and 0.8, range 0.0-1.0, respectively). Two small case-control studies reported high predictive values for placental growth factor and angiopoietin-2 only when IUGR was defined as birthweight centile with clinical or pathological evidence of fetal growth restriction. Biomarkers related to endothelial function/oxidative stress, placental protein/hormone, and others such as serum levels of vitamin D, urinary albumin:creatinine ratio, thyroid function tests and metabolomic profile had low predictive accuracy. None of the novel biomarkers evaluated in this review are sufficiently accurate to recommend their use as predictors of IUGR in routine clinical practice. However, the use of biomarkers in combination with biophysical parameters and maternal characteristics could be more useful and merits further research. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.
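The pooled positive and negative likelihood ratios reported above are computed from each study's 2×2 table of test results against outcomes. A minimal sketch with an invented table (not counts from the review):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)              # true positive rate
    spec = tn / (tn + fp)              # true negative rate
    lr_pos = sens / (1 - spec)         # LR+: how much a positive test helps
    lr_neg = (1 - sens) / spec         # LR-: how much a negative test helps
    return sens, spec, lr_pos, lr_neg

# hypothetical biomarker study: 50 IUGR cases, 450 controls
sens, spec, lrp, lrn = diagnostic_metrics(tp=30, fp=90, fn=20, tn=360)
```

An LR+ near 1 (as for most angiogenic factors in the review, median 1.7) means a positive result barely shifts the pre-test probability, which is why the authors judge the biomarkers clinically unhelpful on their own.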
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
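The equation-error method mentioned above reduces to linear least squares when the model is linear in the stability and control derivatives: the measured state derivative is regressed directly on the measured states and controls. A sketch with an invented three-parameter pitching-moment model and synthetic data:

```python
def equation_error_fit(alpha, q, delta_e, q_dot):
    """Equation-error estimate for the illustrative linear model
    q_dot = M_alpha*alpha + M_q*q + M_de*delta_e
    by least squares on measured state/control time histories."""
    rows = list(zip(alpha, q, delta_e))
    n = 3
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, q_dot)) for i in range(n)]
    # the normal-equation matrix is symmetric positive definite,
    # so plain Gaussian elimination needs no pivoting here
    for k in range(n):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

true = (-2.0, -0.5, -1.2)                       # made-up derivatives
alpha = [0.1, 0.2, 0.05, 0.3, 0.15]
q     = [0.01, -0.02, 0.03, 0.0, -0.01]
de    = [0.0, 0.05, -0.05, 0.02, 0.01]
qdot  = [true[0]*a + true[1]*w + true[2]*d for a, w, d in zip(alpha, q, de)]
m_alpha, m_q, m_de = equation_error_fit(alpha, q, de, qdot)
```

With noise-free synthetic measurements the derivatives are recovered exactly; the abstract's point is that with sufficiently accurate real measurements, this simple regression agrees closely with the more elaborate output-error method.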
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased with an error of 24%.
Amaral, Jorge L M; Lopes, Agnaldo J; Jansen, José M; Faria, Alvaro C D; Melo, Pedro L
2013-12-01
The purpose of this study was to develop an automatic classifier to increase the accuracy of the forced oscillation technique (FOT) for diagnosing early respiratory abnormalities in smoking patients. The data consisted of FOT parameters obtained from 56 volunteers, 28 healthy and 28 smokers with low tobacco consumption. Many supervised learning techniques were investigated, including logistic linear classifiers, k nearest neighbor (KNN), neural networks and support vector machines (SVM). To evaluate performance, the ROC curve of the most accurate parameter was established as baseline. To determine the best input features and classifier parameters, we used genetic algorithms and a 10-fold cross-validation using the average area under the ROC curve (AUC). In the first experiment, the original FOT parameters were used as input. We observed a significant improvement in accuracy (KNN=0.89 and SVM=0.87) compared with the baseline (0.77). The second experiment performed a feature selection on the original FOT parameters. This selection did not cause any significant improvement in accuracy, but it was useful in identifying more adequate FOT parameters. In the third experiment, we performed a feature selection on the cross products of the FOT parameters. This selection resulted in a further increase in AUC (KNN=SVM=0.91), which allows for high diagnostic accuracy. In conclusion, machine learning classifiers can help identify early smoking-induced respiratory alterations. The use of FOT cross products and the search for the best features and classifier parameters can markedly improve the performance of machine learning classifiers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
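Two ingredients of the pipeline above, the area under the ROC curve as the fitness measure and cross products of FOT parameters as candidate features, can be sketched directly. The scores and feature vector below are invented:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a positive case outscores a negative one."""
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def cross_products(features):
    """Augment a feature vector with all pairwise products, a simple
    stand-in for the FOT cross-product features used in the study."""
    n = len(features)
    return list(features) + [features[i] * features[j]
                             for i in range(n) for j in range(i, n)]

a = auc([0.9, 0.8, 0.7], [0.6, 0.8, 0.4])   # hypothetical classifier scores
```

Cross products let a linear-in-features classifier capture pairwise interactions between FOT parameters, which is one plausible reading of why they raised the AUC in the third experiment.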
On a fast calculation of structure factors at a subatomic resolution.
Afonine, P V; Urzhumtsev, A
2004-01-01
In the last decade, the progress of protein crystallography allowed several protein structures to be solved at a resolution higher than 0.9 A. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very time expensive when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of the CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computer efficiency of structure-factor calculation is studied. 
It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
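The two routes compared above, direct summation versus an intermediate electron density, can be sketched in one dimension with Gaussian atoms. The positions, electron counts, and width B are invented, and real calculations are three-dimensional and use FFTs rather than this naive DFT:

```python
import cmath, math

atoms = [(0.13, 6.0), (0.57, 8.0)]   # (fractional coordinate, electron count)
B = 0.002                            # Gaussian variance of each atom (made up)

def f_direct(h):
    """Structure factor by the direct formula: analytic Gaussian transform."""
    return sum(z * math.exp(-2 * (math.pi * h) ** 2 * B)
               * cmath.exp(2j * math.pi * h * x) for x, z in atoms)

def f_via_density(h, n=256):
    """Same F(h) via an intermediate sampled electron density and a DFT
    term; grid step and atom truncation control the accuracy."""
    def circ(d):                     # shortest periodic distance in the cell
        d %= 1.0
        return min(d, 1.0 - d)
    rho = [sum(z / math.sqrt(2 * math.pi * B)
               * math.exp(-circ(k / n - x) ** 2 / (2 * B)) for x, z in atoms)
           for k in range(n)]
    return sum(r * cmath.exp(2j * math.pi * h * k / n)
               for k, r in enumerate(rho)) / n
```

With this grid step the density route reproduces the direct values to high accuracy; coarsening the grid (the trade-off the paper studies) introduces aliasing error while reducing cost.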
Energy density functional on a microscopic basis
NASA Astrophysics Data System (ADS)
Baldo, M.; Robledo, L.; Schuck, P.; Viñas, X.
2010-06-01
In recent years impressive progress has been made in the development of highly accurate energy density functionals, which allow us to treat medium-heavy nuclei. In this approach one tries to describe not only the ground state but also the first relevant excited states. In general, higher accuracy requires a larger set of parameters, which must be carefully chosen to avoid redundancy. Following this line of development, it is unavoidable that the connection of the functional with the bare nucleon-nucleon interaction becomes more and more elusive. In principle, the construction of a density functional from a density matrix expansion based on the effective nucleon-nucleon interaction is possible, and indeed the approach has been followed by a few authors. However, to what extent a density functional based on such a microscopic approach can reach the accuracy of the fully phenomenological ones remains an open question. A related question is to establish which part of a functional can be actually derived by a microscopic approach and which part, in contrast, must be left as purely phenomenological. In this paper we discuss the main problems that are encountered when the microscopic approach is followed. To this purpose we will use the method we have recently introduced to illustrate the different aspects of these problems. In particular we will discuss the possible connection of the density functional with the nuclear matter equation of state and the distinct finite-size effects typical of nuclei.
Accuracy of the Microsoft Kinect for measuring gait parameters during treadmill walking.
Xu, Xu; McGorry, Raymond W; Chou, Li-Shan; Lin, Jia-Hua; Chang, Chien-Chi
2015-07-01
The measurement of gait parameters normally requires motion tracking systems combined with force plates, which limits the measurement to laboratory settings. In some recent studies, the possibility of using the portable, low cost, and marker-less Microsoft Kinect sensor to measure gait parameters on over-ground walking has been examined. The current study further examined the accuracy level of the Kinect sensor for assessment of various gait parameters during treadmill walking under different walking speeds. Twenty healthy participants walked on the treadmill and their full body kinematics data were measured by a Kinect sensor and a motion tracking system, concurrently. Spatiotemporal gait parameters and knee and hip joint angles were extracted from the two devices and were compared. The results showed that the accuracy levels when using the Kinect sensor varied across the gait parameters. Average heel strike frame errors were 0.18 and 0.30 frames for the right and left foot, respectively, while average toe off frame errors were -2.25 and -2.61 frames, respectively, across all participants and all walking speeds. The temporal gait parameters based purely on heel strike have less error than the temporal gait parameters based on toe off. The Kinect sensor can follow the trend of the joint trajectories for the knee and hip joints, though there was substantial error in magnitudes. The walking speed was also found to significantly affect the identified timing of toe off. The results of the study suggest that the Kinect sensor may be used as an alternative device to measure some gait parameters for treadmill walking, depending on the desired accuracy level. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
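Gait-event timing of the kind evaluated above can be sketched by taking heel strikes as local minima of the vertical heel trajectory. This heuristic and the synthetic 30 fps trajectory below are illustrative only, not the event-detection rule used in the study:

```python
import math

def detect_heel_strikes(heel_y, fps=30.0, min_gap=15):
    """Heel-strike frames as local minima of the vertical heel trajectory,
    with a refractory gap (in frames) to suppress jitter; returns the
    strike frames and the step times between them in seconds."""
    strikes = []
    for i in range(1, len(heel_y) - 1):
        if heel_y[i] <= heel_y[i - 1] and heel_y[i] < heel_y[i + 1]:
            if not strikes or i - strikes[-1] >= min_gap:
                strikes.append(i)
    step_times = [(b - a) / fps for a, b in zip(strikes, strikes[1:])]
    return strikes, step_times

# synthetic heel height: minima every 30 frames (1 s step time at 30 fps)
heel_y = [0.05 + 0.04 * abs(math.sin(math.pi * i / 30)) for i in range(120)]
strikes, steps = detect_heel_strikes(heel_y)
```

A frame error of the size reported for toe off (about -2.5 frames) corresponds to roughly 80 ms at 30 fps, which is why heel-strike-based temporal parameters fared better in the comparison.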
Modeling the bidirectional reflectance distribution function of mixed finite plant canopies and soil
NASA Technical Reports Server (NTRS)
Schluessel, G.; Dickinson, R. E.; Privette, J. L.; Emery, W. J.; Kokaly, R.
1994-01-01
An analytical model of the bidirectional reflectance for optically semi-infinite plant canopies has been extended to describe the reflectance of finite-depth canopies with contributions from the underlying soil. The model depends on 10 independent parameters describing vegetation and soil optical and structural properties. The model is inverted with a nonlinear minimization routine using directional reflectance data for lawn (leaf area index (LAI) is equal to 9.9), soybeans (LAI, 2.9) and simulated reflectance data (LAI, 1.0) from a numerical bidirectional reflectance distribution function (BRDF) model (Myneni et al., 1988). While the ten-parameter model results in relatively low rms differences for the BRDF, most of the retrieved parameters exhibit poor stability. The most stable parameter was the single-scattering albedo of the vegetation. Canopy albedo could be derived with an accuracy of less than 5% relative error in the visible and less than 1% in the near-infrared. Sensitivity analyses were performed to determine which of the 10 parameters were most important and to assess the effects of Gaussian noise on the parameter retrievals. Out of the 10 parameters, three were identified which described most of the BRDF variability. At low LAI values the most influential parameters were the single-scattering albedos (both soil and vegetation) and LAI, while at higher LAI values (greater than 2.5) these shifted to the two scattering phase function parameters for vegetation and the single-scattering albedo of the vegetation. The three-parameter model, formed by fixing the seven least significant parameters, gave higher rms values but was less sensitive to noise in the BRDF than the full ten-parameter model. A full hemispherical reflectance data set for lawn was then interpolated to yield BRDF values corresponding to advanced very high resolution radiometer (AVHRR) scan geometries collected over a period of nine days. 
The resulting parameters and BRDFs are similar to those for the full sampling geometry, suggesting that the limited geometry of AVHRR measurements might be used to reliably retrieve BRDF and canopy albedo with this model.
Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field
Luo, Gang; Min, Wanli
2007-01-01
Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using electroencephalogram (EEG) signal based on a recently developed statistical pattern recognition method, conditional random field (CRF), and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject s_new with limited training data D_new, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from D_new a regulated estimate of the CRF's parameters. Using sleep recordings from human subjects, we show that even without any D_new, our sleep stager can achieve an average classification accuracy of 70% on s_new. This accuracy increases with the size of D_new and eventually becomes close to the theoretical limit. PMID:18693884
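The "regulated estimate" idea, fitting the new subject's data while penalizing distance from parameters learned on old subjects, can be sketched with a one-parameter logistic model standing in for the CRF. The data and the penalty weight `lam` are invented:

```python
import math

def adapt_logistic(theta_old, xs, ys, lam=2.0, lr=0.1, steps=2000):
    """Gradient ascent on the new subject's log-likelihood plus a
    quadratic pull toward theta_old (the old-subject estimate).
    Generic sketch of regularized adaptation, not the paper's CRF update."""
    theta = theta_old
    for _ in range(steps):
        grad = sum((y - 1 / (1 + math.exp(-theta * x))) * x
                   for x, y in zip(xs, ys))
        grad -= lam * (theta - theta_old)   # regularize toward old subjects
        theta += lr * grad / max(len(xs), 1)
    return theta

theta_old = 1.0                             # learned from old subjects
xs, ys = [2.0, -2.0, 1.5, -1.0], [1, 0, 1, 0]   # small, separable D_new
theta_new = adapt_logistic(theta_old, xs, ys)
```

With no new data the estimate stays at `theta_old`, matching the paper's zero-data behavior; as D_new grows, the data term dominates the penalty and the estimate moves toward the subject-specific fit (here the penalty also keeps a perfectly separable D_new from driving the weight to infinity).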
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization(IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
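The hyperparameter-search step above can be sketched with a plain particle swarm. The paper uses an improved variant (IPSO), and the real objective would be cross-validated LSSVM error over its regularization and kernel parameters; here a made-up quadratic stands in:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal global-best particle swarm optimizer (standard PSO, not the
    paper's IPSO) minimizing `objective` over box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [list(x) for x in X]                        # personal bests
    p_val = [objective(x) for x in X]
    g_val = min(p_val)
    g = list(P[p_val.index(g_val)])                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d],
                                  bounds[d][0]), bounds[d][1])
            v = objective(X[i])
            if v < p_val[i]:
                P[i], p_val[i] = list(X[i]), v
                if v < g_val:
                    g, g_val = list(X[i]), v
    return g, g_val

# stand-in "validation error" with its optimum at (3.0, 0.5)
err = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2
best, val = pso(err, [(0.0, 10.0), (0.01, 2.0)])
```

In the paper's pipeline the two swarm dimensions would be the LSSVM regularization and kernel-width parameters, and each objective evaluation would train and validate an LSSVM.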
Increased Accuracy of Ligand Sensing by Receptor Internalization and Lateral Receptor Diffusion
NASA Astrophysics Data System (ADS)
Aquino, Gerardo; Endres, Robert
2010-03-01
Many types of cells can sense external ligand concentrations with cell-surface receptors at extremely high accuracy. Interestingly, ligand-bound receptors are often internalized, a process also known as receptor-mediated endocytosis. While internalization is involved in a vast number of functions important for the life of a cell, it was recently also suggested to increase the accuracy of sensing ligand, as overcounting of the same ligand molecules is reduced. A similar role may be played by receptor diffusion on the cell membrane. Fast, lateral receptor diffusion is known to be relevant in neurotransmission initiated by release of the neurotransmitter glutamate in the synaptic cleft between neurons. By binding ligand and then being removed by diffusion from the region of neurotransmitter release, diffusing receptors can reasonably be expected to reduce the local overcounting of the same ligand molecules in the region of signaling. By extending simple ligand-receptor models to out-of-equilibrium thermodynamics, we show that both receptor internalization and lateral diffusion increase the accuracy with which cells can measure ligand concentrations in the external environment. We confirm this with our model and give quantitative predictions for experimental parameter values, which compare favorably to data for real receptors.
Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio
NASA Technical Reports Server (NTRS)
Thomas, James
2008-01-01
Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
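Gradient reconstruction of the kind evaluated above is often done by least squares over neighboring cells. Below is a minimal unweighted 2D least-squares sketch (the paper's approximate-mapped-least-squares method additionally involves a mapping and a distance function not reproduced here); the fit is exact for linear fields even on high-aspect-ratio stencils:

```python
def lsq_gradient(p0, u0, neighbors):
    """Unweighted least-squares gradient at point p0 with value u0.
    neighbors is a list of ((x, y), u) pairs; the gradient minimizes
    sum_i (u_i - u0 - g . (x_i - x0))**2 via the 2x2 normal equations."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), u in neighbors:
        dx, dy, du = x - p0[0], y - p0[1], u - u0
        a11 += dx * dx
        a12 += dx * dy
        a22 += dy * dy
        b1 += dx * du
        b2 += dy * du
    det = a11 * a22 - a12 * a12  # assumes a non-degenerate stencil
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)
```

On a stencil with mesh spacing 1 in x and 0.001 in y (aspect ratio 1000), a linear field is still recovered exactly up to rounding; the accuracy issues the study documents arise for nonlinear solutions and curved geometries.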
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
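The single-target single-hit model referred to above is commonly written as a saturating exponential in dose. The parameter names below follow the abstract (background, saturation, slope), but the exact parameterization is an assumption for illustration:

```python
import math

def film_od(dose, background, saturation, slope):
    """Single-target single-hit film response: net optical density
    rises exponentially toward saturation with increasing dose."""
    return background + saturation * (1.0 - math.exp(-slope * dose))

def film_dose(od, background, saturation, slope):
    """Invert the response curve to recover dose from a measured OD;
    valid only for od between background and background + saturation."""
    return -math.log(1.0 - (od - background) / saturation) / slope
```

A calibration then amounts to fitting the three parameters to one or more measured (dose, OD) points, which is why the study can get by with as few as one to three dose points per curve.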
Rao, Harsha L; Addepalli, Uday K; Yadav, Ravi K; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S
2014-03-01
To evaluate the effect of scan quality on the diagnostic accuracies of optic nerve head (ONH), retinal nerve fiber layer (RNFL), and ganglion cell complex (GCC) parameters of spectral-domain optical coherence tomography (SD OCT) in glaucoma. Cross-sectional study. Two hundred fifty-two eyes of 183 control subjects (mean deviation [MD]: -1.84 dB) and 207 eyes of 159 glaucoma patients (MD: -7.31 dB) underwent ONH, RNFL, and GCC scanning with SD OCT. Scan quality of SD OCT images was based on signal strength index (SSI) values. Influence of SSI on diagnostic accuracy of SD OCT was evaluated by receiver operating characteristic (ROC) regression. Diagnostic accuracies of all SD OCT parameters were better when the SSI values were higher. This effect was statistically significant (P < .05) for ONH and RNFL but not for GCC parameters. In mild glaucoma (MD of -5 dB), area under ROC curve (AUC) for rim area, average RNFL thickness, and average GCC thickness parameters improved from 0.651, 0.678, and 0.726, respectively, at an SSI value of 30 to 0.873, 0.962, and 0.886, respectively, at an SSI of 70. AUCs of the same parameters in advanced glaucoma (MD of -15 dB) improved from 0.747, 0.890, and 0.873, respectively, at an SSI value of 30 to 0.922, 0.994, and 0.959, respectively, at an SSI of 70. Diagnostic accuracies of SD OCT parameters in glaucoma were significantly influenced by the scan quality even when the SSI values were within the manufacturer-recommended limits. These results should be considered while interpreting the SD OCT scans for glaucoma. Copyright © 2014 Elsevier Inc. All rights reserved.
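The AUC values reported above can be read through the rank (Mann-Whitney) formulation of the area under the ROC curve. This minimal sketch computes a plain empirical AUC, not the ROC regression used in the study, and assumes scores are oriented so that higher means more glaucoma-like:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC as the Mann-Whitney statistic: the probability that
    a randomly chosen diseased eye scores higher than a randomly chosen
    control eye, with ties counted as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance discrimination and 1.0 to perfect separation, which is the scale on which the reported improvements (e.g., 0.678 to 0.962 for average RNFL thickness) should be read.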
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
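The conventional 3-2-1-1 input mentioned above is a multistep signal of alternating-sign pulses whose durations are in the ratio 3:2:1:1. A minimal sampler follows; the amplitude, base pulse width, and sample interval are illustrative parameters, not values from the flight tests:

```python
def multistep_3211(amplitude, unit_width, dt):
    """Sample a 3-2-1-1 multistep input: alternating-sign pulses with
    widths of 3, 2, 1, and 1 times a base pulse width unit_width,
    sampled every dt seconds."""
    samples = []
    sign = 1.0
    for width in (3, 2, 1, 1):
        n = round(width * unit_width / dt)
        samples.extend([sign * amplitude] * n)
        sign = -sign  # alternate pulse polarity
    return samples
```

The base pulse width is conventionally chosen so the input excites frequencies around the aircraft modes of interest; the optimal inputs of the paper are designed rather than fixed in this way.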
Movement Pattern and Parameter Learning in Children: Effects of Feedback Frequency
ERIC Educational Resources Information Center
Goh, Hui-Ting; Kantak, Shailesh S.; Sullivan, Katherine J.
2012-01-01
Reduced feedback during practice has been shown to be detrimental to movement accuracy in children but not in young adults. We hypothesized that the reduced accuracy is attributable to reduced movement parameter learning, but not pattern learning, in children. A rapid arm movement task that required the acquisition of a motor pattern scaled to…
Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan; Deng, Weiling
2010-01-01
To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…
NASA Astrophysics Data System (ADS)
Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.
2015-12-01
In this study, we have evaluated the performance of size distribution functions (SDF) with 2 and 3 moments in fitting the observed size distributions of rain droplets at three different heights. The goal is to improve the microphysics schemes in mesoscale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution, i.e., a Gamma SDF with a fixed shape parameter (FSP); Gamma SDFs with the shape parameter diagnosed by three methods, based on Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08); a Gamma SDF obtained by solving for the shape parameter (SSP); and the Lognormal SDF. Based on the preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying an FSP was 10⁻² when fitting the 0-order moment of the observed rain droplet distribution, and rose to 10⁻¹ and 10⁰ for the 1-4 order and 5-6 order moments, respectively. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods improved the fitting accuracies for the 0-6 order moments, especially the method coupling SSP and DSPS08, which gave average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. The fitting accuracy was strongly sensitive to the choice of moment group.
When the ensemble method coupling SSP and DSPS08 was used, a better fit to the 1-3-5 moment group of the SDF was possible compared with fitting the 0-3-6 moment group.
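For a gamma SDF N(D) = N0 D^mu exp(-lam D), the k-th moment has a closed form, and the shape parameter can be diagnosed from a dimensionless moment ratio in which N0 and lam cancel. The specific diagnostic relations of DSPM05, DSPM10 and DSPS08 differ from the simple M2-M3-M4 closure sketched below, which is shown only to illustrate the idea:

```python
import math

def gamma_sdf_moment(k, n0, mu, lam):
    """k-th moment of a gamma size distribution N(D) = n0 * D**mu * exp(-lam*D):
    M_k = n0 * Gamma(mu + k + 1) / lam**(mu + k + 1)."""
    return n0 * math.gamma(mu + k + 1.0) / lam ** (mu + k + 1.0)

def shape_from_moments(m2, m3, m4):
    """Diagnose the shape parameter from the dimensionless ratio
    r = M3**2 / (M2 * M4) = (mu + 3) / (mu + 4); n0 and lam cancel,
    so mu = (4r - 3) / (1 - r)."""
    r = m3 * m3 / (m2 * m4)
    return (4.0 * r - 3.0) / (1.0 - r)
```

The overflow threshold reported in the abstract (shape parameter limited to 0-8) reflects the rapid growth of the gamma function in expressions like the one above.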
Acceleration Disturbances onboard of Geodetic Precision Space Laboratories
NASA Astrophysics Data System (ADS)
Peterseim, Nadja; Jakob, Flury; Schlicht, Anja
Bartlomiej Oszczak, b@dgps.pl, University of Warmia and Mazury in Olsztyn, Olsztyn, Poland; Olga Maciejczyk, omaciejczyk@gmail.com, Poland. This paper presents a study of the parameters of the ASG-EUPOS real-time RTK service NAWGEO, namely accuracy, availability, integrity and continuity. The authors' model is used for the tests. These parameters enable determination of the quality of the received information and of the practical applications of the service. The paper also covers the NAWGEO service itself and the algorithms used in determining the parameters mentioned. The results of the accuracy and precision analyses and the study of availability demonstrated that the NAWGEO service enables a user to determine position with an accuracy of a few centimeters, with high probability, at any moment in time.
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments being based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in the determination of the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria.
Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
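The robust objective functions mentioned above replace the quadratic L2 penalty with a loss that grows more slowly for large residuals, so a few gross errors cannot dominate the fit. Below is a minimal sketch of a Cauchy (Lorentzian) M-estimator loss next to the L2 loss; the tuning constant c = 2.385 is a conventional default, not a value from this study:

```python
import math

def l2_loss(residuals):
    """Classical least-squares objective: sum of squared residuals."""
    return sum(r * r for r in residuals)

def cauchy_loss(residuals, c=2.385):
    """Cauchy M-estimator objective: behaves like r**2 / 2 for small
    residuals but grows only logarithmically for large ones, so a
    single outlier cannot dominate the total."""
    return sum((c * c / 2.0) * math.log(1.0 + (r / c) ** 2) for r in residuals)
```

With ten residuals of 0.1 and one outlier of 100, the L2 objective is essentially the outlier's squared value, while the Cauchy objective stays bounded, which is the behavioral-threshold effect the study exploits.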
NASA Astrophysics Data System (ADS)
Khurshudyan, M.; Mazhari, N. S.; Momeni, D.; Myrzakulov, R.; Raza, M.
2015-02-01
The subject of this paper is the weak-field regime of the covariant scalar-tensor-vector gravity (STVG) theory, known as MOdified Gravity (MOG). First, we show that MOG in the absence of scalar fields reduces to Λ(t), G(t) models. The time evolution of the cosmological parameters for a family of viable models has been investigated, and the numerical results have been adjusted to the cosmological data. We introduce a model for the dark energy (DE) density and the cosmological constant that involves first-order derivatives of the Hubble parameter. To extend this model, correction terms including the gravitational constant are added. In our scenario, the cosmological constant is a function of time. To complete the model, interaction terms between dark energy and dark matter (DM) are entered manually in phenomenological form. Instead of using the dust model for DM, we propose DM equivalent to a barotropic fluid, whose time evolution is a function of the other cosmological parameters. Using sophisticated algorithms, the behavior of various quantities, including the densities and the Hubble parameter, has been investigated graphically. The statefinder parameters have been used for the classification of DE models. Consistency of the numerical results with the SNeIa + BAO + CMB experimental data is studied by numerical analysis with high accuracy.
Gender-Related Differences in Pelvic Morphometrics of the Retriever Dog Breed.
Nganvongpanit, K; Pitakarnnop, T; Buddhachat, K; Phatsara, M
2017-02-01
This study presents the results from a morphometric analysis of 52 dry Retriever dog pelvic bones (30 male, 22 female). A total of 20 parameters were measured using an osteometric board and digital vernier caliper. Six parameters were found to be significantly higher (P < 0.05) in males than in females, while one parameter was significantly higher (P < 0.05) in females than in males. However, none of the measured parameters demonstrated clear cut-off values with no intersect between males and females. Therefore, we performed a stepwise discriminant analysis on all 20 parameters in order to develop a possible working equation to discriminate gender from a dog pelvic bone. Stepwise discriminant analysis was used to create a discrimination function: Y = [82.1*PS/AII] - [50.72*LIS/LI] - [23.09*OTD/SP] + [7.69*SP/IE] + [6.52*IC/OW] + [7.67*ISA/OW] + [20.77*AII/PS] + [504.71*OW/ISA] - [90.84*PS/ISA] - [148.95], which showed an accuracy rate of 86.27%. This is the first study presenting an equation/function for use in discriminating gender from a dog's pelvic measurements. The results can be used in veterinary forensic anthropology and also show that a dog's pelvis presents sexual dimorphism, as in humans. © 2016 Blackwell Verlag GmbH.
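The published discriminant function can be evaluated directly from the ratio terms. The helper below is a hypothetical convenience wrapper around the abstract's coefficients; the abstract does not state a decision cutoff, so no classification threshold is applied:

```python
def pelvis_score(m):
    """Discriminant score Y from the published stepwise function.
    m maps the parameter abbreviations used in the abstract
    (PS, AII, LIS, LI, OTD, SP, IE, IC, OW, ISA) to measurements."""
    return (82.1 * m["PS"] / m["AII"]
            - 50.72 * m["LIS"] / m["LI"]
            - 23.09 * m["OTD"] / m["SP"]
            + 7.69 * m["SP"] / m["IE"]
            + 6.52 * m["IC"] / m["OW"]
            + 7.67 * m["ISA"] / m["OW"]
            + 20.77 * m["AII"] / m["PS"]
            + 504.71 * m["OW"] / m["ISA"]
            - 90.84 * m["PS"] / m["ISA"]
            - 148.95)
```

Because every term is a ratio of two measurements, the score is invariant to the overall scale of the bone, which is what lets the function separate the sexes rather than merely large from small animals.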
Wu, Lingtao; Lord, Dominique
2017-05-01
This study further examined the use of regression models for developing crash modification factors (CMFs), specifically focusing on the misspecification in the link function. The primary objectives were to validate the accuracy of CMFs derived from the commonly used regression models (i.e., generalized linear models or GLMs with additive linear link functions) when some of the variables have nonlinear relationships and quantify the amount of bias as a function of the nonlinearity. Using the concept of artificial realistic data, various linear and nonlinear crash modification functions (CM-Functions) were assumed for three variables. Crash counts were randomly generated based on these CM-Functions. CMFs were then derived from regression models for three different scenarios. The results were compared with the assumed true values. The main findings are summarized as follows: (1) when some variables have nonlinear relationships with crash risk, the CMFs for these variables derived from the commonly used GLMs are all biased, especially around areas away from the baseline conditions (e.g., boundary areas); (2) with the increase in nonlinearity (i.e., nonlinear relationship becomes stronger), the bias becomes more significant; (3) the quality of CMFs for other variables having linear relationships can be influenced when mixed with those having nonlinear relationships, but the accuracy may still be acceptable; and (4) the misuse of the link function for one or more variables can also lead to biased estimates for other parameters. This study raised the importance of the link function when using regression models for developing CMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Accurate formula for gaseous transmittance in the infrared.
Gibson, G A; Pierluissi, J H
1971-07-01
By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO2 over intervals of 50 cm⁻¹ between 550 cm⁻¹ and 9150 cm⁻¹ and to water vapor over similar intervals between 1050 cm⁻¹ and 9950 cm⁻¹, with mean rms deviations from the original data being 2.30 × 10⁻³ and 1.83 × 10⁻³, respectively.
Unified analytic representation of physical sputtering yield
NASA Astrophysics Data System (ADS)
Janev, R. K.; Ralchenko, Yu. V.; Kenmotsu, T.; Hosaka, K.
2001-03-01
Generalized energy parameter η = η(ε, δ) and normalized sputtering yield Ỹ(η), where ε = E/E_TF and δ = E_th/E_TF, are introduced to achieve a unified representation of all available experimental sputtering data at normal ion incidence. The sputtering data in the new Ỹ(η) representation retain their original uncertainties. The Ỹ(η) data can be fitted to a simple three-parameter analytic expression with an rms deviation of 32%, well within the uncertainties of the original data. Both η and Ỹ(η) have correct physical behavior in the threshold and high-energy regions. The available theoretical data produced by the TRIM.SP code can also be represented by the same single analytic function Ỹ(η) with a similar accuracy.
NASA Astrophysics Data System (ADS)
Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.
2017-07-01
An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.
Longevity and aging. Mechanisms and perspectives.
Labat-Robert, J; Robert, L
2015-12-01
Longevity can mostly be determined with relative accuracy from birth and death registers when available. Aging is a multifactorial process, much more difficult to quantitate. Every measurable physiological function declines at its own specific rate over a wide range. The mechanisms involved are also different; genetic factors are of importance for determining longevity. The best-known genes involved are the Sirtuins, active at the genetic and epigenetic level. Aging is multifactorial, not "coded" in the genome. There are, however, a number of well-studied physical and biological parameters involved in aging, which can be determined and quantitated. We shall try to identify parameters affecting longevity as well as aging and suggest some reasonable predictions for the future. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.
2017-02-01
In this paper, the boundary layer flow and heat transfer of unsteady flow over a porous accelerating stretching surface in the presence of the velocity slip and temperature jump effects are investigated numerically. A new effective collocation method based on rational Bernstein functions is applied to solve the governing system of nonlinear ordinary differential equations. This method solves the problem on the semi-infinite domain without truncating or transforming it to a finite domain. In addition, the presented method reduces the solution of the problem to the solution of a system of algebraic equations. Graphical and tabular results are presented to investigate the influence of the unsteadiness parameter A, Prandtl number Pr, suction parameter fw, velocity slip parameter γ and thermal slip parameter φ on the velocity and temperature profiles of the fluid. The numerical experiments are reported to show the accuracy and efficiency of the novel proposed computational procedure. Comparisons of present results are made with those obtained by previous works and show excellent agreement.
Discrimination of serum Raman spectroscopy between normal and colorectal cancer
NASA Astrophysics Data System (ADS)
Li, Xiaozhou; Yang, Tianyue; Yu, Ting; Li, Siqi
2011-07-01
Raman spectroscopy of tissues has been widely studied for the diagnosis of various cancers, but biofluids have seldom been used as the analyte because of the low analyte concentrations. Here, Raman spectra of serum from 30 normal subjects, 46 colon cancer patients, and 44 rectum cancer patients were measured and analyzed. The information in the Raman peaks (intensity and width) and in the fluorescence background (baseline function coefficients) was selected as parameters for statistical analysis. Principal component regression (PCR) and partial least squares regression (PLSR) were applied separately to the selected parameters to evaluate their performance; PCR performed better than PLSR on our spectral data. Linear discriminant analysis (LDA) was then applied to the principal components (PCs) obtained from the two regression methods on the selected parameters, and diagnostic accuracies of 88% and 83% were obtained. The conclusion is that the selected features preserve the information of the original spectra well, and Raman spectroscopy of serum has potential for the diagnosis of colorectal cancer.
Improved Peptide and Protein Torsional Energetics with the OPLS-AA Force Field.
Robertson, Michael J; Tirado-Rives, Julian; Jorgensen, William L
2015-07-14
The development and validation of new peptide dihedral parameters are reported for the OPLS-AA force field. High accuracy quantum chemical methods were used to scan φ, ψ, χ1, and χ2 potential energy surfaces for blocked dipeptides. New Fourier coefficients for the dihedral angle terms of the OPLS-AA force field were fit to these surfaces, utilizing a Boltzmann-weighted error function and systematically examining the effects of weighting temperature. To prevent overfitting to the available data, a minimal number of new residue-specific and peptide-specific torsion terms were developed. Extensive experimental solution-phase and quantum chemical gas-phase benchmarks were used to assess the quality of the new parameters, named OPLS-AA/M, demonstrating significant improvement over previous OPLS-AA force fields. A Boltzmann weighting temperature of 2000 K was determined to be optimal for fitting the new Fourier coefficients for dihedral angle parameters. Conclusions are drawn from the results for best practices for developing new torsion parameters for protein force fields.
Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien
2013-01-01
An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.
Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien
2013-01-01
Background An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. Methods The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. Conclusions The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. PMID:23705023
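Of the six indices, the bias factor (Bf) and accuracy factor (Af) are the ones specific to predictive microbiology. Assuming the standard Ross (1996) definitions, they can be computed as:

```python
import math

def bias_factor(pred, obs):
    """Ross bias factor: 10**mean(log10(pred/obs)). Bf = 1 means
    predictions are centered on the observations; Bf > 1 means the
    model over-predicts on average."""
    n = len(pred)
    return 10.0 ** (sum(math.log10(p / o) for p, o in zip(pred, obs)) / n)

def accuracy_factor(pred, obs):
    """Ross accuracy factor: 10**mean(|log10(pred/obs)|). Af = 1.2
    means predictions differ from observations by about 20% on
    average; under- and over-predictions do not cancel."""
    n = len(pred)
    return 10.0 ** (sum(abs(math.log10(p / o)) for p, o in zip(pred, obs)) / n)
```

A model can have Bf near 1 yet a large Af when over- and under-predictions cancel, which is why the study reports both alongside MAPE, RMSE, SEP, and R².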
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weck, Philippe F.; Kim, Eunja; Greathouse, Jeffery A.
2018-03-15
Elastic and thermodynamic properties of negative thermal expansion (NTE) α-ZrW2O8 have been calculated using PBEsol and PBE exchange-correlation functionals within the framework of density functional perturbation theory (DFPT). Measured elastic constants are reproduced within ~2% with PBEsol and 6% with PBE. The thermal evolution of the Grüneisen parameter computed within the quasi-harmonic approximation exhibits negative values below the Debye temperature, consistent with observation. The standard molar heat capacity is predicted to be C°P = 192.2 and 193.8 J mol⁻¹ K⁻¹ with PBEsol and PBE, respectively. These results suggest superior accuracy of DFPT/PBEsol for studying the lattice dynamics, elasticity and thermodynamics of NTE materials.
Zimmermann, Johannes; Wright, Aidan G C
2017-01-01
The interpersonal circumplex is a well-established structural model that organizes interpersonal functioning within the two-dimensional space marked by dominance and affiliation. The structural summary method (SSM) was developed to evaluate the interpersonal nature of other constructs and measures outside the interpersonal circumplex. To date, this method has been primarily descriptive, providing no way to draw inferences when comparing SSM parameters across constructs or groups. We describe a newly developed resampling-based method for deriving confidence intervals, which allows for SSM parameter comparisons. In a series of five studies, we evaluated the accuracy of the approach across a wide range of possible sample sizes and parameter values, and demonstrated its utility for posing theoretical questions on the interpersonal nature of relevant constructs (e.g., personality disorders) using real-world data. As a result, the SSM is strengthened for its intended purpose of construct evaluation and theory building. © The Author(s) 2015.
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
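The back projection step described above, mapping extracted image points back into 3D space and comparing them with ideal plane coordinates, can be illustrated for a pinhole camera. This is only a sketch of the geometric core: the intrinsics K, pose (R, t) and pixel coordinates below are hypothetical, and the full method additionally refines all parameters by nonlinear minimization of the 3D residuals.

```python
import numpy as np

def back_project_to_plane(K, R, t, pts_px):
    """Back-project pixel points onto the calibration plane z = 0 in world
    coordinates, given intrinsics K and pose (R, t) with X_cam = R X_world + t."""
    K_inv = np.linalg.inv(K)
    r3 = R.T[2]                                # third row of R^T (world z direction)
    out = []
    for u, v in pts_px:
        ray = K_inv @ np.array([u, v, 1.0])    # viewing ray in the camera frame
        s = (r3 @ t) / (r3 @ ray)              # depth along the ray where z_world = 0
        Xw = R.T @ (s * ray - t)               # camera frame -> world frame
        out.append(Xw[:2])
    return np.array(out)
```

The 3D residuals between these back-projected points and the ideal checkerboard coordinates are what the BPP-based refinement would minimize, instead of 2D reprojection error on the image plane.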
NASA Technical Reports Server (NTRS)
Demarest, H. H., Jr.
1972-01-01
The elastic constants and the entire frequency spectrum were calculated up to high pressure for the alkali halides in the NaCl lattice, based on an assumed functional form of the inter-atomic potential. The quasi-harmonic approximation is used to calculate the vibrational contribution to the pressure and the elastic constants at arbitrary temperature. By explicitly accounting for the effect of thermal and zero-point motion, the adjustable parameters in the potential are determined to a high degree of accuracy from the elastic constants and their pressure derivatives measured at zero pressure. The calculated Grüneisen parameter, elastic constants, and their pressure derivatives are in good agreement with experimental results up to about 600 K. The model predicts that for some alkali halides the Grüneisen parameter may decrease monotonically with pressure, while for others it may increase with pressure after an initial decrease.
NASA Astrophysics Data System (ADS)
Mohan, N. S.; Kulkarni, S. M.
2018-01-01
Polymer-based composites have marked their valuable presence in the aerospace, defense and automotive industries. Components made of composites are assembled to the main structure by fasteners, which require accurate, precise, high-quality holes to be drilled. Drilling holes in composites with accuracy requires control over various process parameters, viz. speed, feed, drill bit size and specimen thickness. A TRIAC VMC machining center was used to drill the holes and to relate the cutting and machining parameters to the torque; MINITAB 14 software was used to analyze the collected data. As a function of cutting and specimen parameters, this method could be useful for predicting torque. The purpose of this work is to investigate the effect of drilling parameters to obtain a low torque value. Results show that specimen thickness and drill bit size are the significant parameters influencing the torque, while spindle speed and feed rate have the least influence; an overlaid plot indicates a feasible, low-torque region for medium to large drill bits over the range of spindle speeds selected. Response surface contour plots indicate the sensitivity of the torque to drill size and specimen thickness.
Weck, Philippe F.; Kim, Eunja
2016-09-12
The structure–property relationships of bulk CeO2 and Ce2O3 have been investigated using AM05 and PBEsol exchange–correlation functionals within the frameworks of Hubbard-corrected density functional theory (DFT+U) and density functional perturbation theory (DFPT+U). Compared with conventional PBE+U, RPBE+U, PW91+U and LDA+U functionals, AM05+U and PBEsol+U describe experimental crystalline parameters and properties of CeO2 and Ce2O3 with superior accuracy, especially when +U is chosen close to its value derived by the linear-response approach. Lastly, the present findings call for a reexamination of some of the problematic oxide materials featuring strong f- and d-electron correlation using AM05+U and PBEsol+U.
Estimating Soil Moisture Using Polsar Data: a Machine Learning Approach
NASA Astrophysics Data System (ADS)
Khedri, E.; Hasanlou, M.; Tabatabaeenejad, A.
2017-09-01
Soil moisture is an important parameter that affects several environmental processes, with key roles in numerous fields including agriculture, hydrology, aerology, flood prediction, and drought monitoring. However, field measurement of soil moisture is not feasible across vast agricultural territories, owing to its cost and to the spatial and temporal variability of soil moisture. Polarimetric synthetic aperture radar (PolSAR) imaging is a powerful tool for estimating soil moisture, providing a wide field of view and high spatial resolution. In this study, a support vector regression (SVR) model is proposed for estimating soil moisture, based on data obtained by AIRSAR in 2003 in the C, L, and P channels. In this endeavor, sequential forward selection (SFS) and sequential backward selection (SBS) are evaluated to select suitable features of the polarimetric image dataset for efficient modeling. We compare the model output with in-situ data. The results show that the SBS-SVR method yields higher modeling accuracy than the SFS-SVR model. Statistical parameters obtained from this method show an R² of 97% and an RMSE below 0.00041 m³/m³ for the P, L, and C channels, a better accuracy than that provided by other feature selection algorithms.
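The SBS-plus-SVR pipeline described above can be sketched with scikit-learn's SequentialFeatureSelector. The feature matrix and moisture values below are synthetic stand-ins for the AIRSAR-derived dataset, and the feature count, kernel and C are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# hypothetical stand-ins for PolSAR features from the C, L and P channels
X = rng.normal(size=(200, 8))
# synthetic "soil moisture" driven by two of the features
y = 0.3 * X[:, 0] - 0.2 * X[:, 3] + 0.05 * rng.normal(size=200)

Xs = StandardScaler().fit_transform(X)
svr = SVR(kernel="rbf", C=10.0)
# sequential backward selection (SBS): start from all features and repeatedly
# drop the one whose removal hurts cross-validated performance least
sbs = SequentialFeatureSelector(svr, n_features_to_select=3,
                                direction="backward", cv=3)
sbs.fit(Xs, y)
selected = np.flatnonzero(sbs.get_support())
```

Switching `direction="backward"` to `"forward"` gives the SFS variant, so the two selection strategies compared in the abstract differ by a single argument.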
Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration
Deng, Mingjun; Li, Jiansong
2017-01-01
The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e. moisture and heat diffusion), and their uncertainty, in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e. moisture and temperature) with respect to observations, and on the parameter estimates with respect to prior values, over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function approximates the posterior uncertainty of the parameter estimates. The uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters with the method of First Order Second Moment (FOSM). Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. The method can be applied to diverse climates and land surface conditions with different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
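The FOSM propagation mentioned above amounts to var(f) ≈ J Σ Jᵀ, where J is the Jacobian of the flux function with respect to the parameters and Σ their posterior covariance. A generic sketch with a finite-difference Jacobian follows; the function and parameter values are illustrative, not those of the study.

```python
import numpy as np

def fosm(f, p, cov_p, eps=1e-6):
    """First Order Second Moment propagation: approximate the covariance of
    f(p) as J @ cov_p @ J.T, with J a forward-difference Jacobian at p."""
    p = np.asarray(p, float)
    f0 = np.atleast_1d(f(p))
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (np.atleast_1d(f(p + dp)) - f0) / eps
    return J @ cov_p @ J.T
```

For a linear flux model f(p) = A p with unit parameter covariance, the propagated covariance reduces to A Aᵀ exactly, which gives a quick correctness check.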
NASA Astrophysics Data System (ADS)
Tang, Wenjun; Qin, Jun; Yang, Kun; Liu, Shaomin; Lu, Ning; Niu, Xiaolei
2016-03-01
Cloud parameters (cloud mask, effective particle radius, and liquid/ice water path) are important inputs in estimating surface solar radiation (SSR). These parameters can be derived from MODIS with high accuracy, but their temporal resolution is too low to obtain high-temporal-resolution SSR retrievals. In order to obtain hourly cloud parameters, an artificial neural network (ANN) is applied in this study to directly construct a functional relationship between MODIS cloud products and Multifunctional Transport Satellite (MTSAT) geostationary satellite signals. In addition, an efficient parameterization model for SSR retrieval is introduced; when driven with MODIS atmospheric and land products, its root mean square error (RMSE) is about 100 W m⁻² for 44 Baseline Surface Radiation Network (BSRN) stations. Once the estimated cloud parameters and other information (such as aerosol, precipitable water, and ozone) are input to the model, we can derive SSR at high spatiotemporal resolution. The retrieved SSR is first evaluated against hourly radiation data at three experimental stations in the Haihe River basin of China. The mean bias error (MBE) and RMSE of the hourly SSR estimates are 12.0 W m⁻² (or 3.5%) and 98.5 W m⁻² (or 28.9%), respectively. The retrieved SSR is also evaluated against daily radiation data at 90 China Meteorological Administration (CMA) stations. The MBE is 9.8 W m⁻² (or 5.4%); the RMSEs of the daily and monthly mean SSR estimates are 34.2 W m⁻² (or 19.1%) and 22.1 W m⁻² (or 12.3%), respectively. The accuracy is comparable to or even higher than that of two other radiation products (GLASS and ISCCP-FD), and the present method is more computationally efficient and can produce hourly SSR data at a spatial resolution of 5 km.
Constraints on the Energy Density Content of the Universe Using Only Clusters of Galaxies
NASA Technical Reports Server (NTRS)
Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark
2003-01-01
We demonstrate that it is possible to constrain the energy content of the Universe with high accuracy using observations of clusters of galaxies only. The degeneracies in the cosmological parameters are lifted by combining constraints from different observables of galaxy clusters. We show that constraints on cosmological parameters from galaxy cluster number counts as a function of redshift and from accurate angular diameter distance measurements to clusters are complementary, and that their combination can constrain the energy density content of the Universe well. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zeldovich effect) surveys; the angular diameter distances can be determined from deep observations of the intra-cluster gas using its thermal bremsstrahlung X-ray emission and the SZ effect (X-SZ method). In this letter we combine constraints from simulated cluster number counts expected from a 12 deg² SZ cluster survey with constraints from simulated angular diameter distance measurements based on the X-SZ method, assuming an expected accuracy of 7% in the angular diameter distance determination of 70 clusters with redshifts less than 1.5. We find that Ω_M can be determined to within about 25%, Ω_Λ to within 20%, and w to within 16%. Any cluster survey can be used to select clusters for high-accuracy distance measurements, but we assumed accurate angular diameter distance measurements for only 70 clusters, since long observations are necessary to achieve high accuracy in distance measurements. Thus the question naturally arises: how should clusters of galaxies be selected for accurate angular diameter distance determinations? In this letter, as an example, we demonstrate that it is possible to optimize this selection by changing the number of clusters observed and the upper cutoff of their redshift range.
We show that constraints on cosmological parameters from combining cluster number counts and angular diameter distance measurements, contrary to general expectations, will not improve substantially when clusters with redshifts higher than one are selected. This important conclusion allows us to restrict our cluster sample to clusters closer than redshift one, a range in which the observation times for accurate distance measurements are more manageable. Subject headings: cosmological parameters - cosmology: theory - galaxies: clusters: general - X-rays: galaxies: clusters
NASA Astrophysics Data System (ADS)
Kong, Jian; Yao, Yibin; Liu, Lei; Zhai, Changzhi; Wang, Zemin
2016-08-01
A new algorithm for ionosphere tomography using a mapping function is proposed in this paper. First, the new solution splits the integration process into four layers along the observation ray; then, the single-layer model (SLM) is applied to each integration part using a mapping function. Next, the model parameters are estimated layer by layer with the Kalman filtering method, introducing a scale factor (SF) γ to solve the ill-posed problem. Finally, the inverted images of the different layers are combined into the final CIT image. We utilized simulated data from 23 IGS GPS stations around Europe to verify the estimation accuracy of the new algorithm; the results show that the new CIT model has better accuracy than the SLM in dense data areas and that the CIT residuals are more closely grouped. The stability of the new algorithm is discussed by analyzing model accuracy under different error levels (maximum errors of 5 TECU, 10 TECU, and 15 TECU, respectively). In addition, the key preset parameter, the SF γ, is provided by the International Reference Ionosphere model (IRI2012), and an experiment is designed to test the sensitivity of the new algorithm to SF variations. The results show that IRI2012 is capable of providing initial SF values. Also in this paper, the seismo-ionospheric disturbance (SID) of the 2011 Japan earthquake is studied using the new CIT algorithm. Combined with the TEC time sequence of Sat.15, we find that the SID occurrence time and reaction area are highly related to the main shock time and epicenter. According to the CIT images, there is a clear vertical upward movement of electron density (from the 150-km layer to the 450-km layer) during this SID event; however, the peak value areas in the different layers were different, which means that the horizontal movement velocity is not consistent among the layers. The potential physical triggering mechanism is also discussed in this paper.
Compared with the SLM, the RMS of the new CIT model is improved by 16.78%, while the CIT model could provide the three-dimensional variation in the ionosphere.
Production tolerance of additive manufactured polymeric objects for clinical applications.
Braian, Michael; Jimbo, Ryo; Wennerberg, Ann
2016-07-01
To determine the production tolerance of four commercially available additive manufacturing systems. By reverse engineering annexes A and B of ISO 12836:2012, two geometrical figures relevant to dentistry were obtained. Object A specifies the measurement of an inlay-shaped object and object B a multi-unit specimen simulating a four-unit bridge model. The objects were divided into x, y and z measurements; object A was divided into a total of 16 parameters and object B was tested for 12 parameters. The objects were designed digitally and manufactured by professionals on four different additive manufacturing systems; each system produced 10 samples of each object. For object A, three manufacturers presented an accuracy of <100 μm and one system showed an accuracy of <20 μm. For object B, all systems presented an accuracy of <100 μm, and most parameters were <40 μm. The standard deviations for most parameters were <40 μm. The growing interest in and use of intra-oral digitizing systems stresses the use of computer-aided manufacturing of working models. Additive manufacturing techniques have the potential to help us in the digital workflow; thus, it is important to have knowledge about production accuracy and tolerances. This study presents a method to test additive manufacturing units for accuracy and repeatability. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Genetic parameters of Legendre polynomials for first-parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
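A covariance function of the kind fitted above can be evaluated from a coefficient covariance matrix together with a Legendre design matrix over days in milk (DIM). The sketch below is illustrative only: the DIM range and the normalization factor are common conventions in random regression test-day models, not necessarily those used in this study.

```python
import numpy as np

def legendre_design(dim, dim_min=5, dim_max=305, order=3):
    """Design matrix of normalized Legendre polynomials evaluated at days in
    milk standardized to [-1, 1] (order 3 gives 4 coefficients)."""
    t = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    phi = np.polynomial.legendre.legvander(t, order)
    # normalization often used with covariance functions: sqrt((2k + 1) / 2)
    return phi * np.sqrt((2 * np.arange(order + 1) + 1) / 2.0)

def variance_curve(dim, K, **kw):
    """(Co)variance over DIM implied by a coefficient covariance matrix K:
    Cov(t_i, t_j) = phi(t_i) @ K @ phi(t_j).T"""
    phi = legendre_design(dim, **kw)
    return phi @ K @ phi.T
```

With a positive semidefinite K (genetic or permanent environmental), the implied covariance matrix over any set of test days is symmetric with positive variances on the diagonal, mirroring how the third- and fourth-order fits in the abstract reproduce the variance structure over lactation.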
Improved nine-node shell element MITC9i with reduced distortion sensitivity
NASA Astrophysics Data System (ADS)
Wisniewski, K.; Turska, E.
2017-11-01
The 9-node quadrilateral shell element MITC9i is developed for the Reissner-Mindlin shell kinematics, the extended potential energy and Green strain. The following features of its formulation ensure an improved behavior: 1. The MITC technique is used to avoid locking, and we propose improved transformations for bending and transverse shear strains, which render that all patch tests are passed for the regular mesh, i.e. with straight element sides and middle positions of midside nodes and a central node. 2. To reduce shape distortion effects, the so-called corrected shape functions of Celia and Gray (Int J Numer Meth Eng 20:1447-1459, 1984) are extended to shells and used instead of the standard ones. In effect, all patch tests are passed additionally for shifts of the midside nodes along straight element sides and for arbitrary shifts of the central node. 3. Several extensions of the corrected shape functions are proposed to enable computations of non-flat shells. In particular, a criterion is put forward to determine the shift parameters associated with the central node for non-flat elements. Additionally, the method is presented to construct a parabolic side for a shifted midside node, which improves accuracy for symmetric curved edges. Drilling rotations are included by using the drilling Rotation Constraint equation, in a way consistent with the additive/multiplicative rotation update scheme for large rotations. We show that the corrected shape functions reduce the sensitivity of the solution to the regularization parameter γ of the penalty method for this constraint. The MITC9i shell element is subjected to a range of linear and non-linear tests to show passing the patch tests, the absence of locking, very good accuracy and insensitivity to node shifts. It favorably compares to several other tested 9-node elements.
Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase
NASA Technical Reports Server (NTRS)
Born, G. J.; Kai, T.
1975-01-01
A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. The control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between the accuracy and the response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is to be determined by the system requirements.
Power spectrum precision for redshift space distortions
NASA Astrophysics Data System (ADS)
Linder, Eric V.; Samsing, Johan
2013-02-01
Redshift space distortions in galaxy clustering offer a promising technique for probing the growth rate of structure and testing dark energy properties and gravity. We consider the issue of to what accuracy they need to be modeled in order not to unduly bias cosmological conclusions. Fitting for nonlinear and redshift space corrections to the linear theory real space density power spectrum in bins in wavemode, we analyze both the effect of marginalizing over these corrections and the bias due to not correcting for them fully. While naively subpercent accuracy is required to avoid bias in the unmarginalized case, in the fitting approach the Kwan-Lewis-Linder reconstruction function for redshift space distortions is found to be accurately self-calibrated, with little degradation in dark energy and gravity parameter estimation for a next-generation galaxy redshift survey such as BigBOSS.
Radiosonde pressure sensor performance - Evaluation using tracking radars
NASA Technical Reports Server (NTRS)
Parsons, C. L.; Norcross, G. A.; Brooks, R. L.
1984-01-01
The standard balloon-borne radiosonde employed for synoptic meteorology provides vertical profiles of temperature, pressure, and humidity as a function of elapsed time. These parameters are used in the hypsometric equation to calculate the geopotential altitude at each sampling point during the balloon's flight. It is important that the vertical location information be accurate. The present investigation was conducted with the objective to evaluate the altitude determination accuracy of the standard radiosonde throughout the entire balloon profile. The tests included two other commercially available pressure sensors to see if they could provide improved accuracy in the stratosphere. The pressure-measuring performance of standard baroswitches, premium baroswitches, and hypsometers in balloon-borne sondes was correlated with tracking radars. It was found that the standard and premium baroswitches perform well up to about 25 km altitude, while hypsometers provide more reliable data above 25 km.
Ultra-fast HPM detectors improve NAD(P)H FLIM
NASA Astrophysics Data System (ADS)
Becker, Wolfgang; Wetzker, Cornelia; Benda, Aleš
2018-02-01
Metabolic imaging by NAD(P)H FLIM requires the decay functions in the individual pixels to be resolved into the decay components of bound and unbound NAD(P)H. Metabolic information is contained in the lifetime and relative amplitudes of the components. The separation of the decay components and the accuracy of the amplitudes and lifetimes improves substantially by using ultra-fast HPM-100-06 and HPM-100-07 hybrid detectors. The IRF width in combination with the Becker & Hickl SPC-150N and SPC-150NX TCSPC modules is less than 20 ps. An IRF this fast does not interfere with the fluorescence decay. The usual deconvolution process in the data analysis then virtually becomes a simple curve fitting, and the parameters of the NAD(P)H decay components are obtained at unprecedented accuracy.
An evaluation of the accuracy of some radar wind profiling techniques
NASA Technical Reports Server (NTRS)
Koscielny, A. J.; Doviak, R. J.
1983-01-01
Major advances in Doppler radar measurement in optically clear air have made it feasible to monitor radial velocities in the troposphere and lower stratosphere. For most applications the three-dimensional wind vector is of interest rather than the radial velocity alone. Measurement of the wind vector with a single radar can be made by assuming a spatially linear, time-invariant wind field. The components and derivatives of the wind are estimated by the parameters of a linear regression of the radial velocities on functions of their spatial locations; the accuracy of the wind measurement thus depends on the locations of the radial velocities. The suitability of some of the common retrieval techniques for simultaneous measurement of both the vertical and horizontal wind components is evaluated. The techniques considered are fixed beam, azimuthal scanning (VAD), and elevation scanning (VED).
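The retrieval described above amounts to regressing radial velocities on functions of the beam direction. A minimal sketch for the uniform-wind special case (dropping the derivative terms of the full linear wind model) regresses vr on the beam direction cosines; the scan geometry and wind values below are illustrative.

```python
import numpy as np

def fit_wind(azimuth_deg, elevation_deg, vr):
    """Estimate (u, v, w) by least-squares regression of radial velocities on
    the beam direction cosines: vr = u sin(az)cos(el) + v cos(az)cos(el) + w sin(el)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    sol, *_ = np.linalg.lstsq(A, vr, rcond=None)
    return sol

# synthetic VAD scan: full azimuth circle at fixed elevation, known wind
az = np.arange(0.0, 360.0, 10.0)
el = np.full_like(az, 30.0)
u, v, w = 5.0, -3.0, 0.2
vr = (u * np.sin(np.radians(az)) * np.cos(np.radians(el))
      + v * np.cos(np.radians(az)) * np.cos(np.radians(el))
      + w * np.sin(np.radians(el)))
```

The regression design matrix makes the accuracy argument of the abstract concrete: the conditioning of A, and hence the variance of the estimated components, is set entirely by where the radial velocities are sampled.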
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), whose flexibility and versatility were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Consistency problems associated to the improvement of precession-nutation theories
NASA Astrophysics Data System (ADS)
Ferrandiz, J. M.; Escapa, A.; Baenas, T.; Getino, J.; Navarro, J. F.; Belda, S.
2014-12-01
The complexity of modelling the rotational motion of the Earth in space has meant that no single theory has been adopted to describe it in full. Hence, it is customary to use at least one theory for precession and another for nutation. The classic approach proceeds by deriving some of the fundamental parameters from the precession theory at hand, e.g. the dynamical ellipticity H, and then using those values in the nutation theory. The former IAU1976 precession and IAU1980 nutation theories followed that scheme. Along with the improvement in the accuracy of the determination of EOP (Earth orientation parameters), IAU1980 was superseded by IAU2000, based on the application of the MHB2000 (Mathews et al. 2002) transfer function to the previous rigid-Earth analytical theory REN2000 (Souchay et al. 1999). The latter was derived while the precession model IAU1976 was still in force; it therefore used the corresponding values for some of the fundamental parameters, such as the precession rate, associated with the dynamical ellipticity, and the obliquity of the ecliptic at the reference epoch. The new precession model P03 was adopted as IAU2006. That change introduced some inconsistency, since P03 used different values for some of the fundamental parameters that MHB2000 inherited from REN2000. Besides, the derivation of the basic Earth parameters of MHB2000 itself comprised a fitted variation of the dynamical ellipticity adopted in the background rigid theory. Due to the strict accuracy requirements of the present and coming times, the magnitude of the inconsistencies introduced by this two-fold approach is no longer negligible. Some corrections have been proposed by Capitaine et al. (2005) and Escapa et al. (2014) in order to reach a better level of consistency between precession and nutation theories and parameters.
In this presentation we revisit the problem, taking into account some of the advances in precession theory not yet accounted for, stemming from the non-rigid nature of the Earth. Special attention is paid to assessing the level of consistency between the current IAU precession and nutation models and its impact on the adopted reference values. We suggest potential corrections and possibilities to incorporate theoretical advances and improve accuracy while remaining compliant with IAU resolutions.
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three-parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
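The fitting step described above, adjusting an approximating formula to point-kernel results, can be sketched with a generic nonlinear least-squares fit. The paper's actual three-parameter LBRF formula is not given in the abstract, so the exponential-times-power form, the parameter values, and the noise level below are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical three-parameter form; the paper's actual LBRF formula is not
# given in the abstract, so this stands in only to illustrate the fitting step.
def lbrf_approx(x, a, b, c):
    # x: source-to-detector distance (m); a, b, c: fitted parameters
    return a * np.exp(-b * x) * x ** (-c)

# Synthetic "point-kernel" data generated from known parameters plus 2% noise
rng = np.random.default_rng(0)
x = np.linspace(1.0, 3000.0, 200)
data = lbrf_approx(x, 5.0, 1e-3, 0.8) * (1.0 + 0.02 * rng.standard_normal(x.size))

# Fit the approximating formula to the tabulated data
popt, pcov = curve_fit(lbrf_approx, x, data, p0=[1.0, 1e-4, 0.5])
print(popt)  # recovered (a, b, c) should lie near the generating values
```

Once fitted over a grid of energies and beam angles, such a formula can be evaluated far faster than a point-kernel integration at each query point, which is the point of the method.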
Model calibration criteria for estimating ecological flow characteristics
Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William J.; Seibert, Jan; Breuer, Lutz; Kraft, Philipp
2016-01-01
Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
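The gist of the WLS-ICE idea, fitting with simple weights but estimating parameter errors with the full data covariance matrix, can be illustrated for a linear model, where the "sandwich" covariance is available in closed form. The paper's formula applies to arbitrary fit functions; the linear special case, the exponential noise covariance, and all numbers below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# Sketch for a LINEAR model y = X @ beta: fit by weighted least squares with
# diagonal weights W, but estimate parameter errors with the FULL covariance
# matrix C of the data via the sandwich formula
#   Cov(beta) = A @ C @ A.T,  where  A = (X.T W X)^{-1} X.T W.
rng = np.random.default_rng(1)
n = 50
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t])       # design matrix: intercept + slope

# Correlated noise with an (assumed) exponential covariance, as arises in
# time-dependent ensemble averages
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
y = X @ np.array([1.0, 2.0]) + np.linalg.cholesky(C) @ rng.standard_normal(n)

W = np.diag(1.0 / np.diag(C))              # weights ignore the correlations
A = np.linalg.solve(X.T @ W @ X, X.T @ W)  # WLS estimator as a linear map
beta = A @ y                               # point estimates
cov_beta = A @ C @ A.T                     # errors WITH correlations included
naive = np.linalg.inv(X.T @ W @ X)         # errors ignoring correlations
print(beta, np.sqrt(np.diag(cov_beta)), np.sqrt(np.diag(naive)))
```

Comparing the two error estimates shows why neglecting correlations matters: the naive diagonal-weight errors are generally too optimistic when the noise is positively correlated in time.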
Radial Basis Function Neural Network Application to Power System Restoration Studies
Sadeghkhani, Iman; Ketabi, Abbas; Feuillet, Rene
2012-01-01
One of the most important issues in power system restoration is overvoltages caused by transformer switching. These overvoltages might damage equipment and delay power system restoration. This paper presents a radial basis function neural network (RBFNN) to study transformer switching overvoltages. To achieve good generalization capability for the developed RBFNN, equivalent parameters of the network are added to the RBFNN inputs. The developed RBFNN is trained with the worst-case scenario of switching angle and remanent flux and tested for typical cases. The simulation results for a portion of the 39-bus New England test system show that the proposed technique can estimate the peak values and duration of switching overvoltages with good accuracy. PMID:22792093
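A minimal sketch of the RBFNN idea, Gaussian hidden units plus a linear readout solved by least squares, is below. The paper's inputs (switching angle, remanent flux, equivalent network parameters) and its simulation-derived training data are not available here, so a generic 1-D regression problem stands in; the center count and width are arbitrary choices.

```python
import numpy as np

# Minimal RBF network: Gaussian hidden units with fixed centers drawn from the
# training data, and a linear output layer solved by least squares.
rng = np.random.default_rng(6)
x = rng.uniform(-3.0, 3.0, (200, 1))               # stand-in for RBFNN inputs
y = np.sin(x[:, 0]) + 0.05 * rng.standard_normal(200)  # stand-in target

centers = x[rng.choice(200, 25, replace=False)]    # hidden-unit centers
width = 0.8                                        # shared Gaussian width

def hidden(a):
    # Gaussian activations of all hidden units for a batch of inputs
    d2 = ((a[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

w, *_ = np.linalg.lstsq(hidden(x), y, rcond=None)  # output-layer weights
x_test = np.linspace(-3.0, 3.0, 100)[:, None]
pred = hidden(x_test) @ w
rmse = np.sqrt(np.mean((pred - np.sin(x_test[:, 0])) ** 2))
print(rmse)
```

Because only the output layer is trained, fitting reduces to one linear solve, which is a key reason RBF networks are attractive for surrogate models of expensive transient simulations.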
Design Parameters Affecting the Accuracy of Isothermal Thermocouples
1975-01-02
Design parameters examined include lead wire length, interference, askew installation, wire diameter, insulation thickness, and heatshield trajectory. The thermocouple materials studied were tungsten/rhenium (0 to 5000°F), chromel/alumel (0 to 2200°F), and iron/constantan (a lower temperature range).
ERIC Educational Resources Information Center
Wu, Yi-Fang
2015-01-01
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity.
However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
Krzysztof, Naus; Aleksander, Nowak
2016-01-01
The article presents a study of the accuracy of estimating the position coordinates of BAUV (Biomimetic Autonomous Underwater Vehicle) by the extended Kalman filter (EKF) method. The fusion of movement parameters measurements and position coordinates fixes was applied. The movement parameters measurements are carried out by on-board navigation devices, while the position coordinates fixes are done by the USBL (Ultra Short Base Line) system. The problem of underwater positioning and the conceptual design of the BAUV navigation system constructed at the Naval Academy (Polish Naval Academy—PNA) are presented in the first part of the paper. The second part consists of description of the evaluation results of positioning accuracy, the genesis of the problem of selecting method for underwater positioning, and the mathematical description of the method of estimating the position coordinates using the EKF method by the fusion of measurements with on-board navigation and measurements obtained with the USBL system. The main part contains a description of experimental research. It consists of a simulation program of navigational parameter measurements carried out during the BAUV passage along the test section. Next, the article covers the determination of position coordinates on the basis of simulated parameters, using EKF and DR methods and the USBL system, which are then subjected to a comparative analysis of accuracy. The final part contains systemic conclusions justifying the desirability of applying the proposed fusion method of navigation parameters for the BAUV positioning. PMID:27537884
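The fusion principle, dead-reckoning propagation from on-board velocity measurements corrected by sparse USBL position fixes, can be sketched with a scalar Kalman filter. The BAUV's actual EKF uses a nonlinear, multi-dimensional motion model; the 1-D constant-velocity toy below, with assumed noise levels and fix interval, is only illustrative.

```python
import numpy as np

# 1-D Kalman-filter toy: dead-reckoning (DR) prediction from noisy velocity
# logs, corrected by a noisy USBL-like position fix every 10 steps.
rng = np.random.default_rng(2)
dt, n = 1.0, 100
true_v = 0.5                                 # true speed (m/s), assumed
truth = true_v * dt * np.arange(1, n + 1)    # true position after each step

x, P = 0.0, 1.0                              # state estimate and its variance
q, r_fix = 0.02, 0.5                         # process and USBL fix variances
est = np.empty(n)
for k in range(n):
    v_meas = true_v + 0.1 * rng.standard_normal()  # on-board velocity sensor
    x = x + v_meas * dt                            # DR prediction
    P = P + q
    if k % 10 == 0:                                # sparse USBL position fix
        z = truth[k] + np.sqrt(r_fix) * rng.standard_normal()
        K = P / (P + r_fix)                        # Kalman gain
        x = x + K * (z - x)                        # measurement update
        P = (1.0 - K) * P
    est[k] = x

rmse = np.sqrt(np.mean((est - truth) ** 2))
print(rmse)
```

Without the fixes, the DR error grows without bound as velocity noise integrates into a random walk; the periodic updates keep the fused position error bounded, which is the behavior the comparative analysis in the paper quantifies for the full EKF.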
An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition
NASA Astrophysics Data System (ADS)
Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.
2018-04-01
Interferometric SAR is sensitive to Earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration obtains a high-precision global DEM by computing the interferometric parameters from ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and use four TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with accuracy better than 2.43 m in flat areas and 6.97 m in mountainous areas, which proves the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.
Racette, Lyne; Chiou, Christine Y.; Hao, Jiucang; Bowd, Christopher; Goldbaum, Michael H.; Zangwill, Linda M.; Lee, Te-Won; Weinreb, Robert N.; Sample, Pamela A.
2009-01-01
Purpose To investigate whether combining optic disc topography and short-wavelength automated perimetry (SWAP) data improves the diagnostic accuracy of relevance vector machine (RVM) classifiers for detecting glaucomatous eyes compared to using each test alone. Methods One eye of 144 glaucoma patients and 68 healthy controls from the Diagnostic Innovations in Glaucoma Study were included. RVM were trained and tested with cross-validation on optimized (backward elimination) SWAP features (thresholds plus age; pattern deviation (PD); total deviation (TD)) and on Heidelberg Retina Tomograph II (HRT) optic disc topography features, independently and in combination. RVM performance was also compared to two HRT linear discriminant functions (LDF) and to SWAP mean deviation (MD) and pattern standard deviation (PSD). Classifier performance was measured by the area under the receiver operating characteristic curves (AUROCs) generated for each feature set and by the sensitivities at set specificities of 75%, 90% and 96%. Results RVM trained on combined HRT and SWAP thresholds plus age had significantly higher AUROC (0.93) than RVM trained on HRT (0.88) and SWAP (0.76) alone. AUROCs for the SWAP global indices (MD: 0.68; PSD: 0.72) offered no advantage over SWAP thresholds plus age, while the LDF AUROCs were significantly lower than RVM trained on the combined SWAP and HRT feature set and on HRT alone feature set. Conclusions Training RVM on combined optimized HRT and SWAP data improved diagnostic accuracy compared to training on SWAP and HRT parameters alone. Future research may identify other combinations of tests and classifiers that can also improve diagnostic accuracy. PMID:19528827
Solving the Rational Polynomial Coefficients Based on L Curve
NASA Astrophysics Data System (ADS)
Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.
2018-05-01
The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in photogrammetry and remote sensing. The least squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equation becomes singular and the normal equation becomes ill-conditioned; the obtained solutions are then extremely unstable or even wrong. Tikhonov regularization can effectively solve such ill-conditioned equations. In this paper, we solve the ill-conditioned equations by regularization and determine the regularization parameter by the L-curve method. Experiments on aerial frame photos show that the first-order RPC with equal denominators has the highest accuracy. A high-order RPC model is not necessary when dealing with frame images, as the RPC model and the projective model are almost the same. The results show that the first-order RPC model is basically consistent with the rigorous photogrammetric sensor model. Orthorectification results of both the first-order RPC model and the Camera Model (ERDAS 9.2 platform) are similar to each other, and the maximum residuals in X and Y are 0.8174 feet and 0.9272 feet, respectively. This shows that the RPC model can be used as a replacement sensor model in aerial photogrammetry.
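The regularization-plus-L-curve procedure can be sketched on a generic ill-conditioned least-squares problem. A Vandermonde matrix stands in for the actual RPC design matrix, and the L-curve corner is located with a simple maximum-distance-to-chord heuristic, one of several common corner finders; none of the specifics below come from the paper.

```python
import numpy as np

# Tikhonov-regularized solutions over a grid of regularization parameters,
# with the parameter chosen at the corner of the L-curve
# (log residual norm vs. log solution norm).
rng = np.random.default_rng(3)
A = np.vander(np.linspace(0.0, 1.0, 40), 8)   # nearly collinear columns
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(40)

lams = np.logspace(-10, 1, 60)
I = np.eye(A.shape[1])
sols, pts = [], []
for lam in lams:
    # Tikhonov solution of  min ||Ax - b||^2 + lam^2 ||x||^2
    x = np.linalg.solve(A.T @ A + lam ** 2 * I, A.T @ b)
    sols.append(x)
    pts.append([np.log(np.linalg.norm(A @ x - b)), np.log(np.linalg.norm(x))])
pts = np.array(pts)

# L-curve corner: the point farthest from the chord joining the endpoints
v = pts[-1] - pts[0]
v = v / np.linalg.norm(v)
dist = np.abs((pts - pts[0]) @ np.array([-v[1], v[0]]))
best = int(np.argmax(dist))
print(lams[best])   # regularization parameter at the L-curve corner
```

The corner balances the two failure modes the abstract describes: too little regularization leaves the unstable least-squares solution, while too much biases the coefficients toward zero.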
Optimized SIFTFlow for registration of whole-mount histology to reference optical images
Shojaii, Rushin; Martel, Anne L.
2016-01-01
The registration of two-dimensional histology images to reference images from other modalities is an important preprocessing step in the reconstruction of three-dimensional histology volumes. This is a challenging problem because of the differences in the appearances of histology images and other modalities, and the presence of large nonrigid deformations which occur during slide preparation. This paper shows the feasibility of using densely sampled scale-invariant feature transform (SIFT) features and a SIFTFlow deformable registration algorithm for coregistering whole-mount histology images with blockface optical images. We present a method for jointly optimizing the regularization parameters used by the SIFTFlow objective function and use it to determine the most appropriate values for the registration of breast lumpectomy specimens. We demonstrate that tuning the regularization parameters results in significant improvements in accuracy and we also show that SIFTFlow outperforms a previously described edge-based registration method. The accuracy of the histology images to blockface images registration using the optimized SIFTFlow method was assessed using an independent test set of images from five different lumpectomy specimens and the mean registration error was 0.32±0.22 mm. PMID:27774494
NASA Astrophysics Data System (ADS)
Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai
2018-04-01
State-of-health (SOH) estimation is a crucial issue for lithium-ion batteries. In order to provide accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies, in which SOH is commonly estimated from cycle number, in this work four specific parameters extracted from charging curves are used as inputs to the GPR model instead of cycle numbers. These parameters reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made to the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. In addition, a battery with a dynamic discharging profile is used to verify the robustness and reliability of the method.
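The regression step can be sketched with a plain Gaussian-process regressor written in NumPy. The four input features, the synthetic SOH labels, and the squared-exponential kernel below are stand-ins; the paper's actual charging-curve features and its modified covariance design are not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, ell=0.5, sig=1.0):
    # Squared-exponential covariance between two sets of feature vectors
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sig ** 2 * np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, (120, 4))     # stand-in charging-curve features
soh = 1.0 - 0.3 * X[:, 0] + 0.1 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(120)

Xtr, ytr = X[:80], soh[:80]             # training cycles
Xte, yte = X[80:], soh[80:]             # held-out cycles

# GP posterior mean: K^{-1} applied to centered targets, then cross-covariance
K = rbf_kernel(Xtr, Xtr) + 1e-4 * np.eye(80)   # observation-noise term
alpha = np.linalg.solve(K, ytr - ytr.mean())
pred = ytr.mean() + rbf_kernel(Xte, Xtr) @ alpha
rmse = np.sqrt(np.mean((pred - yte) ** 2))
print(rmse)
```

A practical attraction of GPR here, beyond the point prediction, is that the same kernel machinery yields a predictive variance, so each SOH estimate comes with an uncertainty.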
A chaos wolf optimization algorithm with self-adaptive variable step-size
NASA Astrophysics Data System (ADS)
Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun
2017-10-01
To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey-distribution strategy of wolves. It possesses three intelligent behaviors: migration, summons, and siege. A "winner-take-all" competition rule and a "survival of the fittest" update mechanism are also characteristics of the algorithm. Moreover, it combines self-adaptive variable step-size search with chaos optimization strategies. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the results were compared with many existing algorithms, including the classical genetic algorithm, particle swarm optimization, and the leader wolf pack search algorithm. The results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
NASA Astrophysics Data System (ADS)
Perera, Dimuthu
Diffusion-weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations based on percentage accuracy and precision for prostate peripheral-zone cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DW images to adjust the image SNR. Using the two DW images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC samples were collected for each parameter setting to determine the mean and standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC; the precision was calculated from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (1.02 × 10-3 mm2/s for tumor and 1.80 × 10-3 mm2/s for normal prostate peripheral-zone tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision decreased with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation on total scan time.
The optimal NEX combination for tumor and normal tissue in the prostate peripheral zone was 1:9. The minimum percentage accuracy and percentage precision were obtained with a low b-value of 0 and a high b-value of 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. Results also showed that for tissues with 1 × 10-3 < ADC < 2.1 × 10-3 mm2/s, the parameter combination SNR = 20 and b-value pair 0 and 800 s/mm2 with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Likewise, for tissues with 0.6 × 10-3 < ADC < 1.25 × 10-3 mm2/s, the combination SNR = 20 and b-value pair 0 and 1400 s/mm2 with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
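The core of the simulation, a mono-exponential signal at two b-values with Rician noise and a two-point ADC estimate, can be sketched as below. The tumor-case numbers follow the study (true ADC 1.02 × 10-3 mm2/s, b = 0 and 1400 s/mm2, SNR 20), but the NEX averaging scheme is simplified away, so the exact percentages differ from the study's reported values.

```python
import numpy as np

# Mono-exponential DW signal S(b) = S0*exp(-b*ADC) at two b-values with
# Rician noise; ADC recovered as  ADC = ln(S_low/S_high) / (b_high - b_low).
rng = np.random.default_rng(5)
true_adc = 1.02e-3                  # mm^2/s, prostate tumor case
b_low, b_high = 0.0, 1400.0         # s/mm^2
s0, snr, n = 1.0, 20.0, 40000       # 40,000 repeats per parameter setting
sigma = s0 / snr                    # per-channel Gaussian noise level

def rician(mean_signal):
    # Magnitude of a complex Gaussian centered on the true signal
    re = mean_signal + sigma * rng.standard_normal(n)
    im = sigma * rng.standard_normal(n)
    return np.hypot(re, im)

s_low = rician(s0 * np.exp(-b_low * true_adc))
s_high = rician(s0 * np.exp(-b_high * true_adc))
adc = np.log(s_low / s_high) / (b_high - b_low)

accuracy = 100.0 * (adc.mean() - true_adc) / true_adc   # % bias
precision = 100.0 * adc.std() / true_adc                # % spread
print(accuracy, precision)
```

Sweeping `snr`, `b_high`, and `true_adc` in this loop reproduces the kind of accuracy/precision surfaces from which the study's optimal parameter combinations were read off.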
Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry
2013-01-01
The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions are generated from the common equation. This simulated data explicitly contains the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
Results from the Wilkinson Microwave Anisotropy Probe
NASA Technical Reports Server (NTRS)
Bennett, Charles L.; Komatsu, Eiichiro
2015-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) mapped the distribution of temperature and polarization over the entire sky in five microwave frequency bands. These full-sky maps were used to obtain measurements of the temperature and polarization anisotropy of the cosmic microwave background with unprecedented accuracy and precision. The analysis of two-point correlation functions of the temperature and polarization data gives determinations of the fundamental cosmological parameters, such as the age and composition of the universe, as well as the key parameters describing the physics of inflation, which is further constrained by three-point correlation functions. WMAP observations alone reduced the six-parameter volume of the flat ΛCDM (Lambda cold dark matter) cosmological model by a factor of >68,000 compared with pre-WMAP measurements. The WMAP observations (sometimes in combination with other astrophysical probes) convincingly show the existence of non-baryonic dark matter, the cosmic neutrino background, the flatness of the spatial geometry of the universe, a deviation from a scale-invariant spectrum of initial scalar fluctuations, and that the current universe is undergoing an accelerated expansion. The WMAP observations provide the strongest support yet for inflation; namely, the structures we see in the universe originate from quantum fluctuations generated during inflation.
State space modeling of time-varying contemporaneous and lagged relations in connectivity maps.
Molenaar, Peter C M; Beltz, Adriene M; Gates, Kathleen M; Wilson, Stephen J
2016-01-15
Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. Published by Elsevier Inc.
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea wave was tested as a parameter for estimating wave direction, in the search for better estimates in shallow waters, where waves of different sizes, frequencies, and directions intersect and the wave field is difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave direction.
•In this study, the accuracy and precision of curvature parameters for measuring wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
•The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope, and curvature, which were used to analyze the variability of the estimated directions.
•The simultaneous acquisition of slope and curvature parameters can contribute to estimating wave direction, thus increasing the accuracy and precision of the results.
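The core idea of recovering direction from surface derivatives can be sketched with the simpler slope field (the paper's estimator additionally uses curvature; the monochromatic wave, wavelength, and direction below are made-up illustration values):

```python
import numpy as np

# Simulate a monochromatic sea surface travelling at 30 degrees and recover
# its direction from the surface slope field.
theta = np.deg2rad(30.0)
k = 2 * np.pi / 50.0                        # 50 m wavelength
kx, ky = k * np.cos(theta), k * np.sin(theta)

x = np.arange(0, 200, 1.0)                  # 200 m x 200 m grid, 1 m spacing
X, Y = np.meshgrid(x, x)
eta = 1.5 * np.cos(kx * X + ky * Y)         # surface elevation

# Slopes d(eta)/dy (axis 0) and d(eta)/dx (axis 1) by finite differences.
dy, dx = np.gradient(eta, 1.0)

# Principal axis of the slope vector field: direction of maximum slope
# variance, which is the wave propagation axis (modulo 180 degrees).
sxx, syy, sxy = np.mean(dx * dx), np.mean(dy * dy), np.mean(dx * dy)
est = 0.5 * np.arctan2(2 * sxy, sxx - syy)
est_deg = np.rad2deg(est) % 180.0
```

For a single monochromatic wave this recovers the input direction almost exactly; the paper's contribution is the behavior of such estimators in realistic multi-directional seas.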
Adjoint-based Sensitivity of Jet Noise to Near-nozzle Forcing
NASA Astrophysics Data System (ADS)
Chung, Seung Whan; Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan
2017-11-01
Past efforts have used optimal control theory, based on the numerical solution of the adjoint flow equations, to perturb turbulent jets in order to reduce their radiated sound. These efforts have been successful in that sound is reduced, with concomitant changes to the large-scale turbulence structures in the flow. However, they have also been inconclusive, in that the ultimate level of reduction seemed to depend upon the accuracy of the adjoint-based gradient rather than a physical limitation of the flow. The chaotic dynamics of the turbulence can degrade the smoothness of the cost functional in the control-parameter space, which is necessary for gradient-based optimization. We introduce a route to overcoming this challenge, in part by leveraging the regularity and accuracy of a dual-consistent, discrete-exact adjoint formulation. We confirm its properties and use it to study the sensitivity and controllability of the acoustic radiation from a simulation of an M = 1.3 turbulent jet whose statistics match data. The smoothness of the cost functional over time is quantified by a minimum optimization step size beyond which the gradient cannot retain a certain degree of accuracy. Based on this, we achieve a moderate level of sound reduction in the first few optimization steps. This material is based [in part] upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
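The defining property of a discrete-exact adjoint is that it differentiates the discretized dynamics, so its gradient matches a finite-difference check to near machine precision. A toy demonstration with the flow equations replaced by a scalar decay ODE (all constants are illustrative assumptions):

```python
# Toy control problem: minimize J(u) = 0.5 * (x_N - target)^2 subject to
# explicit-Euler dynamics x_{k+1} = x_k + dt * (-a * x_k + u), control u.
a, dt, N, target = 0.8, 0.01, 500, 2.0

def forward(u):
    """March the discretized state equation."""
    x = [1.0]
    for _ in range(N):
        x.append(x[-1] + dt * (-a * x[-1] + u))
    return x

def cost(u):
    return 0.5 * (forward(u)[-1] - target) ** 2

def adjoint_gradient(u):
    """Adjoint of the *discrete* map: lam_N = dJ/dx_N, then march backward."""
    lam = forward(u)[-1] - target
    grad = 0.0
    for _ in range(N):
        grad += lam * dt            # d x_{k+1} / d u     = dt
        lam *= (1.0 - a * dt)       # d x_{k+1} / d x_k   = 1 - a*dt
    return grad

u0 = 0.5
g_adj = adjoint_gradient(u0)
eps = 1e-6
g_fd = (cost(u0 + eps) - cost(u0 - eps)) / (2 * eps)  # finite-difference check
```

For the chaotic jet simulation the same agreement holds only below the step-size threshold the abstract describes; the smooth toy problem has no such limit.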
Nébouy, David; Hébert, Mathieu; Fournel, Thierry; Larina, Nina; Lesur, Jean-Luc
2015-09-01
Recent color printing technologies based on the principle of revealing colors on pre-functionalized achromatic supports by laser irradiation offer advanced functionalities, especially for security applications. However, for such technologies, color prediction is challenging compared to classic ink-transfer printing systems. The spectral properties of the coloring materials modified by the lasers are not precisely known and may vary strongly, depending on the laser settings, in a nonlinear manner. We show in this study, through the example of the color laser marking (CLM) technology, based on laser bleaching of a mixture of pigments, that combining an adapted optical reflectance model with learning methods for obtaining the model's parameters enables prediction of the spectral reflectance of any printable color with rather good accuracy. Even though the pigment mixture is formulated from three colored pigments, an analysis of the dimensionality of the spectral space generated by CLM printing, by means of a principal component analysis decomposition, shows that at least four spectral primaries are needed for accurate spectral reflectance predictions. A polynomial interpolation is then used to relate RGB laser intensities with virtual coordinates of new basis vectors. By studying the influence of the number of calibration patches on the prediction accuracy, we conclude that a reasonable number of 130 patches is enough to achieve good accuracy in this application.
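The dimensionality analysis amounts to PCA on measured patch spectra: count how many components are needed to explain essentially all the variance. A sketch on synthetic data (the Gaussian "primary" spectra and mixture weights are invented stand-ins for the CLM measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic reflectance spectra as positive mixtures of four "primary"
# spectra, sampled at 31 wavelengths over the visible range.
wavelengths = np.linspace(400, 700, 31)
primaries = np.stack([
    np.exp(-((wavelengths - c) / 40.0) ** 2)   # smooth spectral bands
    for c in (450, 530, 610, 680)
])
weights = rng.random((130, 4))                  # 130 "calibration patches"
spectra = weights @ primaries

# PCA via SVD on mean-centered spectra: the number of components needed to
# reach ~100% explained variance is the effective number of primaries.
centered = spectra - spectra.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
n_primaries = int(np.searchsorted(explained, 0.999) + 1)
```

On this exactly four-dimensional synthetic data the trailing singular values collapse to numerical noise after the fourth; real measurements, as the abstract notes, still require at least four.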
Dogu, Beril; Yucel, Serap Dalgic; Sag, Sinem Yamac; Bankaoglu, Mujdat; Kuran, Banu
2012-08-01
The aim of this study was to compare the accuracy of blind vs. ultrasonography-guided corticosteroid injections in subacromial impingement syndrome and determine the correlation between accuracy of the injection location and clinical outcome. Forty-six patients with subacromial impingement syndrome were randomized for ultrasonography-guided (group 1, n = 23) and blind corticosteroid injections (group 2, n = 23). Magnetic resonance imaging analysis was performed immediately after the injection. Changes in shoulder range of motion, pain, and shoulder function were recorded. All patients were assessed before the injection and 6 wks after the injection. Accurate injections were performed in 15 (65%) group 1 patients and in 16 (70%) group 2 patients. There was no statistically significant difference in the injection location accuracy between the two groups (P > 0.05). At the end of the sixth week, regardless of whether the injected mixture was found in the subacromial region or not, all of the patients showed improvements in all of the parameters evaluated (P < 0.05). Blind injections performed in the subacromial region by experienced individuals were reliably accurate and could therefore be given in daily routines. Corticosteroid injections in the subacromial region were very effective in improving the pain and functional status of patients with subacromial impingement syndrome during the short-term follow-up.
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct the vascular input function (VIF) for inflow effects, and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
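Why a corrupted VIF biases the fit can be seen in a Tofts-style model, where the tissue curve is the VIF convolved with an exponential: scaling the VIF down scales the fitted Ktrans up by the inverse factor. A sketch (the gamma-variate VIF, the 0.6 inflow factor, and the rate constants are made-up illustration values, not the paper's correction method, which works from the pulse sequence signal model):

```python
import numpy as np

# Tofts-style model: C_t(t) = Ktrans * (VIF ⊛ exp(-kep * t)), dt = 1 s.
t = np.arange(0, 120, 1.0)
vif_true = 5.0 * (t / 10.0) * np.exp(-t / 10.0)  # gamma-variate-like VIF
vif_bad = 0.6 * vif_true                          # inflow artificially lowers the VIF

ktrans_true, kep = 0.05, 0.1
kernel = np.exp(-kep * t)
ct = ktrans_true * np.convolve(vif_true, kernel)[: t.size]  # "measured" tissue curve

def fit_ktrans(vif):
    """Least-squares Ktrans for a fixed kep (the model is linear in Ktrans)."""
    basis = np.convolve(vif, kernel)[: t.size]
    return float(basis @ ct / (basis @ basis))

k_corr = fit_ktrans(vif_true)   # corrected VIF recovers the true Ktrans
k_bad = fit_ktrans(vif_bad)     # uncorrected VIF inflates Ktrans by 1/0.6
```

This is the mechanism behind the simulation results: correcting the VIF removes a systematic, multiplicative bias from the fitted parameters.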
Research on intrusion detection based on Kohonen network and support vector machine
NASA Astrophysics Data System (ADS)
Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi
2018-05-01
Support vector machines (SVMs) applied directly to network intrusion detection systems suffer from low detection accuracy and long detection times. Optimization of SVM parameters can greatly improve the detection accuracy, but the long detection time prevents application to high-speed networks. A method based on Kohonen neural network feature selection is proposed to reduce the parameter optimization time of the support vector machine. First, the weights of the KDD99 network intrusion data are calculated by a Kohonen network and features are selected by weight. Then, after feature selection is complete, a genetic algorithm (GA) and grid search are used for parameter optimization to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection can reduce the time of parameter optimization, with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in network intrusion detection systems and can reduce the miss rate.
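The pipeline shape, score features, keep the top ones, then grid-search a classifier hyperparameter on the reduced data, can be sketched dependency-free; a Fisher-style score replaces the Kohonen weight ranking and a ridge classifier replaces the SVM, so everything here is a labeled stand-in rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 20 features, of which only the first 4 carry class signal.
n, d, d_informative = 400, 20, 4
X = rng.normal(size=(n, d))
y = (X[:, :d_informative].sum(axis=1) > 0).astype(float)
X[:, :d_informative] += 0.5 * y[:, None]   # informative features shift with class

# 1) Rank features by a Fisher-like score |mean1 - mean0| / std, keep top-k.
m1, m0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
score = np.abs(m1 - m0) / X.std(axis=0)
keep = np.argsort(score)[::-1][:d_informative]

# 2) Grid-search the regularization parameter on a train/validation split.
tr, va = np.arange(0, 300), np.arange(300, n)

def val_acc(lam, cols):
    A = X[np.ix_(tr, cols)]
    w = np.linalg.solve(A.T @ A + lam * np.eye(len(cols)), A.T @ (2 * y[tr] - 1))
    pred = (X[np.ix_(va, cols)] @ w > 0).astype(float)
    return float((pred == y[va]).mean())

best_lam = max([0.01, 0.1, 1.0, 10.0], key=lambda lam: val_acc(lam, keep))
acc_selected = val_acc(best_lam, keep)
```

The point mirrored from the paper is that the grid search now runs over 4 columns instead of 20, shrinking optimization time while the informative signal survives.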
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
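For a single nodule under one protocol, "accuracy (bias)" and "precision (variance)" reduce to two summary statistics over repeated volume estimates; a trivial sketch with made-up numbers (the study's actual comparison across protocols used a generalized estimating equation analysis):

```python
import numpy as np

# Repeated volume estimates of one synthetic nodule of known volume.
true_volume = 500.0                                           # mm^3
estimates_mm3 = np.array([512.0, 498.0, 505.0, 520.0, 495.0]) # repeated scans

bias = estimates_mm3.mean() - true_volume     # accuracy: systematic offset
percent_bias = 100.0 * bias / true_volume
variance = estimates_mm3.var(ddof=1)          # precision: scan-to-scan spread
```

Slice thickness, in the study's results, moved both of these numbers; kVp, pitch, and reconstruction kernel moved them far less.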
Lungu, Angela; Swift, Andrew J; Capener, David; Kiely, David; Hose, Rod; Wild, Jim M
2016-06-01
Accurately identifying patients with pulmonary hypertension (PH) using noninvasive methods is challenging, and right heart catheterization (RHC) is the gold standard. Magnetic resonance imaging (MRI) has been proposed as an alternative to echocardiography and RHC in the assessment of cardiac function and pulmonary hemodynamics in patients with suspected PH. The aim of this study was to assess whether machine learning using computational modeling techniques and image-based metrics of PH can improve the diagnostic accuracy of MRI in PH. Seventy-two patients with suspected PH attending a referral center underwent RHC and MRI within 48 hours. Fifty-seven patients were diagnosed with PH, and 15 had no PH. A number of functional and structural cardiac and cardiovascular markers derived from 2 mathematical models and also solely from MRI of the main pulmonary artery and heart were integrated into a classification algorithm to investigate the diagnostic utility of the combination of the individual markers. A physiological marker based on the quantification of wave reflection in the pulmonary artery was shown to perform best individually, but optimal diagnostic performance was found by the combination of several image-based markers. Classifier results, validated using leave-one-out cross validation, demonstrated that combining computation-derived metrics reflecting hemodynamic changes in the pulmonary vasculature with measurement of right ventricular morphology and function, in a decision support algorithm, provides a method to noninvasively diagnose PH with high accuracy (92%). The high diagnostic accuracy of these MRI-based model parameters may reduce the need for RHC in patients with suspected PH.
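The leave-one-out validation loop used to report the 92% figure can be sketched as follows; the three "markers", their class separations, and the nearest-centroid rule are synthetic stand-ins for the study's image-based metrics and decision-support classifier:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cohort mirroring the study's sizes: 57 PH, 15 no-PH patients,
# each with three continuous markers (e.g. wave reflection, RV metrics).
n_ph, n_no = 57, 15
X = np.vstack([
    rng.normal(loc=1.0, size=(n_ph, 3)),
    rng.normal(loc=-1.0, size=(n_no, 3)),
])
y = np.array([1] * n_ph + [0] * n_no)

# Leave-one-out cross validation of a nearest-centroid classifier:
# refit on all patients but one, predict the held-out patient.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c1 = X[mask & (y == 1)].mean(axis=0)
    c0 = X[mask & (y == 0)].mean(axis=0)
    pred = 1 if np.sum((X[i] - c1) ** 2) < np.sum((X[i] - c0) ** 2) else 0
    correct += (pred == y[i])

loocv_accuracy = correct / len(y)
```

Because each fold's classifier never sees the held-out patient, the reported accuracy is an honest estimate of out-of-sample performance on a cohort this size.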
NASA Astrophysics Data System (ADS)
Li, Xuejian; Mao, Fangjie; Du, Huaqiang; Zhou, Guomo; Xu, Xiaojun; Han, Ning; Sun, Shaobo; Gao, Guolong; Chen, Liang
2017-04-01
Subtropical forest ecosystems play essential roles in the global carbon cycle and in carbon sequestration, challenging the traditional understanding that the main functional areas of carbon sequestration lie in the temperate forests of Europe and America. The leaf area index (LAI) is an important biological parameter in the spatiotemporal simulation of the carbon cycle, and it has considerable significance in carbon cycle research. Dynamic retrieval based on remote sensing data is an important method with which to obtain large-scale, high-accuracy assessments of LAI. This study developed an algorithm for assimilating LAI dynamics based on an integrated ensemble Kalman filter using MODIS LAI data, MODIS reflectance data, and canopy reflectance data modeled by PROSAIL, for three typical types of subtropical forest (Moso bamboo forest, Lei bamboo forest, and evergreen and deciduous broadleaf forest) in China during 2014-2015. Some assimilation errors occurred in winter because of the poor quality of the MODIS product. Overall, the assimilated LAI matched the observed LAI well, with R2 of 0.82, 0.93, and 0.87, RMSE of 0.73, 0.49, and 0.42, and aBIAS of 0.50, 0.23, and 0.03 for Moso bamboo forest, Lei bamboo forest, and evergreen and deciduous broadleaf forest, respectively. The algorithm greatly decreased the uncertainty of the MODIS LAI in the growing season and improved its accuracy. The advantage of the algorithm is its use of biophysical parameters (e.g., measured LAI) in the LAI assimilation, which makes it possible to assimilate long-term MODIS LAI time series data and to provide high-accuracy LAI data for the study of carbon cycle characteristics in subtropical forest ecosystems.
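A single ensemble Kalman filter analysis step, the building block of such an assimilation scheme, can be sketched in a few lines; the scalar LAI state, direct (identity) observation operator, and all numbers below are illustrative assumptions, far simpler than the paper's PROSAIL-coupled setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# Forecast ensemble of LAI values from a (here, trivial) model, plus one
# MODIS-like LAI observation with known error variance.
ensemble = rng.normal(loc=3.0, scale=0.6, size=50)
obs, obs_var = 4.0, 0.25

# EnKF analysis step: gain from sample forecast variance, then a
# perturbed-observation update, which keeps the analysis spread consistent.
forecast_var = ensemble.var(ddof=1)
K = forecast_var / (forecast_var + obs_var)
perturbed = obs + rng.normal(scale=np.sqrt(obs_var), size=ensemble.size)
analysis = ensemble + K * (perturbed - ensemble)

# The analysis mean moves toward the observation and its spread shrinks.
moved = abs(analysis.mean() - obs) < abs(ensemble.mean() - obs)
shrunk = analysis.var(ddof=1) < forecast_var
```

Iterating this step through a MODIS time series, with the forecast supplied by the canopy reflectance model, is what lets the assimilated LAI track the observations while discounting noisy retrievals.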