Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary; when they lie in the interior, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
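To make the shrinkage-to-the-boundary idea concrete, here is a minimal, hedged sketch (not the authors' procedure): a nonnegative mean is estimated by penalized least squares with an adaptive weight, so that the estimate is pulled exactly onto the boundary value 0 when the data are consistent with it. The toy model, tuning constant lam, and snapping tolerance are illustrative assumptions.

```python
# Hedged sketch: adaptive-lasso-style shrinkage of a nonnegative mean toward the
# boundary value 0 (a toy analogue of the idea, not the authors' procedure).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def adaptive_boundary_estimate(x, lam=0.5, gamma=1.0):
    """Penalized estimate of a mean constrained to [0, inf)."""
    n = len(x)
    mu_init = max(x.mean(), 1e-8)            # unpenalized pilot estimate
    w = 1.0 / mu_init**gamma                 # adaptive weight: huge when pilot ~ 0
    obj = lambda mu: 0.5 * np.sum((x - mu) ** 2) + lam * np.sqrt(n) * w * mu
    mu_hat = minimize_scalar(obj, bounds=(0.0, 10.0), method="bounded").x
    return 0.0 if mu_hat < 1e-4 else mu_hat  # snap tiny estimates onto the boundary

# True parameter on the boundary (mu = 0) versus in the interior (mu = 0.5)
for mu_true in (0.0, 0.5):
    x = rng.normal(mu_true, 1.0, size=200)
    print("true:", mu_true, "-> penalized estimate:", adaptive_boundary_estimate(x))
```

When the true mean sits on the boundary the estimate collapses to 0, mimicking inference made with a priori knowledge of the boundary; in the interior case it stays strictly positive.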
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean squared deviation of the results for (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement from the true value was 7.7 and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
State and parameter estimation of the heat shock response system using Kalman and particle filters.
Liu, Xin; Niranjan, Mahesan
2012-06-01
Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations, which, when parameters in them are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) the extended Kalman filter (EKF), (ii) the unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of the cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. Software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock
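The sequential filtering idea can be illustrated with a bootstrap particle filter on a toy scalar system with one unknown static parameter, estimated by state augmentation. This is only a sketch under assumed model and noise settings, not the heat-shock implementation the authors distribute at the URL above.

```python
# Hedged sketch: bootstrap particle filter jointly estimating a hidden state x_t
# and a static parameter a in the toy model x_t = a*x_{t-1} + process noise.
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 2000                    # time steps, particles (assumed settings)
a_true, q, r = 0.8, 0.1, 0.5        # true parameter, process and observation noise

# Simulate a "true" trajectory and noisy observations of it
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

# Particles: column 0 = state, column 1 = parameter; a small artificial jitter on
# the parameter keeps its particles from collapsing (a common practical trick).
particles = np.column_stack([rng.normal(0, 1, N), rng.uniform(0.0, 1.5, N)])
weights = np.full(N, 1.0 / N)

for t in range(1, T):
    particles[:, 1] += 0.005 * rng.normal(size=N)                  # parameter jitter
    particles[:, 0] = particles[:, 1] * particles[:, 0] + q * rng.normal(size=N)
    weights *= np.exp(-0.5 * ((y[t] - particles[:, 0]) / r) ** 2)  # likelihood update
    weights = weights / (weights.sum() + 1e-300)
    if 1.0 / np.sum(weights ** 2) < N / 2:                         # resample if ESS low
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

print("posterior mean of a:", np.average(particles[:, 1], weights=weights),
      "| true value:", a_true)
```

Because the parameter particles start spread over a wide prior range, this mimics the abstract's point that particle filtering can converge from initial distributions far from the underlying truth.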
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
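A hedged sketch of the underlying idea of selecting complementary experiments: candidate measurements are ranked by how much their sensitivity rows increase the determinant of the Fisher information (a greedy D-optimal criterion). The one-compartment decay model, nominal parameters, and candidate grid are assumptions for illustration, not the EGF-NGF pathway model used in the paper.

```python
# Hedged sketch: greedy D-optimal selection of experiments for y = A*exp(-k*t),
# picking time points whose sensitivity rows most increase det(J^T J).
import numpy as np

A, k = 2.0, 0.5                           # nominal parameter values (assumed)
candidates = np.linspace(0.1, 10.0, 50)   # candidate measurement times

def sensitivity_row(t):
    # d y / d(A, k) evaluated at the nominal parameters
    return np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])

chosen, J = [], np.zeros((0, 2))
for _ in range(4):                        # select four "experiments"
    best_t, best_det = None, -np.inf
    for t in candidates:
        J_try = np.vstack([J, sensitivity_row(t)])
        # Small ridge term keeps the determinant defined before two rows are chosen
        d = np.linalg.det(J_try.T @ J_try + 1e-9 * np.eye(2))
        if d > best_det:
            best_t, best_det = t, d
    chosen.append(best_t)
    J = np.vstack([J, sensitivity_row(best_t)])

print("selected time points:", chosen)
print("approximate parameter covariance ~ inverse FIM:\n", np.linalg.inv(J.T @ J))
```

The selected points tend to probe complementary parts of the curve (early and late times), which is the intuition behind experiments that jointly constrain all parameters.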
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
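The parameter sweep behind a PVI-style screen can be sketched as follows: each uncertain parameter is scanned over its credible range while the others are held at nominal values, and the value that maximizes modeled risk is reported. The risk function and ranges below are placeholders, not the APA risk model.

```python
# Hedged sketch: scan each uncertain parameter over its credible range and
# record where the modeled risk is maximized (PVI-style screening).
import numpy as np

def risk_model(p_fail, severity, exposure):
    """Toy risk model: expected loss given anomalous behavior (placeholder)."""
    return p_fail * severity * exposure

nominal = {"p_fail": 1e-3, "severity": 10.0, "exposure": 100.0}
credible_ranges = {"p_fail": (1e-4, 5e-2),
                   "severity": (1.0, 50.0),
                   "exposure": (10.0, 500.0)}

for name, (lo, hi) in credible_ranges.items():
    grid = np.linspace(lo, hi, 200)
    risks = [risk_model(**{**nominal, name: v}) for v in grid]
    i = int(np.argmax(risks))
    print(f"{name}: max risk {risks[i]:.3g} at value {grid[i]:.3g}")
```

Parameters whose credible range contains values producing disproportionately high risk are the ones flagged for further investigation or uncertainty reduction.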
Monaural room acoustic parameters from music and speech.
Kendrick, Paul; Cox, Trevor J; Li, Francis F; Zhang, Yonggang; Chambers, Jonathon A
2008-07-01
This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. An approach which uses statistical machine learning, previously developed for speech, is extended to work with music. For speech, reverberation time estimations are within a perceptual difference limen of the true value. For music, virtually all early decay time estimations are within a difference limen of the true value. The estimation accuracy is not good enough in other cases due to differences between the simulated data set used to develop the empirical model and real rooms. The second method carries out a maximum likelihood estimation on decay phases at the end of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energies in the impulse response. For reverberation time and speech, the method provides estimations which are within the perceptual difference limen of the true value. For other parameters such as clarity, the estimations are not sufficiently accurate due to the natural reverberance of the excitation signals. Speech is a better test signal than music because of the greater periods of silence in the signal, although music is needed for low frequency measurement.
NASA Astrophysics Data System (ADS)
Perera, Dimuthu
Diffusion weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations, in terms of percentage accuracy and precision, for prostate peripheral region cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC samples were collected for each parameter setting to determine the mean and the standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC, and the precision from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision decreased as image SNR increased. To increase SNR, 10 signal averages (NEX) were used, considering the limitation in total scan time; the optimal NEX combination for tumor and normal tissue of the prostate peripheral region was 1:9. The minimum percentage accuracy and percentage precision were obtained when the low b-value was 0 and the high b-value was 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. Results also showed that for tissues with 1 x 10-3 < ADC < 2.1 x 10-3 mm2/s, the parameter combination SNR = 20 and b-value pair 0, 800 s/mm2 with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Similarly, for tissues with 0.6 x 10-3 < ADC < 1.25 x 10-3 mm2/s, the parameter combination SNR = 20 and b-value pair 0, 1400 s/mm2 with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
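The Monte Carlo loop described above can be sketched directly: two mono-exponential DW signals are corrupted with Rician noise at a chosen SNR, the ADC is recomputed from the noisy pair, and percentage accuracy and precision are taken over many trials. The noise convention (SNR defined at b = 0) and the specific values are assumptions for illustration, not the study's exact settings.

```python
# Hedged sketch: Monte Carlo estimate of ADC accuracy/precision from a two-point
# mono-exponential DWI model with Rician noise (illustrative, not the study code).
import numpy as np

rng = np.random.default_rng(2)

def simulate_adc(true_adc, b_low, b_high, snr, n_trials=40_000, s0=1.0):
    sigma = s0 / snr                              # noise level set by SNR at b = 0
    s_low = s0 * np.exp(-b_low * true_adc)
    s_high = s0 * np.exp(-b_high * true_adc)
    # Rician noise: magnitude of a complex Gaussian perturbation of each signal
    noisy_low = np.abs(s_low + sigma * (rng.normal(size=n_trials)
                                        + 1j * rng.normal(size=n_trials)))
    noisy_high = np.abs(s_high + sigma * (rng.normal(size=n_trials)
                                          + 1j * rng.normal(size=n_trials)))
    adc = np.log(noisy_low / noisy_high) / (b_high - b_low)
    accuracy = 100.0 * abs(adc.mean() - true_adc) / true_adc   # % accuracy (bias)
    precision = 100.0 * adc.std() / true_adc                   # % precision (spread)
    return accuracy, precision

# Tumor-like and normal-tissue-like ADCs from the abstract, b-pair 0 / 800 s/mm^2
for true_adc in (0.00102, 0.00180):
    print(true_adc, simulate_adc(true_adc, 0.0, 800.0, snr=20))
```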
Song, Yong Sub; Choi, Seung Hong; Park, Chul-Kee; Yi, Kyung Sik; Lee, Woong Jae; Yun, Tae Jin; Kim, Tae Min; Lee, Se-Hoon; Kim, Ji-Hoon; Sohn, Chul-Ho; Park, Sung-Hye; Kim, Il Han; Jahng, Geon-Ho; Chang, Kee-Hyun
2013-01-01
The purpose of this study was to differentiate true progression from pseudoprogression of glioblastomas treated with concurrent chemoradiotherapy (CCRT) with temozolomide (TMZ) by using histogram analysis of apparent diffusion coefficient (ADC) and normalized cerebral blood volume (nCBV) maps. Twenty patients with histopathologically proven glioblastoma who had received CCRT with TMZ underwent perfusion-weighted imaging and diffusion-weighted imaging (b = 0, 1000 sec/mm²). The corresponding nCBV and ADC maps for the newly visible, entirely enhancing lesions were calculated after the completion of CCRT with TMZ. Two observers independently measured the histogram parameters of the nCBV and ADC maps. The histogram parameters between the true progression group (n = 10) and the pseudoprogression group (n = 10) were compared by use of an unpaired Student's t test and subsequent multivariable stepwise logistic regression analysis to determine the best predictors for the differential diagnosis between the two groups. Receiver operating characteristic analysis was employed to determine the best cutoff values for the histogram parameters that proved to be significant predictors for differentiating true progression from pseudoprogression. Intraclass correlation coefficient was used to determine the level of inter-observer reliability for the histogram parameters. The 5th percentile value (C5) of the cumulative ADC histograms was a significant predictor for the differential diagnosis between true progression and pseudoprogression (p = 0.044 for observer 1; p = 0.011 for observer 2). Optimal cutoff values of 892 × 10⁻⁶ mm²/sec for observer 1 and 907 × 10⁻⁶ mm²/sec for observer 2 could help differentiate between the two groups with a sensitivity of 90% and 80%, respectively, a specificity of 90% and 80%, respectively, and an area under the curve of 0.880 and 0.840, respectively. There was no other significant differentiating parameter on the nCBV histograms. Inter-observer reliability was excellent or good for all histogram parameters (intraclass correlation coefficient range: 0.70-0.99). The C5 of the cumulative ADC histogram can be a promising parameter for the differentiation of true progression from pseudoprogression of newly visible, entirely enhancing lesions after CCRT with TMZ for glioblastomas.
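A hedged sketch of the C5 analysis: the 5th percentile of each lesion's ADC distribution is extracted and thresholded, with performance summarized by ROC analysis. The voxel data below are synthetic stand-ins, and scikit-learn's ROC utilities are just one possible implementation.

```python
# Hedged sketch: 5th-percentile (C5) ADC feature per lesion plus ROC analysis,
# with synthetic data standing in for the patients' ADC maps.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)

# Synthetic voxel-wise ADC values (x 10^-6 mm^2/s) for 10 true-progression and
# 10 pseudoprogression lesions; true progression assumed to have lower ADC.
true_prog = [rng.normal(1000, 150, 500) for _ in range(10)]
pseudo    = [rng.normal(1200, 150, 500) for _ in range(10)]

c5 = np.array([np.percentile(v, 5) for v in true_prog + pseudo])
labels = np.array([1] * 10 + [0] * 10)        # 1 = true progression

auc = roc_auc_score(labels, -c5)              # lower C5 -> more likely progression
fpr, tpr, thr = roc_curve(labels, -c5)
best = np.argmax(tpr - fpr)                   # Youden-style cutoff choice
print("AUC:", round(auc, 3), "| cutoff C5 ~", round(-thr[best], 1))
```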
Song, Yong Sub; Park, Chul-Kee; Yi, Kyung Sik; Lee, Woong Jae; Yun, Tae Jin; Kim, Tae Min; Lee, Se-Hoon; Kim, Ji-Hoon; Sohn, Chul-Ho; Park, Sung-Hye; Kim, Il Han; Jahng, Geon-Ho; Chang, Kee-Hyun
2013-01-01
Objective: The purpose of this study was to differentiate true progression from pseudoprogression of glioblastomas treated with concurrent chemoradiotherapy (CCRT) with temozolomide (TMZ) by using histogram analysis of apparent diffusion coefficient (ADC) and normalized cerebral blood volume (nCBV) maps. Materials and Methods: Twenty patients with histopathologically proven glioblastoma who had received CCRT with TMZ underwent perfusion-weighted imaging and diffusion-weighted imaging (b = 0, 1000 sec/mm²). The corresponding nCBV and ADC maps for the newly visible, entirely enhancing lesions were calculated after the completion of CCRT with TMZ. Two observers independently measured the histogram parameters of the nCBV and ADC maps. The histogram parameters between the true progression group (n = 10) and the pseudoprogression group (n = 10) were compared by use of an unpaired Student's t test and subsequent multivariable stepwise logistic regression analysis to determine the best predictors for the differential diagnosis between the two groups. Receiver operating characteristic analysis was employed to determine the best cutoff values for the histogram parameters that proved to be significant predictors for differentiating true progression from pseudoprogression. Intraclass correlation coefficient was used to determine the level of inter-observer reliability for the histogram parameters. Results: The 5th percentile value (C5) of the cumulative ADC histograms was a significant predictor for the differential diagnosis between true progression and pseudoprogression (p = 0.044 for observer 1; p = 0.011 for observer 2). Optimal cutoff values of 892 × 10⁻⁶ mm²/sec for observer 1 and 907 × 10⁻⁶ mm²/sec for observer 2 could help differentiate between the two groups with a sensitivity of 90% and 80%, respectively, a specificity of 90% and 80%, respectively, and an area under the curve of 0.880 and 0.840, respectively. There was no other significant differentiating parameter on the nCBV histograms. Inter-observer reliability was excellent or good for all histogram parameters (intraclass correlation coefficient range: 0.70-0.99). Conclusion: The C5 of the cumulative ADC histogram can be a promising parameter for the differentiation of true progression from pseudoprogression of newly visible, entirely enhancing lesions after CCRT with TMZ for glioblastomas. PMID:23901325
2016-01-27
bias of the estimator U, bias(U), the difference between this estimator's expected value and the true value of the parameter being estimated, i.e. bias(U) = E(U − θ) = E(U) − θ (9). Based on the above definition, an unbiased estimator is one whose expected value is equal to the true value being...equal to 0.94 (p-value < 0.05), if we consider the pure ER network model as our baseline, and 0.31 (p-value < 0.05), if we control for the home
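To illustrate the reconstructed definition numerically (a generic example, not taken from the source document), the snippet below estimates bias(U) = E(U) − θ by Monte Carlo for the two common variance estimators.

```python
# Hedged sketch: empirical check of bias(U) = E(U) - theta for variance estimators.
import numpy as np

rng = np.random.default_rng(4)
theta = 4.0                                   # true variance
samples = rng.normal(0.0, np.sqrt(theta), size=(100_000, 10))

u_biased = samples.var(axis=1, ddof=0)        # divide by n
u_unbiased = samples.var(axis=1, ddof=1)      # divide by n - 1

print("bias of 1/n estimator:    ", u_biased.mean() - theta)    # ~ -theta/n = -0.4
print("bias of 1/(n-1) estimator:", u_unbiased.mean() - theta)  # ~ 0
```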
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach for the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes a Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated parameter values. Numerical simulations are presented to substantiate the analytical findings.
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question of how accurately each input parameter needs to be determined in order to obtain accurate organ dose results.
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
NASA Astrophysics Data System (ADS)
Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.
2005-12-01
This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
Regression dilution in the proportional hazards model.
Hughes, M D
1993-12-01
The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship increases, the adjustment required increases and when there is no censorship is markedly higher than 1 + lambda and depends also on the true risk relationship.
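The 1 + λ adjustment can be demonstrated numerically. The sketch below deliberately uses a linear-regression analogue of the attenuation rather than the proportional hazards model itself (a swapped-in, simplified setting): the naive slope fitted to error-prone covariates is attenuated by roughly 1/(1 + λ), and multiplying by 1 + λ approximately recovers the true coefficient.

```python
# Hedged sketch: regression-dilution attenuation and the 1 + lambda correction,
# shown for a linear-regression analogue of the survival setting in the abstract.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
beta_true = 0.5
var_between, var_within = 1.0, 0.5            # population vs within-person variance
lam = var_within / var_between                # lambda in the abstract's notation

x_true = rng.normal(0.0, np.sqrt(var_between), n)         # underlying covariate levels
x_obs = x_true + rng.normal(0.0, np.sqrt(var_within), n)  # observed, with error
y = beta_true * x_true + rng.normal(0.0, 1.0, n)

beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)       # attenuated slope
print("naive:", beta_naive, "| adjusted:", beta_naive * (1 + lam), "| true:", beta_true)
```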
Disentangling Disadvantage: Can We Distinguish Good Teaching from Classroom Composition?
Zamarro, Gema; Engberg, John; Saavedra, Juan Esteban; Steele, Jennifer
This paper investigates the use of teacher value-added estimates to assess the distribution of effective teaching across students of varying socioeconomic disadvantage in the presence of classroom composition effects. We examine, via simulations, how accurately commonly used teacher value-added estimators recover the rank correlation between true and estimated teacher effects and a parameter representing the distribution of effective teaching. We consider various scenarios of teacher assignment, within-teacher variability in classroom composition, importance of classroom composition effects, and presence of student unobserved heterogeneity. No single model recovers unbiased estimates of the distribution parameter in all the scenarios we consider. Models that rank teacher effectiveness most accurately do not necessarily recover distribution parameter estimates with less bias. Since true teacher sorting in real data is seldom known, we recommend that analysts incorporate contextual information into their decisions about model choice, and we offer some guidance on how to do so.
The Addition of Enhanced Capabilities to NATO GMTIF STANAG 4607 to Support RADARSAT-2 GMTI Data
2007-12-01
However, the cost is a loss in the accuracy of the position specification and its dependence on the particular ellipsoid and/or geoid models used in...platform provides these parameters. Table B-3, Reference Coordinate Systems (coordinate system = value): Unidentified = 0; GEI (Geocentric Equatorial Inertial, also known as True Equator and True Equinox of Date, True of Date (TOD), ECI, or GCI) = 1; J2000 (Geocentric Equatorial Inertial for epoch J2000.0)
Nam, J G; Kang, K M; Choi, S H; Lim, W H; Yoo, R-E; Kim, J-H; Yun, T J; Sohn, C-H
2017-12-01
Glioblastoma is the most common primary brain malignancy, and differentiation of true progression from pseudoprogression is clinically important. Our purpose was to compare the diagnostic performance of dynamic contrast-enhanced pharmacokinetic parameters using the fixed T1 and the measured T1 in differentiating true progression from pseudoprogression of glioblastoma after chemoradiation with temozolomide. This retrospective study included 37 patients with histopathologically confirmed glioblastoma with new enhancing lesions after temozolomide chemoradiation, defined as true progression (n = 15) or pseudoprogression (n = 22). Dynamic contrast-enhanced pharmacokinetic parameters, including the volume transfer constant, the rate transfer constant, the blood plasma volume per unit volume, and the extravascular extracellular space per unit volume, were calculated by using both the fixed T1 of 1000 ms and the measured T1 obtained with the multiple flip-angle method. Intra- and interobserver reproducibility was assessed by using the intraclass correlation coefficient. Dynamic contrast-enhanced pharmacokinetic parameters were compared between the 2 groups by using univariate and multivariate analysis. The diagnostic performance was evaluated by receiver operating characteristic analysis and leave-one-out cross validation. The intraclass correlation coefficients of all the parameters from both T1 values were fair to excellent (0.689-0.999). The volume transfer constant and rate transfer constant from the fixed T1 were significantly higher in patients with true progression (P = .048 and .010, respectively). Multivariate analysis revealed that the rate transfer constant from the fixed T1 was the only independent variable (OR, 1.77 × 10⁵) and showed substantial diagnostic power on receiver operating characteristic analysis (area under the curve, 0.752; P = .002). The sensitivity and specificity on leave-one-out cross validation were 73.3% (11/15) and 59.1% (13/22), respectively. The dynamic contrast-enhanced parameter of the rate transfer constant from the fixed T1 acted as a preferable marker to differentiate true progression from pseudoprogression. © 2017 by American Journal of Neuroradiology.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts
2015-07-01
uncertainty analyses throughout the lifecycle of planning, designing, and operating of Civil Works flood risk management projects as described in...value 95% of the time. In the frequentist approach to PE, model parameters are regarded as having true values, and their estimate is based on the...in catchment models. 1. Evaluating parameter uncertainty. Water Resources Research 19(5):1151–1172. Lee, P. M. 2012. Bayesian statistics: An
Bias in error estimation when using cross-validation for model selection.
Varma, Sudhir; Simon, Richard
2006-02-23
Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid classifier with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
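A compact sketch of the nested procedure the authors recommend, using scikit-learn as one possible implementation: an inner cross-validation tunes the SVM parameters and an outer loop estimates the error of the entire tuning-plus-fitting pipeline. On the random "null" features generated here, the inner score tends to look optimistic while the nested score should hover near chance.

```python
# Hedged sketch: nested cross-validation with parameter tuning in the inner loop,
# illustrating the bias that arises when only the inner CV score is reported.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 500))                  # "null" data: no real class signal
y = np.array([0] * 30 + [1] * 30)

param_grid = {"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]}
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

tuner = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=inner)
tuner.fit(X, y)
print("inner-CV (optimistically biased) accuracy:", tuner.best_score_)

nested = cross_val_score(GridSearchCV(SVC(kernel="rbf"), param_grid, cv=inner),
                         X, y, cv=outer)
print("nested-CV (nearly unbiased) accuracy:", nested.mean())
```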
Theoretical prediction of Grüneisen parameter for SiO₂·TiO₂ bulk metallic glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Chandra K.; Pandey, Brijesh K., E-mail: bkpmmmec11@gmail.com; Pandey, Anjani K.
2016-05-23
The Grüneisen parameter (γ) is very important for deciding the limitations on the prediction of thermoelastic properties of bulk metallic glasses. It can be defined in terms of microscopic and macroscopic parameters of the material, in which the former is based on the vibrational frequencies of atoms in the material while the latter is closely related to its thermodynamic properties. Different formulations and equations of state have been used by pioneering researchers in this field to predict the true value of the Grüneisen parameter for BMGs, but for SiO₂·TiO₂ very little information has been available until now. In the present work we have tested the validity of two different isothermal EOSs, viz. the Poirier-Tarantola EOS and the usual Tait EOS, to predict the true value of the Grüneisen parameter for SiO₂·TiO₂ as a function of compression. Using different thermodynamic limitations related to the material constraints and analyzing the obtained results, it is concluded that the Poirier-Tarantola EOS gives better numerical values of the Grüneisen parameter (γ) for SiO₂·TiO₂ BMG.
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A
2012-02-01
The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling and on response from genetic selection when the true environmental covariance deviates from zero were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while genetic variance and heritability of the continuous trait together with were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values of the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations, when data were subject to culling. © 2011 Blackwell Verlag GmbH.
Estimating the Proportion of True Null Hypotheses Using the Pattern of Observed p-values
Tong, Tiejun; Feng, Zeny; Hilton, Julia S.; Zhao, Hongyu
2013-01-01
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 in the literature are motivated from the independence assumption of test statistics, which is often not true in reality. Simulations indicate that most existing estimators in the presence of the dependence among test statistics can be poor, mainly due to the increase of variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 by incorporating the distribution pattern of the observed p-values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate for the proportion of true-null p-values in (λ, 1] over the whole range [0, 1] instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance. PMID:24078762
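A small sketch in the spirit of this family of estimators (not the authors' exact method): the proportion of p-values above λ is computed over a grid, yielding a Storey-type single-λ estimate and a linear fit of the tail-proportion pattern whose slope estimates π0. The simulated p-value mixture and fitting details are assumptions for illustration.

```python
# Hedged sketch: estimating the proportion of true nulls (pi0) from the pattern of
# observed p-values -- a Storey-type estimate and a linear fit over a lambda grid.
import numpy as np

rng = np.random.default_rng(7)
m, pi0_true = 10_000, 0.8
n_null = int(m * pi0_true)
# Mixture of p-values: true nulls are Uniform(0,1); alternatives concentrate near 0.
p = np.concatenate([rng.uniform(size=n_null),
                    rng.beta(0.2, 5.0, size=m - n_null)])

lam_grid = np.linspace(0.2, 0.9, 15)
tail_prop = np.array([(p > lam).mean() for lam in lam_grid])

# Storey-type estimate at a single lambda near 0.5
i0 = np.argmin(np.abs(lam_grid - 0.5))
pi0_storey = tail_prop[i0] / (1 - lam_grid[i0])

# Linear fit through the origin of tail proportion vs (1 - lambda): slope ~ pi0
slope = np.sum(tail_prop * (1 - lam_grid)) / np.sum((1 - lam_grid) ** 2)
print("true pi0:", pi0_true, "| Storey:", round(pi0_storey, 3),
      "| linear fit:", round(slope, 3))
```

Both estimates overshoot π0 slightly because a few alternative p-values fall above λ, which is the usual conservative behavior of such estimators.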
Estimating the Proportion of True Null Hypotheses Using the Pattern of Observed p-values.
Tong, Tiejun; Feng, Zeny; Hilton, Julia S; Zhao, Hongyu
2013-01-01
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 in the literature are motivated from the independence assumption of test statistics, which is often not true in reality. Simulations indicate that most existing estimators in the presence of the dependence among test statistics can be poor, mainly due to the increase of variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 by incorporating the distribution pattern of the observed p-values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate for the proportion of true-null p-values in (λ, 1] over the whole range [0, 1] instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance.
Jung, Su Jin
2016-01-01
Purpose: We investigated whether C-reactive protein (CRP) levels, urine protein-creatinine ratio (uProt/Cr), and urine electrolytes can be useful for discriminating acute pyelonephritis (APN) from other febrile illnesses or the presence of a cortical defect on 99mTc dimercaptosuccinic acid (DMSA) scanning (true APN) from its absence in infants with febrile urinary tract infection (UTI). Materials and Methods: We examined 150 infants experiencing their first febrile UTI and 100 controls with other febrile illnesses consecutively admitted to our hospital from January 2010 to December 2012. Blood (CRP, electrolytes, Cr) and urine tests [uProt/Cr, electrolytes, and sodium-potassium ratio (uNa/K)] were performed upon admission. All infants with UTI underwent DMSA scans during admission. All data were compared between infants with UTI and controls and between infants with or without a cortical defect on DMSA scans. Using multiple logistic regression analysis, the ability of the parameters to predict true APN was analyzed. Results: CRP levels and uProt/Cr were significantly higher in infants with true APN than in controls. uNa levels and uNa/K were significantly lower in infants with true APN than in controls. CRP levels and uNa/K were relevant factors for predicting true APN. The method using CRP levels, uProt/Cr, uNa levels, and uNa/K had a sensitivity of 94%, specificity of 65%, positive predictive value of 60%, and negative predictive value of 95% for predicting true APN. Conclusion: We conclude that these parameters are useful for discriminating APN from other febrile illnesses or discriminating true APN in infants with febrile UTI. PMID:26632389
Regression without truth with Markov chain Monte-Carlo
NASA Astrophysics Data System (ADS)
Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Å piclin, Žiga
2017-03-01
Regression without truth (RWT) is a statistical technique for estimating error model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which is otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution with a probability density function known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables the estimation of the joint posterior distribution of the error model parameters, straightforward quantification of the uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand, and generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.
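The generic MCMC ingredient used here, drawing samples from a density known only up to a normalizing constant, can be sketched with a random-walk Metropolis sampler on a toy unnormalized posterior; the target below is a placeholder, not the RWT error model.

```python
# Hedged sketch: random-walk Metropolis sampling from an unnormalized density,
# the generic MCMC building block used to augment regression-without-truth.
import numpy as np

rng = np.random.default_rng(8)

def log_unnormalized_posterior(theta):
    # Toy target: Gaussian likelihood around 0.6 with a flat prior on [0, 1]
    if not 0.0 <= theta <= 1.0:
        return -np.inf
    return -0.5 * ((theta - 0.6) / 0.05) ** 2

theta, samples = 0.5, []
log_p = log_unnormalized_posterior(theta)
for _ in range(20_000):
    prop = theta + 0.05 * rng.normal()
    log_p_prop = log_unnormalized_posterior(prop)
    if np.log(rng.uniform()) < log_p_prop - log_p:     # accept/reject step
        theta, log_p = prop, log_p_prop
    samples.append(theta)

samples = np.array(samples[5_000:])                    # discard burn-in
print("posterior mean:", samples.mean(),
      "| 90% credible interval:", np.percentile(samples, [5, 95]))
```

Credible intervals are simply read off the retained samples, which is the mechanism behind the reference-free uncertainty quantification described above.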
Carvalho, Luis Alberto
2005-02-01
Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementary value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fit with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set. The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, the results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation of, and add value to, the classification methods already available. We also discuss briefly certain special properties of Zernike polynomials that we think make them suitable as NN inputs for this type of application.
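The three performance measures quoted above follow directly from confusion-matrix counts; the sketch below computes them per class (one versus rest) from synthetic labels, keeping the abstract's definition of "precision" as (TP + TN)/total. The labels and error rate are illustrative assumptions.

```python
# Hedged sketch: sensitivity, specificity, and precision as defined in the abstract,
# computed per class (one-vs-rest) from synthetic true/predicted labels.
import numpy as np

rng = np.random.default_rng(9)
classes = ["normal", "WTR astig.", "ATR astig.", "keratoconus", "post-LASIK"]
y_true = rng.integers(0, 5, size=40)
# Predictions agree with truth ~80% of the time, otherwise a random class
y_pred = np.where(rng.uniform(size=40) < 0.8, y_true, rng.integers(0, 5, size=40))

for c, name in enumerate(classes):
    tp = np.sum((y_pred == c) & (y_true == c))
    tn = np.sum((y_pred != c) & (y_true != c))
    fp = np.sum((y_pred == c) & (y_true != c))
    fn = np.sum((y_pred != c) & (y_true == c))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    prec = (tp + tn) / len(y_true)   # "precision" as defined in the abstract
    print(f"{name:12s} sensitivity={sens:.2f} specificity={spec:.2f} precision={prec:.2f}")
```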
A proof of Wright's conjecture
NASA Astrophysics Data System (ADS)
van den Berg, Jan Bouwe; Jaquette, Jonathan
2018-06-01
Wright's conjecture states that the origin is the global attractor for the delay differential equation y′(t) = −αy(t − 1)[1 + y(t)] for all α ∈ (0, π/2] when y(t) > −1. This has been proven to be true for a subset of parameter values α. We extend the result to the full parameter range α ∈ (0, π/2], and thus prove Wright's conjecture to be true. Our approach relies on a careful investigation of the neighborhood of the Hopf bifurcation occurring at α = π/2. This analysis fills the gap left by complementary work on Wright's conjecture, which covers parameter values further away from the bifurcation point. Furthermore, we show that the branch of (slowly oscillating) periodic orbits originating from this Hopf bifurcation does not have any subsequent bifurcations (and in particular no folds) for α ∈ (π/2, π/2 + 6.830 × 10⁻³]. When combined with other results, this proves that the branch of slowly oscillating solutions that originates from the Hopf bifurcation at α = π/2 is globally parametrized by α > π/2.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
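A bare-bones harmony search loop is sketched below, minimizing a placeholder objective over two bounded parameters standing in for isolation-system properties. The harmony memory size, HMCR, PAR, and bandwidth values are conventional assumptions, and the objective is a stand-in for the time-history response analysis run in Matlab/Simulink by the authors.

```python
# Hedged sketch: basic harmony search (HS) minimizing a placeholder objective over
# two bounded "isolation system" parameters (e.g., period and damping ratio).
import numpy as np

rng = np.random.default_rng(10)

def objective(x):
    # Placeholder for a peak-acceleration response from a time-history analysis
    return (x[0] - 3.0) ** 2 + 10 * (x[1] - 0.15) ** 2

bounds = np.array([[2.0, 5.0], [0.05, 0.35]])   # parameter ranges (assumed)
hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.05, 2000

# Initialize harmony memory with random vectors and their objective values
memory = rng.uniform(bounds[:, 0], bounds[:, 1], size=(hms, 2))
scores = np.array([objective(h) for h in memory])

for _ in range(iters):
    new = np.empty(2)
    for j in range(2):
        if rng.uniform() < hmcr:                      # pick from memory...
            new[j] = memory[rng.integers(hms), j]
            if rng.uniform() < par:                   # ...with pitch adjustment
                new[j] += bw * (bounds[j, 1] - bounds[j, 0]) * rng.uniform(-1, 1)
        else:                                         # or sample the range anew
            new[j] = rng.uniform(bounds[j, 0], bounds[j, 1])
        new[j] = np.clip(new[j], bounds[j, 0], bounds[j, 1])
    f = objective(new)
    worst = np.argmax(scores)
    if f < scores[worst]:                             # replace the worst harmony
        memory[worst], scores[worst] = new, f

print("best parameters:", memory[np.argmin(scores)], "| objective:", scores.min())
```

In the study, a displacement limit would additionally be enforced, for example by penalizing candidate harmonies whose simulated peak isolator displacement exceeds the allowed value.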
Parrish, Rudolph S.; Smith, Charles N.
1990-01-01
A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
Global SWOT Data Assimilation of River Hydrodynamic Model; the Twin Simulation Test of CaMa-Flood
NASA Astrophysics Data System (ADS)
Ikeshima, D.; Yamazaki, D.; Kanae, S.
2016-12-01
CaMa-Flood is a global-scale model for simulating hydrodynamics in large rivers. It simulates river hydrodynamic variables such as river discharge, flooded area, and water depth from runoff computed by a land surface model. Recently, many improvements to its parameters and terrestrial data have been under way to enhance how faithfully it reproduces natural phenomena. However, errors remain between nature and the simulated results due to uncertainties in each model. SWOT (Surface Water and Ocean Topography), a satellite scheduled for launch in 2021, will measure open-water surface elevation. SWOT observations can be used to calibrate the hydrodynamic model for river flow forecasting and are expected to improve model accuracy. Combining observation data with a model in this way is called data assimilation. In this research, we developed a data-assimilated river flow simulation system at the global scale, using CaMa-Flood as the river hydrodynamics model and simulated SWOT data as the observations. Generally, in data assimilation, calibrating the "model value" with the "observation value" produces the "assimilated value". However, observed data from the SWOT satellite will not be available until its launch in 2021; instead, we simulated the SWOT observations using CaMa-Flood. Putting a "pure input" into CaMa-Flood produces the "true water storage", and extracting the actual daily swath of SWOT from the "true water storage" gives the simulated observations. For the "model value", we made a "disturbed water storage" by putting a noise-disturbed input into CaMa-Flood. Since both the "model value" and the "observation value" are made by the same model, we named this a twin simulation. In the twin simulation, the simulated observations of the "true water storage" are combined with the "disturbed water storage" to make the "assimilated value". As the data assimilation method, we used the ensemble Kalman filter. If the "assimilated value" is closer to the "true water storage" than the "disturbed water storage", the data assimilation can be judged effective. Also, by changing the input disturbance of the "disturbed water storage", the acceptable level of uncertainty in the input may be discussed.
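A compact sketch of one ensemble Kalman filter analysis step with perturbed observations, the assimilation scheme named above, on a toy state vector with synthetic SWOT-like water-level observations; the dimensions, observation operator, and error levels are illustrative assumptions rather than CaMa-Flood quantities.

```python
# Hedged sketch: one ensemble Kalman filter (EnKF) analysis step with perturbed
# observations, applied to a toy state with synthetic SWOT-like observations.
import numpy as np

rng = np.random.default_rng(11)
n_state, n_obs, n_ens = 10, 4, 50

truth = rng.normal(size=n_state)                       # "true water storage"
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), [0, 3, 5, 8]] = 1.0                # observe 4 of the 10 states
obs_err = 0.1
y = H @ truth + obs_err * rng.normal(size=n_obs)       # simulated SWOT observation

# "Disturbed" forecast ensemble: truth plus a systematic bias (standing in for
# errors in the runoff input) plus ensemble spread
ens = (truth + 1.0)[:, None] + 0.5 * rng.normal(size=(n_state, n_ens))

anom = ens - ens.mean(axis=1, keepdims=True)
P = anom @ anom.T / (n_ens - 1)                        # ensemble covariance
R = obs_err**2 * np.eye(n_obs)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain

y_pert = y[:, None] + obs_err * rng.normal(size=(n_obs, n_ens))
analysis = ens + K @ (y_pert - H @ ens)                # update each member

print("forecast mean error :", np.linalg.norm(ens.mean(axis=1) - truth))
print("analysis mean error :", np.linalg.norm(analysis.mean(axis=1) - truth))
```

The analysis ensemble mean moving closer to the truth than the disturbed forecast is exactly the criterion the twin experiment uses to judge the assimilation effective.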
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs, based on simulations and estimations, was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly described in the model.
NASA Technical Reports Server (NTRS)
Waldron, W. L.
1985-01-01
The observed X-ray emission from early-type stars can be explained by the recombination stellar wind model (or base coronal model). The model predicts that the true X-ray luminosity from the base coronal zone can be 10 to 1000 times greater than the observed X-ray luminosity. From the models, scaling laws were found for the true and observed X-ray luminosities. These scaling laws predict that the ratio of the observed X-ray luminosity to the bolometric luminosity is functionally dependent on several stellar parameters. When applied to several other O and B stars, it is found that the values of the predicted ratio agree very well with the observed values.
Kielar, Kayla N; Mok, Ed; Hsu, Annie; Wang, Lei; Luxton, Gary
2012-10-01
The dosimetric leaf gap (DLG) in the Varian Eclipse treatment planning system is determined during commissioning and is used to model the effect of the rounded leaf-end of the multileaf collimator (MLC). This parameter attempts to model the physical difference between the radiation and light field and account for inherent leakage between leaf tips. With the increased use of single-fraction, high-dose treatments requiring larger numbers of monitor units comes heightened concern about the accuracy of leakage calculations, as leakage accounts for much of the patient dose. This study serves to verify the dosimetric accuracy of the algorithm used to model the rounded leaf effect for the TrueBeam STx, and describes a methodology for determining best-practice parameter values, given the novel capabilities of the linear accelerator such as flattening filter free (FFF) treatments and a high definition MLC (HDMLC). During commissioning, the nominal MLC position was verified and the DLG parameter was determined using MLC-defined field sizes and moving gap tests, as is common in clinical testing. Treatment plans were created, and the DLG was optimized to achieve less than 1% difference between measured and calculated dose. The DLG value found was tested on treatment plans for all energies (6 MV, 10 MV, 15 MV, 6 MV FFF, 10 MV FFF) and modalities (3D conventional, IMRT, conformal arc, VMAT) available on the TrueBeam STx. The DLG parameter found during the initial MLC testing did not match the leaf gap modeling parameter that provided the most accurate dose delivery in clinical treatment plans. Using the physical leaf gap size as the DLG for the HDMLC can lead to 5% differences between measured and calculated doses. Separate optimization of the DLG parameter using end-to-end tests must be performed to ensure dosimetric accuracy in the modeling of the rounded leaf ends for the Eclipse treatment planning system. The difference in leaf gap modeling versus physical leaf gap dimensions is more pronounced in the more recent versions of Eclipse for both the HDMLC and the Millennium MLC. Once properly commissioned and tested using a methodology based on treatment plan verification, Eclipse is able to accurately model the radiation dose delivered for SBRT treatments using the TrueBeam STx.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.
2012-04-01
This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the variance of three model outputs: the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed. In this study, first order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This situation is due to the fact that the actual value taken by FP5 does not have much influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi could be known, but no information was available on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remains unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
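A self-contained sketch of the Saltelli/Jansen Monte Carlo estimators behind first-order and total Sobol' effects may help; the toy function standing in for the runoff model and the sampling ranges of the precipitation factors are assumptions for illustration, not the model used in the study.

```python
import numpy as np

def sobol_indices(model, d, n=20000, rng=None):
    """Jansen estimators of first-order (S) and total (ST) Sobol' indices."""
    rng = rng or np.random.default_rng(0)
    A = rng.uniform(0.5, 2.0, size=(n, d))   # e.g. precipitation factors FP1..FPd (assumed range)
    B = rng.uniform(0.5, 2.0, size=(n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]   # A with column i taken from B
        fABi = model(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var          # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # total effect
    return S, ST

# Toy stand-in for the model's error measure: zone 1 dominates, zone 5 barely matters.
toy = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.05 * X[:, 4] + 0.5 * X[:, 0] * X[:, 1]
S, ST = sobol_indices(toy, d=5)
print(np.round(S, 2), np.round(ST, 2))
```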
A qubit coupled with confined phonons: The interplay between true and fake decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pouthier, Vincent
2013-08-07
The decoherence of a qubit coupled with the phonons of a finite-size lattice is investigated. The confined phonons no longer behave as a reservoir. They remain sensitive to the qubit so that the origin of the decoherence is twofold. First, a qubit-phonon entanglement yields an incomplete true decoherence. Second, the qubit renormalizes the phonon frequency resulting in fake decoherence when a thermal average is performed. To account for the initial thermalization of the lattice, the quantum Langevin theory is applied so that the phonons are viewed as an open system coupled with a thermal bath of harmonic oscillators. Consequently, it is shown that the finite lifetime of the phonons does not modify fake decoherence but strongly affects true decoherence. Depending on the values of the model parameters, the interplay between fake and true decoherence yields a very rich dynamics with various regimes.
Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing
2016-12-01
Effective management and utilization of the resources and ecological environment of coastal wetlands require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by low-altitude UAV in the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated with an NDVI-based model, and the accuracy was tested against visible images as references. Results showed that vegetation cover of S. alterniflora in the image area was mainly at medium-high (40%-60%) and high (60%-80%) levels. The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, while the coefficient of determination R2 was 0.92, indicating good consistency between the estimated and true values.
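The abstract does not spell out the NDVI-to-cover mapping; a common choice is the pixel-dichotomy (dimidiate pixel) model, sketched below with assumed soil and full-vegetation NDVI endmembers and fabricated reference values.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from multispectral bands."""
    return (nir - red) / (nir + red + 1e-12)

def fractional_cover(nir, red, ndvi_soil=0.05, ndvi_veg=0.85):
    """Pixel-dichotomy estimate of fractional vegetation cover in [0, 1]."""
    fvc = (ndvi(nir, red) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Accuracy check against reference ("true") cover values, as in the study (numbers invented).
est = np.array([0.55, 0.62, 0.71, 0.48])
ref = np.array([0.50, 0.66, 0.74, 0.43])
rmse = np.sqrt(np.mean((est - ref) ** 2))
r2 = np.corrcoef(est, ref)[0, 1] ** 2
print(rmse, r2)
```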
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
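As a hedged numeric illustration of the power concept described above (not taken from the paper), the sketch below computes the approximate power of a two-sided one-sample z-test for an assumed effect and several sample sizes.

```python
from scipy.stats import norm

def z_test_power(mu0, mu1, sigma, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = abs(mu1 - mu0) / (sigma / n ** 0.5)   # standardized true effect
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

# Power grows with sample size for a fixed true effect (illustrative numbers).
for n in (10, 30, 100):
    print(n, round(z_test_power(mu0=100, mu1=105, sigma=15, n=n), 3))
```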
Hardiansyah, Deni; Attarwala, Ali Asgar; Kletting, Peter; Mottaghy, Felix M; Glatting, Gerhard
2017-10-01
To investigate the accuracy of predicted time-integrated activity coefficients (TIACs) in peptide-receptor radionuclide therapy (PRRT) using simulated dynamic PET data and a physiologically based pharmacokinetic (PBPK) model. PBPK parameters were estimated using biokinetic data of 15 patients after injection of (152±15) MBq of 111In-DTPAOC (total peptide amount (5.78±0.25) nmol). True mathematical phantoms of patients (MPPs) were the PBPK model with the estimated parameters. Dynamic PET measurements were simulated as being done after bolus injection of 150 MBq of 68Ga-DOTATATE using the true MPPs. Dynamic PET scans around 35 min p.i. (P1), 4 h p.i. (P2) and the combination of P1 and P2 (P3) were simulated. Each measurement was simulated with four frames of 5 min each and 2 bed positions. PBPK parameters were fitted to the PET data to derive the PET-predicted MPPs. Therapy was simulated assuming an infusion of 5.1 GBq of 90Y-DOTATATE over 30 min in both true and PET-predicted MPPs. TIACs of the simulated therapy were calculated for the true MPPs (true TIACs) and the predicted MPPs (predicted TIACs), followed by the calculation of variabilities v. For P1 and P2, the population variabilities of kidneys, liver and spleen were acceptable (v < 10%). For the tumours and the remainders, the values were large (up to 25%). For P3, population variabilities for all organs including the remainder further improved, except that of the tumour (v > 10%). Treatment planning of PRRT based on dynamic PET data seems possible for the kidneys, liver and spleen using a PBPK model and patient-specific information. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviations of the parameter estimates from their true values, and a prediction error count within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we use a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
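The REFACM and DF variants studied in the paper are not reproduced here; as a hedged baseline, the sketch below implements standard recursive least squares with an exponential forgetting factor tracking a slowly drifting parameter vector, and reports the covariance-matrix eigenvalues mentioned above. All numbers are illustrative.

```python
import numpy as np

def rls_forgetting(Phi, y, lam=0.97, delta=100.0):
    """Standard recursive least squares with exponential forgetting factor lam (0 < lam <= 1)."""
    n, d = Phi.shape
    theta = np.zeros(d)
    P = delta * np.eye(d)            # covariance matrix of the estimates
    history = np.empty((n, d))
    for t in range(n):
        phi = Phi[t]
        k = P @ phi / (lam + phi @ P @ phi)        # gain vector
        theta = theta + k * (y[t] - phi @ theta)   # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam       # forgetting keeps P from shrinking to zero
        history[t] = theta
    return history, P

# Slowly drifting true parameters with a persistently exciting regressor (fabricated data).
rng = np.random.default_rng(0)
n = 500
Phi = rng.normal(size=(n, 2))
true = np.column_stack([np.linspace(1.0, 2.0, n), np.linspace(-1.0, 0.5, n)])
y = np.sum(Phi * true, axis=1) + 0.05 * rng.normal(size=n)
est, P = rls_forgetting(Phi, y)
print(np.linalg.norm(est[-1] - true[-1]), np.linalg.eigvalsh(P))
```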
Evaluation of PeneloPET Simulations of Biograph PET/CT Scanners
NASA Astrophysics Data System (ADS)
Abushab, K. M.; Herraiz, J. L.; Vicente, E.; Cal-González, J.; España, S.; Vaquero, J. J.; Jakoby, B. W.; Udías, J. M.
2016-06-01
Monte Carlo (MC) simulations are widely used in positron emission tomography (PET) for optimizing detector design, acquisition protocols, and evaluating corrections and reconstruction methods. PeneloPET is an MC code for PET simulations, based on PENELOPE, which considers detector geometry, acquisition electronics and materials, and source definitions. While PeneloPET has been successfully employed and validated with small animal PET scanners, it required a proper validation with clinical PET scanners including time-of-flight (TOF) information. For this purpose, we chose the family of Biograph PET/CT scanners: the Biograph True-Point (B-TP), Biograph True-Point with TrueV (B-TPTV) and the Biograph mCT. They have similar block detectors and electronics, but a different number of rings and configuration. Some effective parameters of the simulations, such as the dead-time and the size of the reflectors in the detectors, were adjusted to reproduce the sensitivity and noise equivalent count (NEC) rate of the B-TPTV scanner. These parameters were then used to make predictions of experimental results such as sensitivity, NEC rate, spatial resolution, and scatter fraction (SF) for all the Biograph scanners and some variations of them (energy windows and additional rings of detectors). Predictions agree with the measured values for the three scanners, within 7% (sensitivity and NEC rate) and 5% (SF). The resolution obtained for the B-TPTV is slightly better (10%) than the experimental values. In conclusion, we have shown that PeneloPET is suitable for simulating and investigating clinical systems with good accuracy and short computational time, though some effort in tuning a few parameters of the scanner models may be needed in case the full details of the scanners studied are not available.
Kitamura, Ryunosuke; Inagaki, Tetsuya; Tsuchikawa, Satoru
2016-02-22
The true absorption coefficient (μa) and reduced scattering coefficient (μs′) of the cell wall substance in Douglas fir were determined using time-of-flight near infrared spectroscopy. Samples were saturated with hexane, toluene or quinoline to minimize the multiple reflections of light at the boundary between pores and cell wall substance in wood. μs′ exhibited its minimum value when the wood was saturated with toluene because the refractive index of toluene is close to that of the wood cell wall substance. The calculated optical parameters of the wood cell wall substance were μa = 0.030 mm⁻¹ and μs′ = 18.4 mm⁻¹. Monte Carlo simulations using these values were in good agreement with the measured time-resolved transmittance profiles.
A modified exponential behavioral economic demand model to better describe consumption data.
Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K
2015-12-01
Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
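A hedged sketch of fitting the exponentiated demand equation, in the form it is commonly written, Q = Q0·10^(k·(exp(-alpha·Q0·C) - 1)), to consumption data that include zeros; the data and the fixed k are fabricated for illustration and are not from the experiments above.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponentiated_demand(C, Q0, alpha, k=2.0):
    """Exponentiated demand curve: consumption Q as a function of price C (k fixed here)."""
    return Q0 * 10.0 ** (k * (np.exp(-alpha * Q0 * C) - 1.0))

# Illustrative consumption data, including zero consumption at the highest price.
price = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
consumption = np.array([20.0, 19.0, 17.0, 14.0, 8.0, 3.0, 1.0, 0.0])

# Fit intensity (Q0) and elasticity (alpha); zeros pose no problem in this form.
params, _ = curve_fit(lambda C, Q0, a: exponentiated_demand(C, Q0, a),
                      price, consumption, p0=[20.0, 0.01])
print("Q0 = %.2f, alpha = %.4f" % tuple(params))
```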
Coleman-de Luccia instanton in dRGT massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ying-li; Saito, Ryo; Yeom, Dong-han
2014-02-01
We study the Coleman-de Luccia (CDL) instanton characterizing the tunneling from a false vacuum to the true vacuum in a semi-classical way in dRGT (deRham-Gabadadze-Tolley) massive gravity theory, and evaluate the dependence of the tunneling rate on the model parameters. It is found that provided with the same physical Hubble parameters for the true vacuum H_T and the false vacuum H_F as in General Relativity (GR), the thin-wall approximation method implies the same tunneling rate as GR. However, deviations of tunneling rate from GR arise when one goes beyond the thin-wall approximation and they change monotonically until the Hawking-Moss (HM) case. Moreover, under the thin-wall approximation, the HM process may dominate over the CDL one if the value for the graviton mass is larger than the inverse of the radius of the bubble.
BIASES IN PHYSICAL PARAMETER ESTIMATES THROUGH DIFFERENTIAL LENSING MAGNIFICATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Er Xinzhong; Ge Junqiang; Mao Shude, E-mail: xer@nao.cas.cn
2013-06-20
We study the lensing magnification effect on background galaxies. Differential magnification due to different magnifications of different source regions of a galaxy will change the lensed composite spectra. The derived properties of the background galaxies are therefore biased. For simplicity, we model galaxies as a superposition of an axis-symmetric bulge and a face-on disk in order to study the differential magnification effect on the composite spectra. We find that some properties derived from the spectra (e.g., velocity dispersion, star formation rate, and metallicity) are modified. Depending on the relative positions of the source and the lens, the inferred results can be either over- or underestimates of the true values. In general, for an extended source at strong lensing regions with high magnifications, the inferred physical parameters (e.g., metallicity) can be strongly biased. Therefore, detailed lens modeling is necessary to obtain the true properties of the lensed galaxies.
Hounsfield unit values of retropharyngeal abscess-like lesions seen in Kawasaki disease.
Sasaki, Toru; Miyata, Rie; Hatai, Yoshiho; Makita, Kohzoh; Tsunoda, Koichi
2014-04-01
Retropharyngeal abscess-like lesions are occasionally seen in computed tomography (CT) imaging of patients with Kawasaki disease (KD) and these patients often undergo unnecessary surgery. We could distinguish the lesions from true abscesses by measuring their Hounsfield unit values (HUs). To distinguish the retropharyngeal abscess-like lesions from true abscesses without any surgical procedure. We investigated six cases of KD showing such lesions on CTs, both with and without contrast enhancement (CE). We measured the HUs of those lesions and compared them with those of 10 true abscesses as controls. Abscess-like lesions of KD were well enhanced by CE, whereas abscesses showed virtually no enhancement. The mean HU in the six KD cases was 20.0 ± 4.65 (mean ± SD) on plain CTs and 35.6 ± 4.49 on contrast CTs. In abscesses, it was 30.3 ± 4.42 on plain CTs and 30.3 ± 3.57 on contrast CTs. The difference in HU values [(HU on contrast CT) - (HU on plain CT)] was defined as ΔHU. The mean ΔHU was 15.6 ± 5.36 in the six KD lesions and 0.0 ± 2.93 in abscesses, with statistical significance of p < 0.0001 by Student's t test. Thus, ΔHU value may potentially be a useful parameter for their distinction.
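A minimal sketch of the ΔHU computation and group comparison described above; the Hounsfield unit values below are fabricated for illustration and are not the study data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hounsfield units on plain and contrast-enhanced CT (illustrative values only).
kd_plain = np.array([18, 22, 15, 24, 20, 21])
kd_contrast = np.array([34, 38, 30, 40, 35, 37])
ab_plain = np.array([29, 31, 27, 33, 30, 32, 28, 31, 30, 29])
ab_contrast = np.array([29, 30, 28, 34, 30, 31, 27, 32, 30, 30])

delta_kd = kd_contrast - kd_plain   # Kawasaki lesions enhance with contrast
delta_ab = ab_contrast - ab_plain   # true abscesses barely enhance

t, p = ttest_ind(delta_kd, delta_ab)   # two-sample comparison of the enhancement
print(delta_kd.mean(), delta_ab.mean(), p)
```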
Bayesian performance metrics of binary sensors in homeland security applications
NASA Astrophysics Data System (ADS)
Jannson, Tomasz P.; Forrester, Thomas C.
2008-04-01
Bayesian performance metrics based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value characterize the performance of binary sensors, i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, part of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
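A minimal Bayes'-rule sketch of how the positive predictive value follows from the prior probability, detection probability and false alarm rate; the numbers are assumptions chosen to show the rare-target regime typical of screening.

```python
def positive_predictive_value(prior, p_detect, false_alarm_rate):
    """P(true target | alarm) via Bayes' theorem for a binary sensor."""
    p_alarm = p_detect * prior + false_alarm_rate * (1.0 - prior)
    return p_detect * prior / p_alarm

# With rare targets, even a good sensor yields a modest PPV (illustrative numbers).
print(positive_predictive_value(prior=0.001, p_detect=0.95, false_alarm_rate=0.05))
```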
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
NASA Astrophysics Data System (ADS)
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not assume a constant Z throughout time; instead, the Z values in n consecutive years are assumed constant, and the Z values for different sets of n consecutive years are then estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
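As a hedged baseline (the classical catch-curve regression, not the time-based multi-cohort extension proposed in the paper), the sketch below estimates Z as minus the slope of ln(CPUE) against age for fully recruited ages; the data are simulated for illustration.

```python
import numpy as np

def catch_curve_z(ages, cpue):
    """Classical catch-curve estimate: Z is minus the slope of ln(CPUE) on age."""
    slope, _ = np.polyfit(ages, np.log(cpue), 1)
    return -slope

# Fully recruited ages with a roughly exponential decline (illustrative numbers).
ages = np.arange(3, 11)
true_z = 0.45
noise = np.exp(np.random.default_rng(2).normal(0, 0.1, ages.size))
cpue = 1000.0 * np.exp(-true_z * ages) * noise
print(round(catch_curve_z(ages, cpue), 2), "vs true", true_z)
```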
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Wan, L.; Guo, Z. H.
An isothermal compression experiment on AZ80 magnesium alloy was conducted with a Gleeble thermo-mechanical simulator in order to quantitatively investigate the work hardening (WH), strain rate sensitivity (SRS) and temperature sensitivity (TS) during hot processing of magnesium alloys. The WH, SRS and TS were described by the Zener-Hollomon parameter (Z), which couples the deformation parameters. The relationships between WH rate and true strain as well as true stress were derived from the Kocks-Mecking dislocation model and validated by our measurement data. The slope defined through the linear relationship of WH rate and true stress was only related to the annihilation coefficient Ω. Obvious WH behavior could be exhibited at a higher Z condition. Furthermore, we have identified the correlation between the microstructural evolution, including β-Mg17Al12 precipitation, and the SRS and TS variations. Intensive dynamic recrystallization and homogeneous distribution of β-Mg17Al12 precipitates resulted in a greater SRS coefficient at higher temperature. The deformation heat effect and β-Mg17Al12 precipitate content can be regarded as the major factors determining the TS behavior. At low Z condition, the SRS becomes stronger, in contrast to the variation of TS. The optimum hot processing window was validated based on the established SRS and TS value distribution maps for AZ80 magnesium alloy.
Aerodynamics of the pseudo-glottis.
Kotby, M N; Hegazi, M A; Kamal, I; Gamal El Dien, N; Nassar, J
2009-01-01
The aim of this work is to study the hitherto unclear aerodynamic parameters of the pseudo-glottis following total laryngectomy. These parameters include airflow rate, sub-pseudo-glottic pressure (SubPsG), efficiency and resistance, as well as sound pressure level (SPL). Eighteen male patients who have undergone total laryngectomy, with an age range from 54 to 72 years, were investigated in this study. All tested patients were fluent esophageal 'voice' speakers utilizing tracheo-esophageal prosthesis. The airflow rate, SubPsG and SPL were measured. The results showed that the mean value of the airflow rate was 53 ml/s, the SubPsG pressure was 13 cm H2O, while the SPL was 66 dB. The normative data obtained from the true glottis in healthy age-matched subjects are 89 ml/s, 7.9 cm H2O and 70 dB, respectively. Other aerodynamic indices were calculated and compared to the data obtained from the true glottis. Such a comparison of the pseudo-glottic aerodynamic data to the data of the true glottis gives an insight into the mechanism of action of the pseudo-glottis. The data obtained suggests possible clinical applications in pseudo-voice training. Copyright 2009 S. Karger AG, Basel.
Multiple robustness in factorized likelihood models.
Molina, J; Rotnitzky, A; Sued, M; Robins, J M
2017-09-01
We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.
Giménez, Beatriz; Pradíes, Guillermo; Martínez-Rus, Francisco; Özcan, Mutlu
2015-01-01
To evaluate the accuracy of two digital impression systems based on the same technology but different postprocessing correction modes of customized software, with consideration of several clinical parameters. A maxillary master model with six implants located in the second molar, second premolar, and lateral incisor positions was fitted with six cylindrical scan bodies. Scan bodies were placed at different angulations or depths apical to the gingiva. Two experienced and two inexperienced operators performed scans with either 3D Progress (MHT) or ZFX Intrascan (Zimmer Dental). Five different distances between implants (scan bodies) were measured, yielding five data points per impression and 100 per impression system. Measurements made with a high-accuracy three-dimensional coordinate measuring machine (CMM) of the master model acted as the true values. The values obtained from the digital impressions were subtracted from the CMM values to identify the deviations. The differences between experienced and inexperienced operators and implant angulation and depth were compared statistically. Experience of the operator, implant angulation, and implant depth were not associated with significant differences in deviation from the true values with both 3D Progress and ZFX Intrascan. Accuracy in the first scanned quadrant was significantly better with 3D Progress, but ZFX Intrascan presented better accuracy in the full arch. Neither of the two systems tested would be suitable for digital impression of multiple-implant prostheses. Because of the errors, further development of both systems is required.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
On the Enthalpy and Entropy of Point Defect Formation in Crystals
NASA Astrophysics Data System (ADS)
Kobelev, N. P.; Khonik, V. A.
2018-03-01
A standard way to determine the formation enthalpy H and entropy S of point defect formation in crystals consists in the application of the Arrhenius equation for the defect concentration. In this work, we show that a formal use of this method actually gives the effective (apparent) values of these quantities, which appear to be significantly overestimated. The underlying physical reason lies in temperature-dependent formation enthalpy of the defects, which is controlled by temperature dependence of the elastic moduli. We present an evaluation of the "true" H- and S-values for aluminum, which are derived on the basis of experimental data by taking into account temperature dependence of the formation enthalpy related to temperature dependence of the elastic moduli. The knowledge of the "true" activation parameters is needed for a correct calculation of the defect concentration constituting thus an issue of major importance for different fundamental and application issues of condensed matter physics and chemistry.
NASA Astrophysics Data System (ADS)
Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun
2018-05-01
Using mock spectra based on the Vazdekis/MILES library fitted within the wavelength region 3600-7350 Å, we analyze the bias and scatter on the resulting physical parameters induced by the choice of fitting algorithms and observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes: pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger and metallicities lower than the true values. For reference, at S/N=30, and for the worst case (t = 10⁸ yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm, we find trends similar to pPXF when the input E(B-V)<0.2 mag. However, with larger input E(B-V), the biases of the output parameters do not converge to zero even at the highest S/N and are strongly affected by the shape of the error spectra. This effect is particularly dramatic for the youngest age (t = 10⁸ yr), for which all population parameters can be strongly different from the input values, with significantly underestimated dust extinction and [M/H], and larger ages and M*/Lr. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the Markov chains and annealing loops to the "slow mode". For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".
Accuracy and Reliability Assessment of CT and MR Perfusion Analysis Software Using a Digital Phantom
Christensen, Soren; Sasaki, Makoto; Østergaard, Leif; Shirato, Hiroki; Ogasawara, Kuniaki; Wintermark, Max; Warach, Steven
2013-01-01
Purpose: To design a digital phantom data set for computed tomography (CT) perfusion and perfusion-weighted imaging on the basis of the widely accepted tracer kinetic theory in which the true values of cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer arrival delay are known and to evaluate the accuracy and reliability of postprocessing programs using this digital phantom. Materials and Methods: A phantom data set was created by generating concentration-time curves reflecting true values for CBF (2.5–87.5 mL/100 g per minute), CBV (1.0–5.0 mL/100 g), MTT (3.4–24 seconds), and tracer delays (0–3.0 seconds). These curves were embedded in human brain images. The data were analyzed by using 13 algorithms each for CT and magnetic resonance (MR), including five commercial vendors and five academic programs. Accuracy was assessed by using the Pearson correlation coefficient (r) for true values. Delay-, MTT-, or CBV-dependent errors and correlations between time to maximum of residue function (Tmax) were also evaluated. Results: In CT, CBV was generally well reproduced (r > 0.9 in 12 algorithms), but not CBF and MTT (r > 0.9 in seven and four algorithms, respectively). In MR, good correlation (r > 0.9) was observed in one-half of commercial programs, while all academic algorithms showed good correlations for all parameters. Most algorithms had delay-dependent errors, especially for commercial software, as well as CBV dependency for CBF or MTT calculation and MTT dependency for CBV calculation. Correlation was good in Tmax except for one algorithm. Conclusion: The digital phantom readily evaluated the accuracy and characteristics of the CT and MR perfusion analysis software. All commercial programs had delay-induced errors and/or insufficient correlations with true values, while academic programs for MR showed good correlations with true values. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112618/-/DC1 PMID:23220899
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel
2011-01-01
Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest, and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use it to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed for magnetometers to compensate for the magnetic interferential field caused by ferromagnetic material of the platform and to improve measurement performance. In CSSRM, the objective function for parameter estimation is to minimize the difference between the measured and reference values of the magnetic field (components and magnitude). Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interferential parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components compensated with CSSRM coincide with the true values very well. An experiment was carried out for a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from more than a thousand nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interferential field compensation.
Srivastava, A; Koul, V; Dwivedi, S N; Upadhyaya, A D; Ahuja, A; Saxena, R
2015-08-01
The aim of this study was to evaluate the performance of the newly developed handheld hemoglobinometer (TrueHb) by comparing its performance against an automated five-part hematology analyzer, the Sysmex XT 1800i counter (Sysmex). Two hundred venous blood samples were each evaluated for total hemoglobin three times on each device. The average of the three readings on each device was considered as the respective device value, that is, the TrueHb value and the Sysmex value. The two sets of values were comparatively analyzed, and the repeatability of the performance of TrueHb was also evaluated against the Sysmex values. The scatter plot of TrueHb values against Sysmex values showed a linear distribution with a positive correlation (r = 0.99). The intraclass correlation (ICC) between the two sets of values was found to be 0.995. The regression coefficient through the origin, β, was found to be 0.995, with a 95% confidence interval (CI) ranging between 0.9900 and 1.0000. The mean difference in the Bland-Altman plot of TrueHb values against Sysmex values was found to be -0.02, with limits of agreement between -0.777 and 0.732 g/dL. Statistical analysis suggested good repeatability of the TrueHb results, with a low mean CV of 2.22 (95% CI 1.99-2.44), against 4.44 (95% CI 3.85-5.03) for the Sysmex values. These results suggested a strong positive correlation between the two measurement devices. It is thus concluded that TrueHb is a good point-of-care testing tool for estimating hemoglobin. © 2014 John Wiley & Sons Ltd.
Estimation of saturated pixel values in digital color imaging
Zhang, Xuemei; Brainard, David H.
2007-01-01
Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
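A simplified sketch of the idea, assuming a known multivariate Normal prior: condition on the unsaturated channels to get a normal distribution for the saturated channel, then take the mean of that normal truncated above the saturation level. The prior mean and covariance below are invented for illustration, and the sketch handles a single saturated channel only.

```python
import numpy as np
from scipy.stats import norm

def estimate_saturated_channel(x_obs, sat_level, mu, Sigma, sat_idx):
    """Estimate one saturated channel given the unsaturated channels and an MVN prior."""
    obs_idx = [i for i in range(len(mu)) if i != sat_idx]
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_so = Sigma[sat_idx, obs_idx]
    # Conditional normal for the saturated channel given the observed channels.
    m = mu[sat_idx] + S_so @ np.linalg.solve(S_oo, x_obs - mu[obs_idx])
    v = Sigma[sat_idx, sat_idx] - S_so @ np.linalg.solve(S_oo, S_so)
    # Mean of that normal truncated to values above the saturation level.
    a = (sat_level - m) / np.sqrt(v)
    return m + np.sqrt(v) * norm.pdf(a) / norm.sf(a)

# Illustrative RGB prior with strong inter-channel correlation; red channel clipped at 255.
mu = np.array([120.0, 110.0, 100.0])
Sigma = np.array([[900.0, 800.0, 700.0],
                  [800.0, 900.0, 750.0],
                  [700.0, 750.0, 900.0]])
print(estimate_saturated_channel(np.array([230.0, 210.0]), 255.0, mu, Sigma, sat_idx=0))
```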
Scene-based nonuniformity correction with video sequences and registration.
Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B
2000-03-10
We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
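Under the simplification that registration has already paired each detector's readings with frame-averaged estimates of the true scene values, the per-detector gain/offset fit and correction can be sketched as below; the linear response model and all numbers are assumptions for illustration.

```python
import numpy as np

def fit_detector_response(scene_estimates, detector_readings):
    """Least-squares gain/offset fit: reading ~ gain * true_scene + offset."""
    gain, offset = np.polyfit(scene_estimates, detector_readings, 1)
    return gain, offset

def correct(reading, gain, offset):
    """Invert the fitted response to recover an estimate of the true scene value."""
    return (reading - offset) / gain

# One detector observing several registered "true" scene values (illustrative numbers).
rng = np.random.default_rng(3)
true_scene = rng.uniform(50, 200, size=40)
readings = 1.15 * true_scene + 12.0 + rng.normal(0, 1.0, size=40)  # fixed-pattern gain/offset
g, o = fit_detector_response(true_scene, readings)
print(round(g, 3), round(o, 2), round(correct(readings[0], g, o) - true_scene[0], 2))
```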
NASA Astrophysics Data System (ADS)
Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.
2011-12-01
As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a significant challenge, and radiation transfer modeling and inversion methodology are the theoretical bases for doing so. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent research advances in multi-scale remote sensing radiation transfer modeling are presented, including the leaf spectrum model, vegetation canopy BRDF models, directional thermal infrared emission models, rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking the land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in the visible, near infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, albedo, LAI, soil moisture, ET, etc. at the 1-km² scale for remote sensing product validation.
NASA Astrophysics Data System (ADS)
Liu, Q.
2011-09-01
First, research advances in radiation transfer modeling of multi-scale remote sensing data are presented: after a general overview of remote sensing radiation transfer modeling, several recent research advances are presented, including the leaf spectrum model (dPROSPECT), vegetation canopy BRDF models, directional thermal infrared emission models (TRGM, SLEC), rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed. The land surface albedo, leaf area index, temperature/emissivity, and surface net radiation are taken as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is designed and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, will be introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in the visible, near infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, albedo, LAI, soil moisture, ET, etc. at the 1-km² scale for remote sensing product validation.
NASA Astrophysics Data System (ADS)
Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav
2004-08-01
Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only, in terms of equivalent uniform dose. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, but there were distinct differences in the mean values obtained.
Critical laboratory values in hemostasis: toward consensus.
Lippi, Giuseppe; Adcock, Dorothy; Simundic, Ana-Maria; Tripodi, Armando; Favaloro, Emmanuel J
2017-09-01
The term "critical values" can be defined to entail laboratory test results that lie significantly outside the normal (reference) range and necessitate immediate reporting to safeguard patient health, as well as those displaying a highly and clinically significant variation compared to previous data. The identification and effective communication of "highly pathological" values has engaged the minds of many clinicians, health care and laboratory professionals for decades, since these activities are vital to good laboratory practice. This is especially true in hemostasis, where timely and efficient communication of critical values strongly impacts patient management. Due to the heterogeneity of available data, this paper hence aims to analyze the state of the art and provide an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis, thus providing a basic document for future consultation that assists laboratory professionals and clinicians alike. KEY MESSAGES Critical values are laboratory test results significantly lying outside the normal (reference) range and necessitating immediate reporting to safeguard patient health. A broad heterogeneity exists about critical values in hemostasis worldwide. We provide here an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis.
Bright, Leah; Secko, Michael; Mehta, Ninfa; Paladino, Lorenzo; Sinert, Richard
2014-01-01
Background: Ultrasound is a readily available, non-invasive technique to visualize airway dimensions at the patient's bedside and possibly predict difficult airways before invasively looking; however, it has rarely been used for emergency investigation of the larynx. There is limited literature on the sonographic measurements of true vocal cords in adults, and normal parameters must be established before abnormal parameters can be accurately identified. Objectives: The primary objective of the following study is to identify the normal sonographic values of human true vocal cords in an adult population. A secondary objective is to determine if there is a difference in true vocal cord measurements in people with different body mass indices (BMIs). The third objective was to determine if there was a statistical difference in the measurements between the two genders. Materials and Methods: True vocal cord measurements were obtained in healthy volunteers by ultrasound fellowship trained emergency medicine physicians using a high frequency linear transducer orientated transversely across the anterior surface of the neck at the level of the thyroid cartilage. The width of the true vocal cord was measured perpendicularly to the length of the cord at its mid-portion. This method was duplicated from a previous study to create a standard of measurement acquisition. Results: A total of 38 subjects were enrolled. The study demonstrated no correlation between vocal cord measurements and the subjects' height, weight, or BMI. When accounting for vocal cord measurements by gender, males had larger BMIs and larger vocal cord measurements than female subjects, with a statistically significant difference in right vocal cord measurements between female and male subjects. Conclusion: No correlation was seen between vocal cord measurements and subjects' BMIs. In the study group of normal volunteers, there was a difference between male and female vocal cord size. PMID:24812456
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Kurt Vedros; Robert Youngblood
This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and "false" would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10⁻⁶/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10⁻⁶/yr. If the parameters were at their baseline values, and ΔCDF > 10⁻⁶/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10⁻⁶/yr but that time history's ΔCDF < 10⁻⁶/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values, and three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
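A minimal sketch of the standardization step described above, assuming hypothetical reconstructed-versus-true SUR pairs in place of the Monte Carlo database: a single linear regression over the whole dataset maps measured SUR to standardized SUR, and the fraction of cases within ±0.5 of the true value is reported.

```python
import numpy as np

# Hypothetical reconstructed vs. true SUR values for a set of simulated studies;
# in the paper these come from the Monte Carlo projection/reconstruction pipeline.
rng = np.random.default_rng(1)
sur_true = rng.uniform(0.5, 6.0, size=200)
sur_measured = 0.7 * sur_true + 0.3 + rng.normal(0.0, 0.2, size=200)  # toy bias + noise

# Standardize with one linear regression fitted on the whole dataset, as described
# in the abstract, then invert it to map measured -> standardized SUR.
slope, intercept = np.polyfit(sur_true, sur_measured, deg=1)
sur_standardized = (sur_measured - intercept) / slope

errors = sur_standardized - sur_true
within_half = np.mean(np.abs(errors) <= 0.5)
print(f"fraction of cases with |standardized - true| <= 0.5: {within_half:.2%}")
```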
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
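The core quantity in a design of this kind is the posterior probability that the true response rate exceeds the target; a minimal sketch under a conjugate Beta prior is given below. The threshold and the stage-one numbers are illustrative, not taken from the paper.

```python
from scipy.stats import beta

def prob_exceeds_target(x, n, p_target, a=1.0, b=1.0):
    """Posterior P(true response rate > p_target) under a Beta(a, b) prior
    after observing x responses in n patients (conjugate Beta-Binomial update)."""
    return beta.sf(p_target, a + x, b + n - x)

# Toy first-stage check (numbers are illustrative):
# continue to stage 2 only if the posterior probability clears a threshold lam.
x1, n1, p_target, lam = 9, 20, 0.30, 0.90
go_to_stage_two = prob_exceeds_target(x1, n1, p_target) >= lam
print(prob_exceeds_target(x1, n1, p_target), go_to_stage_two)
```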
Estimating the proportion of true null hypotheses when the statistics are discrete.
Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S
2015-07-15
In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support and the null distribution may depend on an ancillary statistic such as a table margin that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The estimators are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Nouwen, Rick; van Rooij, Robert; Sauerland, Uli; Schmitz, Hans-Christian
One could define vagueness as the existence of borderline cases and characterise the philosophical debate on vagueness as being about the nature of these. The prevalent theories of vagueness can be divided into three categories, paralleling three logical interpretations of borderline cases: (i) a borderline case is a case of a truth-value gap; it is neither true nor false; (ii) a borderline case is a case of a truth-value glut; it is both true and false; and (iii) a borderline case is a case where the truth-value is non-classical. The third of these is proposed in the fuzzy logic approach to vagueness. Three-valued approaches have only 1/2 as a value in addition to the standard values 1 and 0. These approaches can be interpreted either as allowing for gaps or gluts, depending on how the notion of satisfaction or truth is defined. If a sentence is taken to be true only if its value is 1, it allows for gaps, but if it is taken to be true already if its value is at least 1/2 it allows for gluts. The most popular theories advertising gluts and gaps, however, are supervaluationism and subvaluationism, both of which make use of the notion of precisifications, that is, ways of making things precise. Truth-value gaps in supervaluationism are due to the way truth simpliciter, or supertruth, is defined: A proposition is supertrue (superfalse) if it is true (false) at all precisifications. This means that a proposition can be neither true nor false in case there exist two precisifications, one of which makes it true and one of which makes it false. Conversely, in subvaluation theory, the same scenario would lead to a truth-value glut. That is, the proposition would be both true and false. This is because subvaluationism defines truth simpliciter as being true at some precisification.
Aznar-Oroval, Eduardo; Sánchez-Yepes, Marina; Lorente-Alegre, Pablo; San Juan-Gadea, Mari Carmen; Ortiz-Muñoz, Blanca; Pérez-Ballestero, Pilar; Picón-Roig, Isabel; Maíquez-Richart, Joaquín
2010-05-01
Bacteremia is one of the most important causes of morbidity and mortality in cancer patients. The aim of this study was to evaluate the diagnostic usefulness of procalcitonin (PCT), interleukin 8 (IL-8), interleukin 6 (IL-6), and C-reactive protein (CRP) in the detection of bacteremia in cancer patients. PCT, IL-8, IL-6, and CRP levels were measured in 2 groups of cancer patients who had fever: one group with true bacteremia and another without bacteremia. Seventy-nine febrile episodes were analyzed in 79 patients, 43 men and 36 women. Forty-four patients were in the true bacteremia group. Significant differences in PCT (P<0.001), IL-8 (P<0.001), and IL-6 (P=0.002) values were found between patients with and without true bacteremia. CRP results were not significantly different between the groups (P=0.23). The cut-off point for PCT was 0.5 ng/mL and this parameter yielded the best specificity at 91.4%, with a sensitivity of 59.1%. Among the infection markers studied, PCT provided the most information for diagnosing bacteremia in cancer patients. (c) 2009 Elsevier España, S.L. All rights reserved.
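A small sketch of the cut-off evaluation, assuming hypothetical PCT values and culture results; it simply computes the sensitivity and specificity of the rule "PCT >= 0.5 ng/mL".

```python
import numpy as np

def sens_spec(marker_values, has_bacteremia, cutoff):
    """Sensitivity/specificity of a binary rule 'marker >= cutoff' against culture results."""
    marker_values = np.asarray(marker_values, dtype=float)
    has_bacteremia = np.asarray(has_bacteremia, dtype=bool)
    positive = marker_values >= cutoff
    sensitivity = np.mean(positive[has_bacteremia])
    specificity = np.mean(~positive[~has_bacteremia])
    return sensitivity, specificity

# Illustrative values only; the study reports 59.1% sensitivity and 91.4% specificity
# for PCT at the 0.5 ng/mL cut-off.
pct = [0.1, 0.7, 2.3, 0.3, 0.6, 0.2, 1.5, 0.4]
bacteremia = [0, 1, 1, 0, 1, 0, 1, 0]
print(sens_spec(pct, bacteremia, cutoff=0.5))
```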
Galias, Zbigniew
2017-05-01
An efficient method to find positions of periodic windows for the quadratic map f(x)=ax(1-x) and a heuristic algorithm to locate the majority of wide periodic windows are proposed. Accurate rigorous bounds of positions of all periodic windows with periods below 37 and the majority of wide periodic windows with longer periods are found. Based on these results, we prove that the measure of the set of regular parameters in the interval [3,4] is above 0.613960137. The properties of periodic windows are studied numerically. The results of the analysis are used to estimate that the true value of the measure of the set of regular parameters is close to 0.6139603.
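The paper's bounds are rigorous results obtained with interval methods; as a loose numerical counterpart, the sketch below estimates the fraction of regular parameters in [3, 4] by flagging parameter values whose crudely estimated Lyapunov exponent is negative (periodic windows plus the pre-chaotic range).

```python
import numpy as np

def lyapunov(a, n_transient=1000, n_iter=4000, x0=0.5):
    """Crude Lyapunov exponent estimate for f(x) = a*x*(1-x)."""
    x = x0
    for _ in range(n_transient):
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = a * x * (1.0 - x)
        total += np.log(abs(a * (1.0 - 2.0 * x)) + 1e-300)
    return total / n_iter

# Fraction of parameters in [3, 4] with a negative Lyapunov exponent; the paper's
# rigorous lower bound for the measure of regular parameters is about 0.614.
a_grid = np.linspace(3.0, 4.0, 2000)
regular = np.mean([lyapunov(a) < 0.0 for a in a_grid])
print(f"estimated fraction of regular parameters: {regular:.3f}")
```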
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
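As a simplified stand-in for the deconvolution/simplex procedure, the sketch below fits an amplitude coefficient and depth to a noisy synthetic profile by nonlinear least squares, using one common simple-body parameterization (a buried sphere); the functional form and all numbers are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def sphere_anomaly(x, amplitude, depth):
    # Residual gravity anomaly along a profile over a buried sphere
    # (a common simple-body parameterization, used here only for illustration).
    return amplitude * depth / (x**2 + depth**2) ** 1.5

# Synthetic profile corrupted by white Gaussian noise, mimicking the paper's test.
x = np.linspace(-50.0, 50.0, 101)          # profile coordinate
true_amplitude, true_depth = 500.0, 12.0
rng = np.random.default_rng(2)
g_obs = sphere_anomaly(x, true_amplitude, true_depth) + rng.normal(0.0, 0.02, x.size)

(amp_hat, depth_hat), _ = curve_fit(sphere_anomaly, x, g_obs, p0=[100.0, 5.0])
print(f"estimated amplitude {amp_hat:.1f}, depth {depth_hat:.2f} (true {true_depth})")
```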
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardiansyah, Deni
2016-09-15
Purpose: The aim of this study was to investigate the accuracy of PET-based treatment planning for predicting the time-integrated activity coefficients (TIACs). Methods: The parameters of a physiologically based pharmacokinetic (PBPK) model were fitted to the biokinetic data of 15 patients to derive assumed true parameters and were used to construct true mathematical patient phantoms (MPPs). Biokinetics of 150 MBq 68Ga-DOTATATE-PET was simulated with different noise levels [fractional standard deviation (FSD) 10%, 1%, 0.1%, and 0.01%], and seven combinations of measurements at 30 min, 1 h, and 4 h p.i. PBPK model parameters were fitted to the simulated noisy PET data using population-based Bayesian parameters to construct predicted MPPs. Therapy simulations were performed as 30 min infusion of 90Y-DOTATATE of 3.3 GBq in both true and predicted MPPs. Prediction accuracy was then calculated as relative variability v_organ between TIACs from both MPPs. Results: Large variability values of one time-point protocols [e.g., FSD = 1%, 240 min p.i., v_kidneys = (9 ± 6)%, and v_tumor = (27 ± 26)%] show inaccurate prediction. Accurate TIAC prediction of the kidneys was obtained for the case of two measurements (1 and 4 h p.i.), e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 10)%, or three measurements, e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 9)%. Conclusions: 68Ga-DOTATATE-PET measurements could possibly be used to predict the TIACs of 90Y-DOTATATE when using a PBPK model and population-based Bayesian parameters. The two time-point measurement at 1 and 4 h p.i. with a noise up to FSD = 1% allows an accurate prediction of the TIACs in kidneys.
User's Manual for Aerofcn: a FORTRAN Program to Compute Aerodynamic Parameters
NASA Technical Reports Server (NTRS)
Conley, Joseph L.
1992-01-01
The computer program AeroFcn is discussed. AeroFcn is a utility program that computes the following aerodynamic parameters: geopotential altitude, Mach number, true velocity, dynamic pressure, calibrated airspeed, equivalent airspeed, impact pressure, total pressure, total temperature, Reynolds number, speed of sound, static density, static pressure, static temperature, coefficient of dynamic viscosity, kinematic viscosity, geometric altitude, and specific energy for a standard- or a modified standard-day atmosphere using compressible flow and normal shock relations. Any two parameters that define a unique flight condition are selected, and their values are entered interactively. The remaining parameters are computed, and the solutions are stored in an output file. Multiple cases can be run, and the multiple case solutions can be stored in another output file for plotting. Parameter units, the output format, and primary constants in the atmospheric and aerodynamic equations can also be changed.
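A small sketch in the spirit of AeroFcn, computing a subset of the listed parameters for a standard-day troposphere from one choice of defining pair (geopotential altitude and true velocity); it is not the program's FORTRAN implementation and covers only the sub-11 km ISA layer.

```python
import math

def aero_parameters(altitude_m, true_velocity_ms):
    """Standard-day (ISA troposphere, below 11 km) values for a few of the
    parameters AeroFcn reports, given geopotential altitude and true velocity."""
    T0, p0, L, g, R, gamma = 288.15, 101325.0, 0.0065, 9.80665, 287.053, 1.4
    T = T0 - L * altitude_m                       # static temperature, K
    p = p0 * (T / T0) ** (g / (L * R))            # static pressure, Pa
    rho = p / (R * T)                             # static density, kg/m^3
    a = math.sqrt(gamma * R * T)                  # speed of sound, m/s
    mach = true_velocity_ms / a                   # Mach number
    q = 0.5 * rho * true_velocity_ms**2           # dynamic pressure, Pa
    return {"T": T, "p": p, "rho": rho, "a": a, "mach": mach, "q": q}

print(aero_parameters(5000.0, 250.0))
```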
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) method and of expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
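The comparison metrics are straightforward to compute once the SSE estimates are available; a minimal sketch follows, using one common definition of relative estimation error and relative RMSE (the paper's exact normalization may differ).

```python
import numpy as np

def relative_estimation_error(estimates, true_value):
    """REE (%) for each simulated data set's estimate of one parameter."""
    estimates = np.asarray(estimates, dtype=float)
    return 100.0 * (estimates - true_value) / true_value

def relative_rmse(estimates, true_value):
    """rRMSE (%) across simulated data sets (one common definition;
    the paper may normalize slightly differently)."""
    ree = relative_estimation_error(estimates, true_value) / 100.0
    return 100.0 * np.sqrt(np.mean(ree**2))

cl_estimates = [2.8, 3.4, 3.1, 2.6, 3.9]   # toy clearance estimates from 5 SSE runs
print(relative_rmse(cl_estimates, true_value=3.0))
```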
Latif, Vishnu Ben; Keshavaraj; Rai, Rohan; Hegde, Gautham; Shajahan, Shabna
2015-01-01
Background: The aim of this study was to verify the intra-individual reproducibility of natural head position (NHP) in centric relation (CR) position, to prove the inter-individual differences in the Frankfort horizontal plane and sella-nasion line compared with the true horizontal line, and to establish linear norms from A-point, B-point, Pog as well as soft tissue A-point, soft tissue B-point, and soft tissue Pog to nasion true vertical line (NTVL) in adult Indian subjects. Methods: Lateral cephalograms (T1) of Angle's Class I subjects were taken in NHP and with the bite in CR. A second lateral cephalogram (T2) of these subjects with ANB angle in the range 1-4° was taken after 1 week using the same wax bite, and both radiographs were analyzed based on six angular parameters using cephalometric software (Do-it, Dental studio NX version 4.1) to assess the reproducibility of NHP. Linear values of six landmarks were taken in relation to NTVL, and the mean values were calculated. A total of 116 subjects were included in this study. Results: When the cephalometric values of T1 and T2 were analyzed, it was found that the parameters showed P < 0.001, indicating the reproducibility of NHP in CR. Mean values for point A, point B, Pog and their soft tissue counterparts were also obtained. Conclusion: The study proved that NHP is reproducible and accurate when recorded with the mandible in CR. Linear norms for skeletal Class I subjects in relation to NTVL were established. PMID:26124598
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
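A toy sketch of the regression idea described above, assuming a single uncertain parameter, two decision options, and a cubic polynomial as a stand-in for the flexible nonparametric smoother: net benefits from the PSA sample are regressed on a summary statistic of simulated study data, and EVSI is the mean of the per-dataset maxima of the fitted values minus the maximum of the mean net benefits.

```python
import numpy as np

rng = np.random.default_rng(3)
S = 5000                                   # PSA sample size

# Toy PSA sample: one uncertain parameter theta and net benefit for two options.
theta = rng.normal(0.0, 1.0, S)
nb = np.column_stack([np.zeros(S), 2000.0 * theta])   # NB of "do nothing" vs "treat"

# For each theta draw, simulate the proposed study (here n=50 noisy observations
# of theta) and reduce it to a summary statistic (its mean).
summary = theta + rng.normal(0.0, 1.0 / np.sqrt(50), S)

# Regress each option's net benefit on the summary statistic; a cubic polynomial
# stands in for the flexible nonparametric smoother used in the paper.
fitted = np.column_stack([
    np.polyval(np.polyfit(summary, nb[:, d], deg=3), summary) for d in range(nb.shape[1])
])

evsi = np.mean(np.max(fitted, axis=1)) - np.max(np.mean(nb, axis=0))
print(f"estimated EVSI per patient: {evsi:.1f}")
```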
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altundal, Y; Pokhrel, D; Jiang, H
Purpose: To compare image quality parameters and assess the image stability of three different linear accelerators (linacs) for 2D and 3D imaging modalities: planar kV, MV images and cone-beam CT (CBCT). Methods: QCkV1, QC-3 and Catphan-600 phantoms were utilized to acquire kV, MV and CBCT images, respectively, on a monthly basis per the TG-142 QA protocol for over 2 years on 21Ex, Novalis-Tx and TrueBeam linacs. DICOM images were analyzed with the help of QA analysis software: PIPsPro from Standard Imaging. For planar kV and MV images, planar spatial resolution, contrast to noise ratio (CNR) and noise, and for CBCT, HU values were collected and analyzed. Results: Two years of monthly QA measurements were analyzed for the planar and CBCT images. Values were normalized to the mean and the standard deviations (STD) are presented. For the kV planar radiographic images, the STD of spatial resolution for f30, f40, f50, CNR and noise for 21Ex are 0.006, 0.011, 0.013, 0.046, 0.026; for Novalis-Tx are 0.009, 0.016, 0.016, 0.067, 0.053; and for TrueBeam are 0.007, 0.005, 0.009, 0.017, 0.016, respectively. For the MV planar radiographic images, the STD of spatial resolution for f30, f40, f50, CNR and noise for 21Ex are 0.009, 0.010, 0.008, 0.023, 0.023; for Novalis-Tx are 0.012, 0.010, 0.008, 0.029, 0.023; and for TrueBeam are 0.010, 0.010, 0.007, 0.022, 0.022, respectively. For the CBCT images, HU constancies of Air, Polystyrene, Teflon, PMP, LDPE and Delrin for 21Ex are 0.014, 0.070, 0.031, 0.053, 0.076, 0.087; for Novalis-Tx are 0.019, 0.047, 0.035, 0.059, 0.077, 0.087; and for TrueBeam are 0.011, 0.044, 0.025, 0.044, 0.056, 0.020, respectively. Conclusion: These imaging QA results demonstrated that the TrueBeam performed better in terms of image quality stability for both kV planar and CBCT images as well as EPID MV images; however, the other two linacs also satisfied TG-142 guidelines.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
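A toy sketch of parameter estimation with an ensemble Kalman filter, assuming a single unknown soil parameter and a made-up forward model in place of the unsaturated flow simulator; it illustrates how repeated assimilation of noisy observations pulls a deliberately deviated initial ensemble toward the true value.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(log_ks, pressure_head=-100.0):
    # Hypothetical stand-in for the unsaturated flow model: it maps a soil
    # parameter (here log saturated conductivity) to a predicted observation
    # (a soil water content) via a made-up smooth relation.
    return 0.40 / (1.0 + np.exp(-(log_ks + 5.0))) + 0.05 * np.tanh(pressure_head / 500.0)

true_log_ks = -4.0
obs_error_sd = 0.005
ensemble = rng.normal(-6.0, 1.0, size=200)          # deviated initial guess

for step in range(20):
    obs = forward_model(true_log_ks) + rng.normal(0.0, obs_error_sd)
    predicted = forward_model(ensemble)
    cov_py = np.cov(ensemble, predicted)[0, 1]       # parameter-prediction covariance
    var_yy = np.var(predicted, ddof=1) + obs_error_sd**2
    gain = cov_py / var_yy                           # scalar Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_error_sd, ensemble.size)
    ensemble = ensemble + gain * (perturbed_obs - predicted)

print(f"posterior mean {ensemble.mean():.2f}, true {true_log_ks}")
```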
Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters
NASA Astrophysics Data System (ADS)
Shi, L.
2015-12-01
This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for "feature-based accuracy estimator"), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
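A schematic sketch of parameter advising with a feature-based accuracy estimator: a simple linear estimator over invented feature values and coefficients stands in for Facet's polynomial estimator and fitted coefficients, and the advisor simply returns the parameter choice whose computed alignment scores highest.

```python
import numpy as np

# Hypothetical feature values (rows: alignments computed under candidate parameter
# choices for one benchmark; columns: feature functions) and hypothetical learned
# coefficients. In Facet the coefficients are fit against true accuracy.
coefficients = np.array([0.35, 0.25, 0.40])

def estimate_accuracy(features):
    return features @ coefficients          # linear combination of feature values

def advise(candidate_features, parameter_choices):
    """Return the parameter choice whose alignment has the highest estimated accuracy."""
    scores = estimate_accuracy(candidate_features)
    return parameter_choices[int(np.argmax(scores))], scores

features = np.array([[0.61, 0.55, 0.70],    # alignment computed with parameter set A
                     [0.58, 0.63, 0.66],    # parameter set B
                     [0.64, 0.49, 0.71]])   # parameter set C
print(advise(features, ["A", "B", "C"]))
```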
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
Analysing the 21 cm signal from the epoch of reionization with artificial neural networks
NASA Astrophysics Data System (ADS)
Shimabukuro, Hayato; Semelin, Benoit
2017-07-01
The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the values of the three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on a new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
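A minimal sketch of the inversion idea, with a made-up smooth function standing in for 21cmFast and scikit-learn's MLPRegressor standing in for the paper's network; the training/test split of 70 and 54 samples mirrors the abstract, everything else is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

def fake_power_spectrum(params):
    # Toy stand-in for 21cmFast: a "power spectrum" at 3 redshifts x 10 k-bins,
    # generated from 3 astrophysical parameters by a smooth made-up function.
    k = np.linspace(0.1, 1.0, 10)
    return np.concatenate([params[0] * k**params[1] * np.exp(-z * params[2] * k)
                           for z in (1.0, 2.0, 3.0)])

train_params = rng.uniform([1.0, 0.5, 0.1], [5.0, 1.5, 1.0], size=(70, 3))
X_train = np.array([fake_power_spectrum(p) for p in train_params])
test_params = rng.uniform([1.0, 0.5, 0.1], [5.0, 1.5, 1.0], size=(54, 3))
X_test = np.array([fake_power_spectrum(p) for p in test_params])

scaler = StandardScaler().fit(X_train)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_train), train_params)

recovered = ann.predict(scaler.transform(X_test))
print("mean absolute error per parameter:", np.abs(recovered - test_params).mean(axis=0))
```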
Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco
2016-05-01
The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
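The two evaluation metrics used above are simple to compute from binary masks; a short sketch follows (the toy 2D masks stand in for the 3D PET ROIs).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def volume_discrepancy(mask_segmented, mask_true, voxel_volume_ml=1.0):
    """Relative difference between segmented and true volumes, in percent."""
    v_seg = mask_segmented.sum() * voxel_volume_ml
    v_true = mask_true.sum() * voxel_volume_ml
    return 100.0 * (v_seg - v_true) / v_true

# Tiny illustrative masks (a real comparison would use the 3D PET ROI masks).
truth = np.zeros((10, 10), dtype=bool); truth[2:7, 2:7] = True
auto = np.zeros((10, 10), dtype=bool); auto[3:8, 2:7] = True
print(dice_coefficient(auto, truth), volume_discrepancy(auto, truth))
```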
NASA Astrophysics Data System (ADS)
Reginald, Nelson; St. Cyr, Orville; Davila, Joseph; Rastaetter, Lutz; Török, Tibor
2018-05-01
Obtaining reliable measurements of plasma parameters in the Sun's corona remains an important challenge for solar physics. We previously presented a method for producing maps of electron temperature and speed of the solar corona using K-corona brightness measurements made through four color filters in visible light, which were tested for their accuracies using models of a structured, yet steady corona. In this article we test the same technique using a coronal model of the Bastille Day (14 July 2000) coronal mass ejection, which also contains quiet areas and streamers. We use the coronal electron density, temperature, and flow speed contained in the model to determine two K-coronal brightness ratios at (410.3, 390.0 nm) and (423.3, 398.7 nm) along more than 4000 lines of sight. Now assuming that for real observations, the only information we have for each line of sight are these two K-coronal brightness ratios, we use a spherically symmetric model of the corona that contains no structures to interpret these two ratios for electron temperature and speed. We then compare the interpreted (or measured) values for each line of sight with the true values from the model at the plane of the sky for that same line of sight to determine the magnitude of the errors. We show that the measured values closely match the true values in quiet areas. However, in locations of coronal structures, the measured values are predictably underestimated or overestimated compared to the true values, but can nevertheless be used to determine the positions of the structures with respect to the plane of the sky, in front or behind. Based on our results, we propose that future white-light coronagraphs be equipped to image the corona using four color filters in order to routinely create coronal maps of electron density, temperature, and flow speed.
NASA Astrophysics Data System (ADS)
Escamilla-Roa, J.; Latimer, D. C.; Ernst, D. J.
2010-01-01
A three-neutrino analysis of oscillation data is performed using the recent, more finely binned Super-K oscillation data, together with the CHOOZ, K2K, and MINOS data. The solar parameters Δ21 and θ12 are fixed from a recent analysis and Δ32, θ13, and θ23 are varied. We utilize the full three-neutrino oscillation probability and an exact treatment of Earth's Mikheyev-Smirnov-Wolfenstein (MSW) effect with a castle-wall density. By including terms linear in θ13 and ε := θ23 - π/4, we find asymmetric errors for these parameters: θ13 = -0.07 (+0.18, -0.11) and ε = 0.03 (+0.09, -0.15). For θ13, we see that the lower bound is primarily set by the CHOOZ experiment while the upper bound is determined by the low energy e-like events in the Super-K atmospheric data. We find that the parameters θ13 and ε are correlated: the preferred negative value of θ13 permits the preferred value of θ23 to be in the second octant, and the true value of θ13 affects the allowed region for θ23.
Redshift Space Distortion on the Small Scale Clustering of Structure
NASA Astrophysics Data System (ADS)
Park, Hyunbae; Sabiu, Cristiano; Li, Xiao-dong; Park, Changbom; Kim, Juhan
2018-01-01
The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. The shape of the two-point correlation of galaxies exhibits a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. In our previous works, we made use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. The current work continues that line of research as a strategy to constrain cosmological parameters using redshift-invariant physical quantities. We now aim to understand the redshift evolution of the full shape of the small-scale, anisotropic galaxy clustering and give a firmer theoretical footing to our previous works.
Zooming in on neutrino oscillations with DUNE
NASA Astrophysics Data System (ADS)
Srivastava, Rahul; Ternes, Christoph A.; Tórtola, Mariam; Valle, José W. F.
2018-05-01
We examine the capabilities of the DUNE experiment as a probe of the neutrino mixing paradigm. Taking the current status of neutrino oscillations and the design specifications of DUNE, we determine the experiment's potential to probe the structure of neutrino mixing and CP violation. We focus on the poorly determined parameters θ23 and δCP and consider both two and seven years of run. We take various benchmarks as our true values, such as the current preferred values of θ23 and δCP, as well as several theory-motivated choices. We determine quantitatively DUNE's potential to perform a precision measurement of θ23, as well as to test the CP violation hypothesis in a model-independent way. We find that, after running for seven years, DUNE will make a substantial step in the precise determination of these parameters, bringing to quantitative test the predictions of various theories of neutrino mixing.
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. For perfect information on the prior distribution, the Bayesian estimation methods perform better than maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show that the Bayesian analysis is robust within the range between the true value and the maximum likelihood estimate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, S; Fan, Q; Lei, Y
Purpose: In-Water-Output-Ratio (IWOR) plays a significant role in linac-based radiotherapy treatment planning, linking MUs to delivered radiation dose. For an open rectangular field, IWOR depends on both its width and length, and changes rapidly when one of them becomes small. In this study, a universal functional form is proposed to fit the open field IWOR tables in the Varian TrueBeam representative datasets for all photon energies. Methods: A novel Generalized Mean formula is first used to estimate the Equivalent Square (ES) for a rectangular field. The formula's weighting factor and power index are determined by collapsing all data points as much as possible onto a single curve in the IWOR vs. ES plot. The result is then fitted with a novel universal function IWOR = 1 + b*Log(ES/10cm)/(ES/10cm)^c via a least-squares procedure to determine the optimal values for parameters b and c. The maximum relative residual error in IWOR over the entire two-dimensional measurement table with field sizes between 3 cm and 40 cm is used to evaluate the quality of fit for the function. Results: The two-step fitting strategy works very well in determining the optimal parameter values for the open field IWOR of each photon energy in the Varian dataset. A relative residual error ≤0.71% is achieved for all photon energies (including Flattening-Filter-Free modes) with field sizes between 3 cm and 40 cm. The optimal parameter values change smoothly with regular photon beam quality. Conclusion: The universal functional form fits the Varian TrueBeam open field IWOR measurement tables accurately with small relative residual errors for all photon energies. Therefore, it can be an excellent choice to represent IWOR in absolute dose and MU calculations. The functional form can also be used as a QA/commissioning tool to verify measured data quality and consistency by checking the IWOR data behavior against the function for new photon energies with arbitrary beam quality.
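A sketch of the two-step fit on synthetic data: a generalized weighted mean gives the equivalent square, and the quoted universal form IWOR = 1 + b*Log(ES/10cm)/(ES/10cm)^c is fitted by least squares. The weighting factor, power index, and the b, c values used to generate the toy table are assumptions, not the paper's fitted numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_square(x, y, w=0.5, p=-1.0):
    """Generalized weighted mean of field width and length. With w=0.5 and p=-1
    this reduces to the classical 2XY/(X+Y) equivalent square; the paper instead
    optimizes w and p by collapsing the measured table onto a single curve."""
    return (w * x**p + (1.0 - w) * y**p) ** (1.0 / p)

def iwor_model(es, b, c):
    # Universal functional form quoted in the abstract, with ES in cm.
    s = es / 10.0
    return 1.0 + b * np.log(s) / s**c

# Toy "measured" open-field output ratios on a grid of field sizes (cm).
widths, lengths = np.meshgrid(np.arange(3, 41, 2.0), np.arange(3, 41, 2.0))
es = equivalent_square(widths, lengths).ravel()
rng = np.random.default_rng(6)
iwor_measured = iwor_model(es, b=0.12, c=0.35) + rng.normal(0.0, 0.001, es.size)

(b_hat, c_hat), _ = curve_fit(iwor_model, es, iwor_measured, p0=[0.1, 0.5])
residual = np.max(np.abs((iwor_model(es, b_hat, c_hat) - iwor_measured) / iwor_measured))
print(f"b={b_hat:.3f}, c={c_hat:.3f}, max relative residual {100*residual:.2f}%")
```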
NASA Astrophysics Data System (ADS)
Panteleev, A. A.; Bobinkin, V. V.; Larionov, S. Yu.; Ryabchikov, B. E.; Smirnov, V. B.; Shapovalov, D. A.
2017-10-01
When designing large-scale water-treatment plants based on reverse-osmosis systems, it is proposed to conduct experimental-industrial or pilot tests for validated simulation of the operation of the equipment. It is shown that such tests allow establishing efficient operating conditions and characteristics of the plant under design. It is proposed to conduct pilot tests of the reverse-osmosis systems on pilot membrane plants (PMPs) and test membrane plants (TMPs). The results of a comparative experimental study of pilot and test membrane plants are exemplified by simulating the operating parameters of the membrane elements of an industrial plant. It is concluded that the reliability of the data obtained on the TMP may not be sufficient to design industrial water-treatment plants, while the PMPs are capable of providing reliable data that can be used for full-scale simulation of the operation of industrial reverse-osmosis systems. The test membrane plants allow simulation of the operating conditions of individual industrial plant systems; therefore, potential areas of their application are shown. A method for numerical calculation and experimental determination of the true selectivity and the salt passage is proposed. An expression has been derived that describes the functional dependence between the observed and true salt passage. The results of the experiments conducted on a test membrane plant to determine the true value of the salt passage of a reverse-osmosis membrane are exemplified by magnesium sulfate solution at different initial operating parameters. It is shown that the initial content of a particular solution component has a significant effect on the change in the true salt passage of the membrane.
Validation of Leaf Area Index measurements based on the Wireless Sensor Network platform
NASA Astrophysics Data System (ADS)
Song, Q.; Li, X.; Liu, Q.
2017-12-01
The leaf area index (LAI) is one of the important parameters for estimating plant canopy function, and it is significant for agricultural analyses such as crop yield estimation and disease evaluation. Quick and accurate acquisition of crop LAI is therefore particularly important. In this study, the LAI of corn crops was measured by three methods: the leaf length and width method (LAILLW), the indirect instrument measurement method (LAII), and the leaf area index sensor method (LAIS). LAI values obtained from LAILLW can be regarded as approximate true values. LAI-2200, a widely used LAI canopy analyzer, was used in LAII. LAIS, based on a wireless sensor network, can acquire crop images automatically, simplifying data collection, while the other two methods require field measurements by a person. Through comparison of LAIS with the other two methods, the validity and reliability of the LAIS observation system were verified. LAI trends were similar across the three methods, and the rate of change of LAI increased with time during the first two months of corn growth, during which LAIS required less manpower, energy, and time. LAI derived from LAIS was more accurate than LAII in the early growth stage, owing to the small leaf blades, especially under strong light. In addition, LAI processed from a false-color image with near-infrared information was much closer to the true value than that from a true-color picture once the corn had grown for about one and a half months.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
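A small simulator in the spirit of the joint model: the response side is the standard DINA rule, and the response-time side uses a lognormal model with an item time intensity and a person speed (one common choice; the paper's extended DINA may parameterize the joint part differently). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_dina_with_rt(alpha, q_row, guess, slip, time_intensity, speed, rt_sd=0.3):
    """Simulate one item response and response time for one examinee.
    Response: DINA, P(correct) = (1-slip)^eta * guess^(1-eta), where eta indicates
    mastery of all attributes required by the item. RT: log RT = time intensity
    minus person speed plus Gaussian noise."""
    eta = int(np.all(alpha[q_row == 1] == 1))        # mastery of all required attributes
    p_correct = (1.0 - slip) ** eta * guess ** (1 - eta)
    response = rng.random() < p_correct
    log_rt = time_intensity - speed + rng.normal(0.0, rt_sd)
    return int(response), float(np.exp(log_rt))

alpha = np.array([1, 0, 1])                          # attribute profile of the examinee
q_row = np.array([1, 0, 1])                          # attributes required by the item
print(simulate_dina_with_rt(alpha, q_row, guess=0.2, slip=0.1,
                            time_intensity=4.0, speed=0.5))
```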
Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Alsafi, Huseen; Peninngton, Gray
Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa ) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients were developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is called to create approximate spatial zonations of T, Sske , and Sskv . UCODE-2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
Use of uterine electromyography to diagnose term and preterm labor
LUCOVNIK, MIHA; KUON, RUBEN J.; CHAMBLISS, LINDA R.; MANER, WILLIAM L.; SHI, SHAO-QING; SHI, LEILI; BALDUCCI, JAMES; GARFIELD, ROBERT E.
2011-01-01
Current methodologies to assess the process of labor, such as tocodynamometry or intrauterine pressure catheters, fetal fibronectin, cervical length measurement and digital cervical examination, have several major drawbacks. They only measure the onset of labor indirectly and do not detect cellular changes characteristic of true labor. Consequently, their predictive values for term or preterm delivery are poor. Uterine contractions are a result of the electrical activity within the myometrium. Measurement of uterine electromyography (EMG) has been shown to detect contractions as accurately as the currently used methods. In addition, changes in cell excitability and coupling required for effective contractions that lead to delivery are reflected in changes of several EMG parameters. Use of uterine EMG can help to identify patients in true labor better than any other method presently employed in the clinic. PMID:21241260
Boundary effects and the onset of Taylor vortices
NASA Astrophysics Data System (ADS)
Rucklidge, A. M.; Champneys, A. R.
2004-05-01
It is well established that the onset of spatially periodic vortex states in the Taylor-Couette flow between rotating cylinders occurs at the value of Reynolds number predicted by local bifurcation theory. However, the symmetry breaking induced by the top and bottom plates means that the true situation should be a disconnected pitchfork. Indeed, experiments have shown that the fold on the disconnected branch can occur at more than double the Reynolds number of onset. This leads to an apparent contradiction: why should Taylor vortices set in so sharply at the Reynolds number predicted by the symmetric theory, given such large symmetry-breaking effects caused by the boundary conditions? This paper offers a generic explanation. The details are worked out using a Swift-Hohenberg pattern formation model that shares the same qualitative features as the Taylor-Couette flow. Onset occurs via a wall mode whose exponential tail penetrates further into the bulk of the domain as the driving parameter increases. In a large domain of length L, we show that the wall mode creates significant amplitude in the centre at parameter values that are O(L^-2) away from the value of onset in the problem with ideal boundary conditions. We explain this as being due to a Hamiltonian Hopf bifurcation in space, which occurs at the same parameter value as the pitchfork bifurcation of the temporal dynamics. The disconnected anomalous branch remains O(1) away from the onset parameter since it does not arise as a bifurcation from the wall mode.
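The Swift-Hohenberg model referred to above is commonly written in the form below; the specific variant (coefficients, nonlinearity, and boundary terms) used by the authors is not given in the abstract and is assumed here only for orientation:

$$\partial_t u = r\,u - \left(1 + \partial_x^2\right)^2 u - u^3,$$

where u(x, t) is the pattern amplitude and r the driving parameter. With ideal (periodic or infinite-domain) boundary conditions the periodic pattern bifurcates at r = 0, and the result summarized above is that realistic boundaries shift the effective onset only by O(L^-2) in a domain of length L.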
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiCostanzo, D; Ayan, A; Woollard, J
Purpose: To automate the daily verification of each patient’s treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives, including jaw and gantry positioning errors, that are displayed in the Treatment History tab of Varian’s Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks, but may be significant in reducing delivery errors, and would be critical if detected daily. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script to run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to ensure veracity. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
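A minimal sketch of the scripted tolerance check described above, assuming the per-file maximum axis deviations have already been written to a CSV; the column names and file layout are hypothetical, while the tolerance values are those quoted in the abstract.

```python
import csv

# Tolerances quoted in the abstract (one entry per monitored axis).
TOLERANCES = {
    "collimator_deg": 0.025,
    "gantry_deg": 1.0,
    "jaw_y_cm": 0.002,
    "jaw_x_cm": 0.01,
    "mu": 0.5,
}

def flag_trajectory_logs(csv_path):
    """Return rows whose maximum deviation on any axis exceeds its tolerance."""
    flagged = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            exceeded = {axis: float(row[axis])
                        for axis in TOLERANCES
                        if abs(float(row[axis])) > TOLERANCES[axis]}
            if exceeded:
                flagged.append((row.get("log_file", "<unknown>"), exceeded))
    return flagged

if __name__ == "__main__":
    for log_file, errors in flag_trajectory_logs("max_axis_errors.csv"):
        print(f"Manual review needed for {log_file}: {errors}")
```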
NASA Astrophysics Data System (ADS)
Khan, Masood; Sardar, Humara
2018-03-01
This paper investigates the steady two-dimensional flow over a moving/static wedge in a Carreau viscosity model with infinite shear rate viscosity. Additionally, heat transfer analysis is performed. Using suitable transformations, the nonlinear partial differential equations are transformed into ordinary differential equations and solved numerically using the Runge-Kutta Fehlberg method coupled with the shooting technique. The effects of various physical parameters on the velocity and temperature distributions are displayed graphically and discussed qualitatively. A comparison with earlier reported results shows excellent agreement. It is important to note that increasing values of the wedge angle parameter enhance the fluid velocity, while the opposite trend is observed for the temperature field for both shear thinning and thickening fluids. Generally, our results reveal that the velocity and temperature distributions are marginally influenced by the viscosity ratio parameter. Further, it is noted that augmented values of the viscosity ratio parameter thin the momentum and thermal boundary layers in shear thickening fluid, and the reverse is true for shear thinning fluid. Moreover, the velocity in the case of a moving wedge is higher than that of a static wedge.
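The numerical strategy described above (an adaptive Runge-Kutta integrator coupled with shooting on the unknown wall condition) can be sketched as follows for the classical Newtonian Falkner-Skan wedge-flow equation; this is a simplified stand-in, not the paper's Carreau-fluid system, and scipy's RK45 (Dormand-Prince) pair is used here in place of the Runge-Kutta-Fehlberg pair.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Classical Falkner-Skan wedge flow (Newtonian illustration only):
#   f''' + f f'' + beta (1 - f'^2) = 0,   f(0) = 0, f'(0) = 0, f'(inf) = 1.
BETA = 0.5       # illustrative wedge-angle parameter
ETA_INF = 6.0    # numerical stand-in for infinity

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -f * fpp - BETA * (1.0 - fp ** 2)]

def far_field_residual(fpp0):
    """f'(ETA_INF) - 1 for a guessed wall value f''(0) = fpp0."""
    sol = solve_ivp(rhs, (0.0, ETA_INF), [0.0, 0.0, fpp0],
                    method="RK45", rtol=1e-8, atol=1e-10)
    fp_end = sol.y[1, -1]
    return (fp_end if np.isfinite(fp_end) else 10.0) - 1.0

# Shooting: find the wall shear f''(0) that satisfies the far-field condition.
fpp0 = brentq(far_field_residual, 0.5, 1.5)
print(f"f''(0) = {fpp0:.4f} for beta = {BETA}")   # ~0.93 for beta = 0.5
```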
Wang, S; Martinez-Lage, M; Sakai, Y; Chawla, S; Kim, S G; Alonso-Basanta, M; Lustig, R A; Brem, S; Mohan, S; Wolf, R L; Desai, A; Poptani, H
2016-01-01
Early assessment of treatment response is critical in patients with glioblastomas. A combination of DTI and DSC perfusion imaging parameters was evaluated to distinguish glioblastomas with true progression from mixed response and pseudoprogression. Forty-one patients with glioblastomas exhibiting enhancing lesions within 6 months after completion of chemoradiation therapy were retrospectively studied. All patients underwent surgery after MR imaging and were histologically classified as having true progression (>75% tumor), mixed response (25%-75% tumor), or pseudoprogression (<25% tumor). Mean diffusivity, fractional anisotropy, linear anisotropy coefficient, planar anisotropy coefficient, spheric anisotropy coefficient, and maximum relative cerebral blood volume values were measured from the enhancing tissue. A multivariate logistic regression analysis was used to determine the best model for classification of true progression from mixed response or pseudoprogression. Significantly elevated maximum relative cerebral blood volume, fractional anisotropy, linear anisotropy coefficient, and planar anisotropy coefficient and decreased spheric anisotropy coefficient were observed in true progression compared with pseudoprogression (P < .05). There were also significant differences in maximum relative cerebral blood volume, fractional anisotropy, planar anisotropy coefficient, and spheric anisotropy coefficient measurements between mixed response and true progression groups. The best model to distinguish true progression from non-true progression (pseudoprogression and mixed) consisted of fractional anisotropy, linear anisotropy coefficient, and maximum relative cerebral blood volume, resulting in an area under the curve of 0.905. This model also differentiated true progression from mixed response with an area under the curve of 0.901. A combination of fractional anisotropy and maximum relative cerebral blood volume differentiated pseudoprogression from nonpseudoprogression (true progression and mixed) with an area under the curve of 0.807. DTI and DSC perfusion imaging can improve accuracy in assessing treatment response and may aid in individualized treatment of patients with glioblastomas. © 2016 by American Journal of Neuroradiology.
Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja
2014-01-01
Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death and a coalescent based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infecteds. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
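Since the abstract describes the model structure only in words, a generic S-system form with the simple power-law efflux term mentioned above may help; the indexing and the way fluxes are lumped are assumptions here, not the authors' exact equations:

$$\frac{dX_i}{dt} = \alpha_i \prod_j X_j^{g_{ij}} \;-\; \beta_i X_i^{h_i},$$

where X_i is the concentration of metabolite i, the first power-law term aggregates the influxes, and the efflux is reduced to a single power law in X_i alone. Fitting α_i, g_ij, β_i and h_i stepwise to the measured time series yields smoothing curves whose slopes automatically satisfy the mass balances, which is the property exploited for the subsequent parameter estimation.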
Schmidt, Barbara; Roberts, Robin S; Whyte, Robin K; Asztalos, Elizabeth V; Poets, Christian; Rabi, Yacov; Solimano, Alfonso; Nelson, Harvey
2014-10-01
To compare oxygen saturations as displayed to caregivers on offset pulse oximeters in the 2 groups of the Canadian Oxygen Trial. In 5 double-blind randomized trials of oxygen saturation targeting, displayed saturations between 88% and 92% were offset by 3% above or below the true values but returned to true values below 84% and above 96%. During the transition, displayed values remained static at 96% in the lower and at 84% in the higher target group during a 3% change in true saturations. In contrast, displayed values changed rapidly from 88% to 84% in the lower and from 92% to 96% in the higher target group during a 1% change in true saturations. We plotted the distributions of median displayed saturations on days with >12 hours of supplemental oxygen in 1075 Canadian Oxygen Trial participants to reconstruct what caregivers observed at the bedside. The oximeter masking algorithm was associated with an increase in both stability and instability of displayed saturations that occurred during the transition between offset and true displayed values at opposite ends of the 2 target ranges. Caregivers maintained saturations at lower displayed values in the higher than in the lower target group. This differential management reduced the separation between the median true saturations in the 2 groups by approximately 3.5%. The design of the oximeter masking algorithm may have contributed to the smaller-than-expected separation between true saturations in the 2 study groups of recent saturation targeting trials in extremely preterm infants. Copyright © 2014 Elsevier Inc. All rights reserved.
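A sketch of the masking algorithm as it is described above, implemented as a piecewise-linear map from true to displayed saturation; the exact transition shape used by the trial oximeters is not stated here, so linear interpolation between the quoted breakpoints is an assumption.

```python
import numpy as np

def displayed_saturation(true_sat, group):
    """Map true SpO2 (%) to the displayed value for the 'lower' or 'higher' target group.

    Breakpoints follow the description above: displayed values between 88% and 92%
    are offset by +/-3%, return to the true value below 84% and above 96%, remain
    static over a 3% span of true values at one end of the range, and change
    rapidly over a 1% span of true values at the other end.
    """
    if group == "lower":     # displayed = true + 3 inside the offset band
        x = [0, 84, 85, 93, 96, 100]
        y = [0, 84, 88, 96, 96, 100]
    elif group == "higher":  # displayed = true - 3 inside the offset band
        x = [0, 84, 87, 95, 96, 100]
        y = [0, 84, 84, 92, 96, 100]
    else:
        raise ValueError("group must be 'lower' or 'higher'")
    return np.interp(true_sat, x, y)

# Example: the rapid transition in the lower target group (true 84% -> 85%).
print(displayed_saturation([84, 85, 89, 93, 96], "lower"))  # [84. 88. 92. 96. 96.]
```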
NASA Astrophysics Data System (ADS)
Sagasta, Francisco; Zitto, Miguel E.; Piotrkowski, Rosa; Benavent-Climent, Amadeo; Suarez, Elisabet; Gallego, Antolino
2018-03-01
A modification of the original b-value (Gutenberg-Richter parameter) is proposed to evaluate local damage of reinforced concrete structures subjected to dynamical loads via the acoustic emission (AE) method. The modification, called the energy b-value for short, is based on the use of the true energy of the AE signals instead of their peak amplitude, traditionally used for the calculation of the b-value. The proposal is physically supported by the strong correlation between the plastic strain energy dissipated by the specimen and the true energy of the AE signals released during its deformation and cracking process, previously demonstrated by the authors in several publications. AE data analysis consisted of the use of guard sensors and the Continuous Wavelet Transform in order to separate primary and secondary emissions as much as possible according to particular frequency bands. The approach has been experimentally applied to the AE signals coming from a scaled reinforced concrete frame structure, which was subjected to sequential seismic loads of incremental acceleration peak by means of a 3 × 3 m² shaking table. For this specimen two beam-column connections (one exterior and one interior) were instrumented with wide band low frequency sensors properly attached to the structure. Evolution of the energy b-value along the loading process accompanies the evolution of the severe damage at the critical regions of the structure (beam-column connections), thus making its use promising for structural health monitoring purposes.
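A minimal sketch of how an energy-based Gutenberg-Richter slope could be computed from a set of AE hit energies; the exact binning, windowing, and scaling used by the authors are not specified in the abstract, so this least-squares fit of the cumulative distribution is only an assumed form.

```python
import numpy as np

def energy_b_value(energies, n_bins=20):
    """Magnitude of the slope of log10 N(>=E) versus log10 E for AE signal energies."""
    energies = np.asarray(energies, dtype=float)
    log_e = np.log10(energies[energies > 0])
    # Thresholds spanning the observed energy range (in log10 units).
    thresholds = np.linspace(log_e.min(), log_e.max() - 1e-9, n_bins)
    log_n = np.log10([np.sum(log_e >= t) for t in thresholds])
    slope, _ = np.polyfit(thresholds, log_n, 1)
    return -slope

# Synthetic power-law-like energies just to exercise the function.
rng = np.random.default_rng(0)
synthetic = 10 ** rng.exponential(scale=0.7, size=2000)
print(f"energy b-value ~ {energy_b_value(synthetic):.2f}")
```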
Regulation of NF-κB oscillation by spatial parameters in true intracellular space (TiCS)
NASA Astrophysics Data System (ADS)
Ohshima, Daisuke; Sagara, Hiroshi; Ichikawa, Kazuhisa
2013-10-01
Transcription factor NF-κB is activated by cytokine stimulation, viral infection, or hypoxic environment leading to its translocation to the nucleus. The nuclear NF-κB is exported from the nucleus to the cytoplasm again, and by repetitive import and export, NF-κB shows damped oscillation with the period of 1.5-2.0 h. Oscillation pattern of NF-κB is thought to determine the gene expression profile. We published a report on a computational simulation for the oscillation of nuclear NF-κB in a 3D spherical cell, and showed the importance of spatial parameters such as diffusion coefficient and locus of translation for determining the oscillation pattern. Although the value of diffusion coefficient is inherent to protein species, its effective value can be modified by organelle crowding in intracellular space. Here we tested this possibility by computer simulation. The results indicate that the effective value of diffusion coefficient is significantly changed by the organelle crowding, and this alters the oscillation pattern of nuclear NF-κB.
NASA Astrophysics Data System (ADS)
Zorec, J.; Frémat, Y.; Domiciano de Souza, A.; Royer, F.; Cidale, L.; Hubert, A.-M.; Semaan, T.; Martayan, C.; Cochetti, Y. R.; Arias, M. L.; Aidelman, Y.; Stee, P.
2016-11-01
Context. Among intermediate-mass and massive stars, Be stars are the fastest rotators on the main sequence (MS) and, as such, these stars are a cornerstone for validating models of the structure and evolution of rotating stars. Several phenomena, however, induce under- or overestimations either of their apparent Vsini or of their true velocity V. Aims: In the present contribution we aim at obtaining distributions of true rotational velocities corrected for systematic effects induced by the rapid rotation itself, macroturbulent velocities, and binarity. Methods: We study a set of 233 Be stars by assuming they have inclination angles distributed at random. We critically discuss the methods of Cranmer and Lucy-Richardson, which enable us to transform a distribution of projected velocities into another distribution of true rotational velocities, where the gravitational darkening effect on the Vsini parameter is considered in different ways. We conclude that the iterative Lucy-Richardson algorithm best serves the purposes of the present work, but it requires a thorough determination of the stellar fundamental parameters. Results: We conclude that once the mode of ratios of the true velocities of Be stars attains the value V/Vc ≃ 0.77 in the main-sequence (MS) evolutionary phase, it remains unchanged up to the end of the MS lifespan. The statistical corrections found on the distribution of ratios V/Vc for overestimations of Vsini, due to macroturbulent motions and binarity, produce a shift of this distribution toward lower values of V/Vc when Be stars in all MS evolutionary stages are considered together. The mode of the final distribution obtained is at V/Vc ≃ 0.65. This distribution is nearly symmetric and shows that the Be phenomenon is characterized by a wide range of true velocity ratios 0.3 ≲ V/Vc ≲ 0.95. It thus suggests that the probability that Be stars are critical rotators is extremely low. Conclusions: The corrections attempted in the present work represent an initial step to infer indications about the nature of the Be-star surface rotation that will be studied in the second paper of this series. Full Tables 1 and 4 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/A132
NASA Astrophysics Data System (ADS)
Joseph, Abhilash J.; Kumar, Binay
2018-03-01
The conventionally reported value of remanent polarization (Pr) contains contribution from non-remanent components which are not usable for memory device applications. This report presents techniques which extract the true-remanent (intrinsic) component of polarization after eliminating the non-remanent component in ferroelectric ceramics. For this, "remanent hysteresis task" and "positive-up-negative-down technique" were performed which utilized the switchable properties of polarizations to nullify the contributions from the non-remanent (non-switchable) components. The report also addresses the time-dependent leakage behavior of the ceramics focusing on the presence of resistive leakage (a time-dependent parameter) present in the ceramics. The techniques presented here are especially useful for polycrystalline ceramics where leakage current leads to an erroneous estimation of Pr.
SU-E-T-327: The Update of a XML Composing Tool for TrueBeam Developer Mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Mao, W; Jiang, S
2014-06-01
Purpose: To introduce a major upgrade of a novel XML beam composing tool to scientists and engineers who strive to translate certain capabilities of TrueBeam Developer Mode into future clinical benefits of radiation therapy. Methods: TrueBeam Developer Mode provides the users with a test bed for unconventional plans utilizing certain unique features not accessible in the clinical mode. To access the full set of capabilities, a XML beam definition file accommodating all parameters, including kV/MV imaging triggers in the plan, can be locally loaded in this mode; however, it is difficult and laborious to compose one in a text editor. In this study, a stand-alone interactive XML beam composing application, TrueBeam TeachMod, was developed on Windows platforms to assist users in making their unique plans in a WYSIWYG manner. A conventional plan can be imported in a DICOM RT object as the start of the beam editing process, in which trajectories of all axes of a TrueBeam machine can be modified to the intended values at any control point. TeachMod also includes libraries of predefined imaging and treatment procedures to further expedite the process. Results: The TeachMod application is a major upgrade of the TeachMod module within DICOManTX. It fully supports TrueBeam 2.0. Trajectories of all axes including all MLC leaves can be graphically rendered and edited as needed. The time for XML beam composing has been reduced to a negligible amount regardless of the complexity of the plan. A good understanding of XML language and the TrueBeam schema is not required, though preferred. Conclusion: Creating XML beams manually in a text editor would be a lengthy, error-prone process for sophisticated plans. A XML beam composing tool is highly desirable for R&D activities. It will bridge the gap between the scope of TrueBeam capabilities and their clinical application potentials.
On the Use of Topside RO-Derived Electron Density for Model Validation
NASA Astrophysics Data System (ADS)
Shaikh, M. M.; Nava, B.; Haralambous, H.
2018-05-01
In this work, the standard Abel inversion has been exploited as a powerful observation tool, which may be helpful to model the topside of the ionosphere and therefore to validate ionospheric models. A thorough investigation of the behavior of radio occultation (RO)-derived topside electron density (Ne(h))-profiles has therefore been performed with the main purpose of understanding whether it is possible to predict the accuracy of a single RO-retrieved topside by comparing the peak density and height of the retrieved profile to the true values. As a first step, a simulation study based on the use of the NeQuick2 model has been performed to show that when the RO-derived electron density peak and height match the true peak values, the full topside Ne(h)-profile may be considered accurate. In order to validate this hypothesis with experimental data, electron density profiles obtained from four different incoherent scatter radars have therefore been considered together with co-located RO-derived Ne(h)-profiles. The evidence presented in this paper shows that in all cases examined, if the incoherent scatter radar and the corresponding co-located RO profile have matching peak parameter values, their topsides are in very good agreement. The simulation results presented in this work also highlighted the importance of considering the occultation plane azimuth while inverting RO data to obtain the Ne(h)-profile. In particular, they have indicated that there is a preferred range of azimuths of the occultation plane (80°-100°) for which the difference between the "true" and the RO-retrieved Ne(h)-profile in the topside is generally minimal.
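For reference, under the spherical-symmetry and straight-line-propagation assumptions usually invoked for ionospheric radio occultation, the retrieval reduces to the classical Abel transform pair below; the operational retrieval works with calibrated TEC or bending angles, details of which are not given in the abstract:

$$\mathrm{TEC}(p) = 2\int_{p}^{\infty} \frac{N_e(r)\,r}{\sqrt{r^{2}-p^{2}}}\,dr, \qquad N_e(r) = -\frac{1}{\pi}\int_{r}^{\infty} \frac{d\,\mathrm{TEC}(p)/dp}{\sqrt{p^{2}-r^{2}}}\,dp,$$

where p is the impact parameter (tangent radius) of each occultation link. Retrieval errors in the topside arise mainly where the spherical-symmetry assumption breaks down, for example across strong horizontal gradients, which is consistent with the azimuth dependence noted above.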
40 CFR 92.102 - Definitions and abbreviations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.102...), also apply: Accuracy means the difference between the measured value and the true value, where the true... otherwise. Readability means the smallest difference in measured values that can be detected. For example...
40 CFR 92.102 - Definitions and abbreviations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.102...), also apply: Accuracy means the difference between the measured value and the true value, where the true... otherwise. Readability means the smallest difference in measured values that can be detected. For example...
40 CFR 92.102 - Definitions and abbreviations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.102...), also apply: Accuracy means the difference between the measured value and the true value, where the true... otherwise. Readability means the smallest difference in measured values that can be detected. For example...
40 CFR 92.102 - Definitions and abbreviations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.102...), also apply: Accuracy means the difference between the measured value and the true value, where the true... otherwise. Readability means the smallest difference in measured values that can be detected. For example...
Study of the true performance limits of the Astrometric Multiplexing Area Scanner (AMAS)
NASA Technical Reports Server (NTRS)
Frederick, L. W.; Mcalister, H. A.
1975-01-01
The Astrometric Multiplexing Area Scanner (AMAS) is an instrument designed to perform photoelectric long focus astrometry of small fields. Modulation of a telescope focal plane with a rotating Ronchi ruling produces a frequency modulated signal from which relative positions and magnitudes can be extracted. Evaluation of instrumental precision, accuracy, and resolution characteristics with respect to a variety of instrumental and cosmical parameters indicates 1.5 micron precision and accuracy for single stars under specific conditions. This value decreases as the number of field stars increases, particularly for fainter stars.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a biexponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristics (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
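For reference, the three signal models compared above are conventionally written as follows (S_0 is the signal at b = 0; the simplified IVIM form is shown):

$$\frac{S(b)}{S_0}=e^{-b\,\mathrm{ADC}}, \qquad \frac{S(b)}{S_0}=f\,e^{-b\,D_p}+(1-f)\,e^{-b\,D_t}, \qquad \frac{S(b)}{S_0}=e^{-(b\,\mathrm{DDC})^{\alpha}},$$

i.e. the mono-exponential (ADC), bi-exponential IVIM (D_t, D_p, f), and stretched exponential (DDC, α) models. The stretched exponential fit involves only two free parameters, which is consistent with the finding above that a reduced set of six b values can still yield accurate DDC and α.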
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new methods with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
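One plausible way to write the splitting described above (assumed here for illustration; the authors' exact formulation is not reproduced in the abstract) is to constrain the projections u = Ax rather than the voxels x:

$$\hat{x} = \arg\min_{x}\; L(Ax) \;\;\text{s.t.}\;\; Ax \ge 0, \qquad L(u) = \sum_i \left[(u_i + r_i) - y_i \log(u_i + r_i)\right],$$

with A the system matrix, r the expected randoms/scatter, and y the measured coincidences. Introducing u = Ax with scaled dual variable η and penalty ρ, the ADMM iterations alternate

$$x^{k+1} = \arg\min_x \tfrac{\rho}{2}\lVert Ax - u^k + \eta^k\rVert^2, \quad u^{k+1} = \arg\min_{u \ge 0} L(u) + \tfrac{\rho}{2}\lVert Ax^{k+1} - u + \eta^k\rVert^2, \quad \eta^{k+1} = \eta^k + Ax^{k+1} - u^{k+1},$$

where ρ is the kind of user-set quantity that the automatic parameter selection rule mentioned above is meant to tune.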
Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.
Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik
2016-12-01
We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
Dose dependence of true stress parameters in irradiated bcc, fcc, and hcp metals
NASA Astrophysics Data System (ADS)
Byun, T. S.
2007-04-01
The dose dependence of true stress parameters has been investigated for nuclear structural materials: A533B pressure vessel steels, modified 9Cr-1Mo and 9Cr-2WVTa ferritic martensitic steels, 316 and 316LN stainless steels, and Zircaloy-4. After irradiation to significant doses, these alloys show radiation-induced strengthening and often experience prompt necking at yield followed by large necking deformation. In the present work, the critical true stresses for deformation and fracture events, such as yield stress (YS), plastic instability stress (PIS), and true fracture stress (FS), were obtained from uniaxial tensile tests or calculated using a linear strain-hardening model for necking deformation. At low dose levels where no significant embrittlement was detected, the true fracture stress was nearly independent of dose. The plastic instability stress was also independent of dose before the critical dose-to-prompt-necking at yield was reached. A few bcc alloys such as ferritic martensitic steels experienced significant embrittlement at doses above ∼1 dpa; and the true fracture stress decreased with dose. The materials fractured before yield at or above 10 dpa.
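For context, the critical stresses named above are commonly obtained from a Considère-type construction in true stress-strain space; the linear strain-hardening extrapolation written here is only a sketch of the kind of model described, with the hardening rate h treated as an assumed constant:

$$\left.\frac{d\sigma}{d\varepsilon}\right|_{\varepsilon_u} = \sigma(\varepsilon_u) \equiv \mathrm{PIS}, \qquad \sigma(\varepsilon) \approx \mathrm{PIS} + h\,(\varepsilon - \varepsilon_u)\;\;(\varepsilon > \varepsilon_u), \qquad \mathrm{FS} \approx \mathrm{PIS} + h\,(\varepsilon_f - \varepsilon_u),$$

where σ and ε are true stress and true strain, ε_u is the true strain at the onset of necking (ε_u approaches zero when prompt necking occurs at yield, in which case PIS approaches YS), and ε_f is the true strain at fracture.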
Misawa, Noriko; Barbano, David M; Drake, MaryAnne
2016-07-01
Combinations of fresh liquid microfiltration retentate of skim milk, ultrafiltered retentate and permeate produced from microfiltration permeate, cream, and dried lactose monohydrate were used to produce a matrix of 20 milks. The milks contained 5 levels of casein as a percentage of true protein of about 5, 25, 50, 75, and 80% and 4 levels of true protein of 3.0, 3.76, 4.34, and 5.0% with constant lactose percentage of 5%. The experiment was replicated twice and repeated for both 1 and 2% fat content. Hunter color measurements, relative viscosity, and fat globule size distribution were measured, and a trained panel documented appearance and texture attributes on all milks. Overall, casein as a percentage of true protein had stronger effects than level of true protein on Hunter L, a, b values, relative viscosity, and fat globule size when using fresh liquid micellar casein concentrates and milk serum protein concentrates produced by a combination of microfiltration and ultrafiltration. As casein as a percentage of true protein increased, the milks became more white (higher L value), less green (lower negative a value), and less yellow (lower b value). Relative viscosity increased and d(0.9) generally decreased with increasing casein as a percentage of true protein. Panelists perceived milks with increasing casein as a percentage of true protein as more white, more opaque, and less yellow. Panelists were able to detect increased throat cling and mouthcoating with increased casein as a percentage of true protein in 2% milks, even when differences in appearance among milks were masked. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Kopans, Daniel B
2008-02-01
Numerous studies have suggested a link between breast tissue patterns, as defined with mammography, and risk for breast cancer. There may be a relationship, but the author believes all of these studies have methodological flaws. It is impossible, with the parameters used in these studies, to accurately measure the percentage of tissues by volume when two-dimensional x-ray mammographic images are used. Without exposure values, half-value layer information, and knowledge of the compressed thickness of the breast, an accurate volume of tissue cannot be calculated. The great variability in positioning the breast for a mammogram is also an uncontrollable factor in measuring tissue density. Computerized segmentation algorithms can accurately assess the percentage of the x-ray image that is "dense," but this does not accurately measure the true volume of tissue. Since the percentage of dense tissue is ultimately measured in relation to the complete volume of the breast, defining the true boundaries of the breast is also a problem. Studies that purport to show small percentage differences between groups are likely inaccurate. Future investigations need to use three-dimensional information. (c) RSNA, 2008.
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted many attentions from researchers recently. However, a key assumption of Method A is that it treats person-parameter estimates θ ^ s (obtained by maximum likelihood estimation [MLE]) as their true values θ s , thus the deviation of the estimated θ ^ s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named as maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ ^ s which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
Cosmological implications of the transition from the false vacuum to the true vacuum state
NASA Astrophysics Data System (ADS)
Stachowski, Aleksander; Szydłowski, Marek; Urbanowski, Krzysztof
2017-06-01
We study cosmology with running dark energy. The energy density of dark energy is obtained from the quantum process of transition from the false vacuum state to the true vacuum state. We use the Breit-Wigner energy distribution function to model the quantum unstable systems and obtain the energy density of the dark energy parametrization ρ_de(t). We also use Krauss and Dent's idea linking properties of the quantum mechanical decay of unstable states with the properties of the observed Universe. In the cosmological model with this parametrization there is an energy transfer between dark matter and dark energy. The intensity of this process, measured by a parameter α, distinguishes two scenarios. As the Universe starts from the false vacuum state, for small values of α (0 < α < 0.4) it goes through an intermediate oscillatory (quantum) regime of the density of dark energy, while for α > 0.4 the density of the dark energy jumps down. In both cases the present value of the density of dark energy is reached. From a statistical analysis we find this model to be in good agreement with the astronomical data and practically indistinguishable from the ΛCDM model.
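For reference, the Breit-Wigner energy distribution used above for the unstable (false vacuum) state has the standard form (the low-energy cut-off at E_min is part of the usual treatment of unstable states; the normalization convention is assumed here):

$$\omega(E) = \frac{N}{2\pi}\,\Theta(E - E_{\min})\,\frac{\Gamma_0}{(E - E_0)^2 + \Gamma_0^2/4},$$

where E_0 is the energy of the metastable state, Γ_0 its decay width, Θ the step function, and N a normalization constant; the survival amplitude of the false vacuum, and hence the running dark-energy density ρ_de(t), follows from the Fourier transform of ω(E).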
Green, Christopher T.; Böhlke, John Karl; Bekins, Barbara A.; Phillips, Steven P.
2010-01-01
Gradients in contaminant concentrations and isotopic compositions commonly are used to derive reaction parameters for natural attenuation in aquifers. Differences between field-scale (apparent) estimated reaction rates and isotopic fractionations and local-scale (intrinsic) effects are poorly understood for complex natural systems. For a heterogeneous alluvial fan aquifer, numerical models and field observations were used to study the effects of physical heterogeneity on reaction parameter estimates. Field measurements included major ions, age tracers, stable isotopes, and dissolved gases. Parameters were estimated for the O2 reduction rate, denitrification rate, O2 threshold for denitrification, and stable N isotope fractionation during denitrification. For multiple geostatistical realizations of the aquifer, inverse modeling was used to establish reactive transport simulations that were consistent with field observations and served as a basis for numerical experiments to compare sample-based estimates of “apparent” parameters with “true” (intrinsic) values. For this aquifer, non-Gaussian dispersion reduced the magnitudes of apparent reaction rates and isotope fractionations to a greater extent than Gaussian mixing alone. Apparent and true rate constants and fractionation parameters can differ by an order of magnitude or more, especially for samples subject to slow transport, long travel times, or rapid reactions. The effect of mixing on apparent N isotope fractionation potentially explains differences between previous laboratory and field estimates. Similarly, predicted effects on apparent O2 threshold values for denitrification are consistent with previous reports of higher values in aquifers than in the laboratory. These results show that hydrogeological complexity substantially influences the interpretation and prediction of reactive transport.
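For context, apparent rate constants and enrichment factors of the kind discussed above are typically derived from the standard zero-dimensional relations below (a sketch of the usual assumptions, not the full reactive-transport model used in the study):

$$C(t) = C_0\,e^{-k t}, \qquad \delta^{15}\mathrm{N}(t) \approx \delta^{15}\mathrm{N}_0 + \varepsilon \ln\!\frac{C(t)}{C_0},$$

where k is the first-order denitrification rate constant, ε the N isotope enrichment factor, and t the (apparent) groundwater age. Dispersive mixing along flow paths makes the k and ε recovered from field samples smaller in magnitude than their intrinsic values, which is the bias quantified in the study.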
A mechanistic modeling and data assimilation framework for Mojave Desert ecohydrology
Ng, Gene-Hua Crystal.; Bedford, David; Miller, David
2014-01-01
This study demonstrates and addresses challenges in coupled ecohydrological modeling in deserts, which arise due to unique plant adaptations, marginal growing conditions, slow net primary production rates, and highly variable rainfall. We consider model uncertainty from both structural and parameter errors and present a mechanistic model for the shrub Larrea tridentata (creosote bush) under conditions found in the Mojave National Preserve in southeastern California (USA). Desert-specific plant and soil features are incorporated into the CLM-CN model by Oleson et al. (2010). We then develop a data assimilation framework using the ensemble Kalman filter (EnKF) to estimate model parameters based on soil moisture and leaf-area index observations. A new implementation procedure, the “multisite loop EnKF,” tackles parameter estimation difficulties found to affect desert ecohydrological applications. Specifically, the procedure iterates through data from various observation sites to alleviate adverse filter impacts from non-Gaussianity in small desert vegetation state values. It also readjusts inconsistent parameters and states through a model spin-up step that accounts for longer dynamical time scales due to infrequent rainfall in deserts. Observation error variance inflation may also be needed to help prevent divergence of estimates from true values. Synthetic test results highlight the importance of adequate observations for reducing model uncertainty, which can be achieved through data quality or quantity.
Kawashima, Hiroko; Miyati, Tosiaki; Ohno, Naoki; Ohno, Masako; Inokuchi, Masafumi; Ikeda, Hiroko; Gabata, Toshifumi
2018-04-01
To investigate whether the parameters derived from intravoxel incoherent motion (IVIM) MRI could differentiate phyllodes tumours (PTs) from fibroadenomas (FAs) by comparing the apparent diffusion coefficient (ADC) values. This retrospective study included 7 FAs, 10 benign PTs (BPTs), 4 borderline PTs, and one malignant PT. Biexponential analyses of IVIM were performed using a 3 T MRI scanner. Quantitative IVIM parameters [pure diffusion coefficient (D), perfusion-related diffusion coefficient (D*), and fraction (f)] were calculated. The ADC was also calculated using monoexponential fitting. The D and ADC values showed an increasing tendency in the order of FA, BPT, and borderline or malignant PT (BMPT). No significant difference was found in the D value among the three groups. The ADC value of the BMPT group was significantly higher than that of the FA group (p = 0.048). The D* value showed an increasing tendency in the order of BMPT, BPT, and FA, and the D* value of the BMPT group was significantly lower than that of the FA group (p = 0.048). The D* derived from IVIM and the ADC were helpful for differentiating between FA and BMPT. Advances in knowledge: IVIM MRI examination showed that the perfusion-related diffusion coefficient is lower in borderline and malignant PTs than in FAs and the opposite is true for the ADC.
Shan, Yan; Zeng, Meng-su; Liu, Kai; Miao, Xi-Yin; Lin, Jiang; Fu, Cai xia; Xu, Peng-ju
2015-01-01
To evaluate the effect on image quality and intravoxel incoherent motion (IVIM) parameters of small hepatocellular carcinoma (HCC) from choice of either free-breathing (FB) or navigator-triggered (NT) diffusion-weighted (DW) imaging. Thirty patients with 37 small HCCs underwent IVIM DW imaging using 12 b values (0-800 s/mm²) with 2 sequences: NT, FB. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) in small HCCs and liver parenchyma. Apparent diffusion coefficient (ADC) was also calculated. The acquisition time and image quality scores were assessed for the 2 sequences. Independent sample t test was used to compare image quality, signal intensity ratio, IVIM parameters, and ADC values between the 2 sequences; reproducibility of IVIM parameters and ADC values between the 2 sequences was assessed with the Bland-Altman method (BA-LA). Image quality with the NT sequence was superior to that with FB acquisition (P = 0.02). The mean acquisition time for the FB scheme was shorter than that of the NT sequence (6 minutes 14 seconds vs 10 minutes 21 seconds ± 10 seconds; P < 0.01). The signal intensity ratio of small HCCs did not vary significantly between the 2 sequences. The ADC and IVIM parameters from the 2 sequences showed no significant difference. Reproducibility of D* and f parameters in small HCC was poor (BA-LA: 95% confidence interval, -180.8% to 189.2% for D* and -133.8% to 174.9% for f). A moderate reproducibility of D and ADC parameters was observed (BA-LA: 95% confidence interval, -83.5% to 76.8% for D and -74.4% to 88.2% for ADC) between the 2 sequences. The NT DW imaging technique offers no advantage in IVIM parameter measurements of small HCC except better image quality, whereas the FB technique offers greater confidence in fitted diffusion parameters for matched acquisition periods.
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in the rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]), operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
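A minimal sketch of the SIMEX idea described above, applied to a simple linear regression with additive measurement error in the predictor; the quadratic extrapolant and the grid of noise multiples are typical choices, not necessarily those used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# True relationship y = a + b*x, but x is observed with additive noise of known variance.
n, a_true, b_true, sigma_u = 500, 1.0, 2.0, 0.8
x_true = rng.normal(0.0, 1.0, n)
y = a_true + b_true * x_true + rng.normal(0.0, 0.3, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)      # error-prone input

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# SIMEX step 1: add extra noise in multiples lambda of the known error variance
# and average the re-estimated slopes at each lambda.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_slopes = []
for lam in lambdas:
    estimates = [slope(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
                 for _ in range(200)]
    mean_slopes.append(np.mean(estimates))

# SIMEX step 2: extrapolate the trend slope(lambda) back to the notional
# error-free point lambda = -1 with a quadratic fit.
coeffs = np.polyfit(lambdas, mean_slopes, 2)
print("naive slope :", round(mean_slopes[0], 3))
print("SIMEX slope :", round(np.polyval(coeffs, -1.0), 3), " (true =", b_true, ")")
```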
Characterization of difference of Gaussian filters in the detection of mammographic regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catarious, David M. Jr.; Baydush, Alan H.; Floyd, Carey E. Jr.
2006-11-15
In this article, we present a characterization of the effect of difference of Gaussians (DoG) filters in the detection of mammographic regions. DoG filters have been used previously in mammographic mass computer-aided detection (CAD) systems. As DoG filters are constructed from the subtraction of two bivariate Gaussian distributions, they require the specification of three parameters: the size of the filter template and the standard deviations of the constituent Gaussians. The influence of these three parameters in the detection of mammographic masses has not been characterized. In this work, we aim to determine how the parameters affect (1) the physical descriptors of the detected regions, (2) the true and false positive rates, and (3) the classification performance of the individual descriptors. To this end, 30 DoG filters are created from the combination of three template sizes and four values for each of the Gaussians' standard deviations. The filters are used to detect regions in a study database of 181 craniocaudal-view mammograms extracted from the Digital Database for Screening Mammography. To describe the physical characteristics of the identified regions, morphological and textural features are extracted from each of the detected regions. Differences in the mean values of the features caused by altering the DoG parameters are examined through statistical and empirical comparisons. The parameters' effects on the true and false positive rates are determined by examining the mean malignant sensitivities and false positives per image (FPpI). Finally, the effect on the classification performance is described by examining the variation in FPpI at the point where 81% of the malignant masses in the study database are detected. Overall, the findings of the study indicate that increasing the standard deviations of the Gaussians used to construct a DoG filter results in a dramatic decrease in the number of regions identified, at the expense of missing a small number of malignancies. The sharp reduction in the number of identified regions allowed the identification of textural differences between large and small mammographic regions. We find that the classification performances of the features that achieve the lowest average FPpI are influenced by all three of the parameters.
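A minimal sketch of how a DoG template is constructed from the three parameters discussed above (template size and the two Gaussian standard deviations); the normalization and the convolution-plus-threshold detection step are illustrative assumptions, not the CAD pipeline of the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def dog_template(size, sigma_narrow, sigma_wide):
    """Difference-of-Gaussians template: narrow Gaussian minus wide Gaussian."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = xx ** 2 + yy ** 2
    def gauss(s):
        g = np.exp(-r2 / (2.0 * s ** 2))
        return g / g.sum()
    return gauss(sigma_narrow) - gauss(sigma_wide)

# Example: filter a synthetic image and threshold the response to get candidate regions.
rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, (256, 256))
image[100:120, 100:120] += 3.0                 # a bright blob to detect
response = fftconvolve(image, dog_template(31, 3.0, 9.0), mode="same")
candidates = response > response.mean() + 4 * response.std()
print("candidate pixels:", int(candidates.sum()))
```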
Bai, Yan; Lin, Yusong; Tian, Jie; Shi, Dapeng; Cheng, Jingliang; Haacke, E. Mark; Hong, Xiaohua; Ma, Bo; Zhou, Jinyuan
2016-01-01
Purpose To quantitatively compare the potential of various diffusion parameters obtained from monoexponential, biexponential, and stretched exponential diffusion-weighted imaging models and diffusion kurtosis imaging in the grading of gliomas. Materials and Methods This study was approved by the local ethics committee, and written informed consent was obtained from all subjects. Both diffusion-weighted imaging and diffusion kurtosis imaging were performed in 69 patients with pathologically proven gliomas by using a 3-T magnetic resonance (MR) imaging unit. An isotropic apparent diffusion coefficient (ADC), true ADC, pseudo-ADC, and perfusion fraction were calculated from diffusion-weighted images by using a biexponential model. A water molecular diffusion heterogeneity index and distributed diffusion coefficient were calculated from diffusion-weighted images by using a stretched exponential model. Mean diffusivity, fractional anisotropy, and mean kurtosis were calculated from diffusion kurtosis images. All values were compared between high-grade and low-grade gliomas by using a Mann-Whitney U test. Receiver operating characteristic and Spearman rank correlation analysis were used for statistical evaluations. Results ADC, true ADC, perfusion fraction, water molecular diffusion heterogeneity index, distributed diffusion coefficient, and mean diffusivity values were significantly lower in high-grade gliomas than in low-grade gliomas (U = 109, 56, 129, 6, 206, and 229, respectively; P < .05). Pseudo-ADC and mean kurtosis values were significantly higher in high-grade gliomas than in low-grade gliomas (U = 98 and 8, respectively; P < .05). Both water molecular diffusion heterogeneity index (area under the receiver operating characteristic curve [AUC] = 0.993) and mean kurtosis (AUC = 0.991) had significantly greater AUC values than ADC (AUC = 0.866), mean diffusivity (AUC = 0.722), and fractional anisotropy (AUC = 0.500) in the differentiation of low-grade and high-grade gliomas (P < .05). Conclusion Water molecular diffusion heterogeneity index and mean kurtosis values may provide additional information and improve the grading of gliomas compared with conventional diffusion parameters. © RSNA, 2015 Online supplemental material is available for this article. PMID:26230975
Anderman, E.R.; Hill, M.C.
2000-01-01
This report documents the Hydrogeologic-Unit Flow (HUF) Package for the groundwater modeling computer program MODFLOW-2000. The HUF Package is an alternative internal flow package that allows the vertical geometry of the system hydrogeology to be defined explicitly within the model using hydrogeologic units that can be different than the definition of the model layers. The HUF Package works with all the processes of MODFLOW-2000. For the Ground-Water Flow Process, the HUF Package calculates effective hydraulic properties for the model layers based on the hydraulic properties of the hydrogeologic units, which are defined by the user using parameters. The hydraulic properties are used to calculate the conductance coefficients and other terms needed to solve the ground-water flow equation. The sensitivity of the model to the parameters defined within the HUF Package input file can be calculated using the Sensitivity Process, using observations defined with the Observation Process. Optimal values of the parameters can be estimated by using the Parameter-Estimation Process. The HUF Package is nearly identical to the Layer-Property Flow (LPF) Package, the major difference being the definition of the vertical geometry of the system hydrogeology. Use of the HUF Package is illustrated in two test cases, which also serve to verify the performance of the package by showing that the Parameter-Estimation Process produces the true parameter values when exact observations are used.
NASA Astrophysics Data System (ADS)
Li, Xibing; Du, Kun; Li, Diyuan
2015-11-01
True triaxial tests have been carried out on granite, sandstone and cement mortar using cubic specimens with the process of unloading the minor principal stress. The strengths and failure modes of the three rock materials are studied in the processes of unloading σ3 and loading σ1 by the newly developed true triaxial test system under different σ2, aiming to study the mechanical responses of the rock in underground excavation at depth. The results show that the rock strength increases with increasing intermediate principal stress σ2 when σ3 is unloaded to zero. The true triaxial strength criterion based on a power-law relationship can be used to fit the testing data. The "best-fitting" material parameters A and n (A > 1.4 and n < 1.0) fall largely within the same range as expected by Al-Ajmi and Zimmerman (Int J Rock Mech Min Sci 42(3):431-439, 2005). This indicates that the end effect caused by the height-to-width ratio of the cubic specimens does not significantly affect the testing results under true triaxial tests. Both the strength and failure modes of cubic rock specimens under the true triaxial unloading condition are affected by the intermediate principal stress. When σ2 increases to a critical value for the strong and hard rocks (R4, R5 and R6), the rock failure mode may change from shear to slabbing. However, medium strong and weak rocks (R3 and R2) tend to fail in shear after a large amount of plastic deformation, even under a relatively high intermediate principal stress. The maximum extension strain criterion of Stacey (Int J Rock Mech Min Sci Geomech Abstr 18(6):469-474, 1981) can be used to explain the change of failure mode from shear to slabbing for strong and hard rocks under true triaxial unloading test conditions.
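As an illustration of how such a power-law criterion can be fitted, the sketch below uses a generic Mogi-type form, tau_oct = A * sigma_m2^n with sigma_m2 = (sigma1 + sigma3)/2. Both the functional form and the failure stresses are assumptions for illustration; they are not the criterion or data of the study.

```python
# Sketch: fitting a Mogi-type power-law true triaxial strength criterion
# tau_oct = A * sigma_m2**n, with sigma_m2 = (sigma1 + sigma3) / 2.
# The failure stresses below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def tau_oct(s1, s2, s3):
    return np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0

def criterion(sigma_m2, A, n):
    return A * sigma_m2**n

# Hypothetical failure states (MPa): columns are sigma1, sigma2, sigma3
data = np.array([[180.0,  30.0, 0.0],
                 [220.0,  60.0, 0.0],
                 [250.0,  90.0, 0.0],
                 [270.0, 120.0, 0.0]])
s1, s2, s3 = data.T
x = (s1 + s3) / 2.0
y = tau_oct(s1, s2, s3)
(A, n), _ = curve_fit(criterion, x, y, p0=[2.0, 0.9])
# These synthetic data are chosen so the fit lands in the A > 1.4, n < 1.0 range.
print(f"A = {A:.2f}, n = {n:.2f}")
```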
The cost of uniqueness in groundwater model calibration
NASA Astrophysics Data System (ADS)
Moore, Catherine; Doherty, John
2006-04-01
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
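The statement that each estimated property value is a weighted average of the true field can be made concrete with the resolution matrix of a Tikhonov-regularized linear inverse problem. The toy sensitivity matrix and regularization weight below are assumptions for illustration only.

```python
# Sketch: resolution matrix R of a Tikhonov-regularized linear inversion.
# Each row of R gives the averaging weights through which the estimate at
# one point "sees" the true parameter field; R = I only for perfect resolution.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 12, 30
G = rng.normal(size=(n_obs, n_par))          # toy sensitivity (Jacobian) matrix
lam = 5.0                                     # regularization weight

# Regularized generalized inverse and resolution matrix
G_sharp = np.linalg.solve(G.T @ G + lam * np.eye(n_par), G.T)
R = G_sharp @ G

k_true = rng.lognormal(mean=0.0, sigma=0.5, size=n_par)
k_est = R @ k_true                            # noise-free estimate = blurred truth
print("row sums of R (averaging weights):", R.sum(axis=1)[:5])
print("true vs estimated at node 0:", k_true[0], k_est[0])
```

For noise-free data the estimate is exactly R times the true field, which is the "weighted average over a much larger area" described above.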
Deconvolution of astronomical images using SOR with adaptive relaxation.
Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J
2011-07-04
We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments, +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
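A minimal sketch of SOR (Gauss-Seidel plus relaxation) applied to the regularized normal equations of a small 1-D deconvolution problem is given below. The blur kernel, noise level, and the fixed relaxation parameter omega are assumptions, and no attempt is made to reproduce the adaptive update rule proposed in the paper.

```python
# Sketch: SOR iteration for a 1-D deconvolution posed as the regularized
# normal equations (H^T H + eps I) x = H^T y. Kernel, noise level and
# relaxation parameter omega are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 60
x_true = np.zeros(n); x_true[25:30] = 1.0          # simple "object"
kernel = np.array([0.25, 0.5, 0.25])
H = np.zeros((n, n))
for i in range(n):
    for k, c in zip((-1, 0, 1), kernel):
        H[i, (i + k) % n] = c                       # circulant blur matrix
y = H @ x_true + rng.normal(0, 0.01, n)

A = H.T @ H + 1e-3 * np.eye(n)
b = H.T @ y
omega = 1.5                                         # relaxation parameter
x = np.zeros(n)
for _ in range(200):                                # SOR sweeps
    for i in range(n):
        sigma = A[i] @ x - A[i, i] * x[i]           # uses already-updated entries
        x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
print("max abs reconstruction error:", np.abs(x - x_true).max())
```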
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
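The Cardinal Temperature Model with Inflection referenced above has a closed form; a sketch of the usual Rosso-type parameterization is given below. The cardinal temperatures and optimum growth rate are arbitrary illustrative values, not the estimates obtained in the study.

```python
# Sketch: Cardinal Temperature Model with Inflection (CTMI), Rosso-type form.
# mu_opt, t_min, t_opt, t_max below are arbitrary illustrative values.
import numpy as np

def ctmi(T, mu_opt, t_min, t_opt, t_max):
    T = np.asarray(T, dtype=float)
    num = (T - t_max) * (T - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (T - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * T))
    mu = mu_opt * num / den
    # growth rate is zero outside the (t_min, t_max) interval
    return np.where((T > t_min) & (T < t_max), mu, 0.0)

temps = np.arange(5.0, 46.0, 5.0)
print(ctmi(temps, mu_opt=1.2, t_min=5.0, t_opt=37.0, t_max=45.0))
```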
Value judgments and the true self.
Newman, George E; Bloom, Paul; Knobe, Joshua
2014-02-01
The belief that individuals have a "true self" plays an important role in many areas of psychology as well as everyday life. The present studies demonstrate that people have a general tendency to conclude that the true self is fundamentally good--that is, that deep inside every individual, there is something motivating him or her to behave in ways that are virtuous. Study 1 finds that observers are more likely to see a person's true self reflected in behaviors they deem to be morally good than in behaviors they deem to be bad. Study 2 replicates this effect and demonstrates that observers' own moral values influence what they judge to be another person's true self. Finally, Study 3 finds that this normative view of the true self is independent of the particular type of mental state (beliefs vs. feelings) that is seen as responsible for an agent's behavior.
Automatic user customization for improving the performance of a self-paced brain interface system.
Fatourechi, Mehrdad; Bashashati, Ali; Birch, Gary E; Ward, Rabab K
2006-12-01
Customizing the parameter values of brain interface (BI) systems by a human expert has the advantage of being fast and computationally efficient. However, as the number of users and EEG channels grows, this process becomes increasingly time consuming and exhausting. Manual customization also introduces inaccuracies in the estimation of the parameter values. In this paper, the performance of a self-paced BI system whose design parameter values were automatically user customized using a genetic algorithm (GA) is studied. The GA automatically estimates the shapes of movement-related potentials (MRPs), whose features are then extracted to drive the BI. Offline analysis of the data of eight subjects revealed that automatic user customization improved the true positive (TP) rate of the system by an average of 6.68% over that whose customization was carried out by a human expert, i.e., by visually inspecting the MRP templates. On average, the best improvement in the TP rate (an average of 9.82%) was achieved for four individuals with spinal cord injury. In this case, the visual estimation of the parameter values of the MRP templates was very difficult because of the highly noisy nature of the EEG signals. For four able-bodied subjects, for which the MRP templates were less noisy, the automatic user customization led to an average improvement of 3.58% in the TP rate. The results also show that the inter-subject variability of the TP rate is also reduced compared to the case when user customization is carried out by a human expert. These findings provide some primary evidence that automatic user customization leads to beneficial results in the design of a self-paced BI for individuals with spinal cord injury.
Magnetic Moment Quantifications of Small Spherical Objects in MRI
Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin
2014-01-01
Purpose The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Methods Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Results Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. Conclusion An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517
Magnetic moment quantifications of small spherical objects in MRI.
Cheng, Yu-Chung N; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin
2015-07-01
The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values, and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Ahmadpanah, J; Ghavi Hossein-Zadeh, N; Shadparvar, A A; Pakdel, A
2017-02-01
1. The objectives of the current study were to investigate the effect of incidence rate (5%, 10%, 20%, 30% and 50%) of ascites syndrome on the expression of genetic characteristics for body weight at 5 weeks of age (BW5) and AS and to compare different methods of genetic parameter estimation for these traits. 2. Based on stochastic simulation, a population with discrete generations was created in which random mating was used for 10 generations. Two methods of restricted maximum likelihood and Bayesian approach via Gibbs sampling were used for the estimation of genetic parameters. A bivariate model including maternal effects was used. The root mean square error for direct heritabilities was also calculated. 3. The results showed that when incidence rates of ascites increased from 5% to 30%, the heritability of AS increased from 0.013 and 0.005 to 0.110 and 0.162 for linear and threshold models, respectively. 4. Maternal effects were significant for both BW5 and AS. Genetic correlations were decreased by increasing incidence rates of ascites in the population from 0.678 and 0.587 at 5% level of ascites to 0.393 and -0.260 at 50% occurrence for linear and threshold models, respectively. 5. The RMSE of direct heritability from true values for BW5 was greater based on a linear-threshold model compared with the linear model of analysis (0.0092 vs. 0.0015). The RMSE of direct heritability from true values for AS was greater based on a linear-linear model (1.21 vs. 1.14). 6. In order to rank birds for ascites incidence, it is recommended to use a threshold model because it resulted in higher heritability estimates compared with the linear model and that BW5 could be one of the main components of selection goals.
F-8C adaptive flight control extensions. [for maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
Sheffield, Catherine A; Kane, Michael P; Bakst, Gary; Busch, Robert S; Abelseth, Jill M; Hamilton, Robert A
2009-09-01
This study compared the accuracy and precision of four value-added glucose meters. Finger stick glucose measurements in diabetes patients were performed using the Abbott Diabetes Care (Alameda, CA) Optium, Diagnostic Devices, Inc. (Miami, FL) DDI Prodigy, Home Diagnostics, Inc. (Fort Lauderdale, FL) HDI True Track Smart System, and Arkray, USA (Minneapolis, MN) HypoGuard Assure Pro. Finger glucose measurements were compared with laboratory reference results. Accuracy was assessed by a Clarke error grid analysis (EGA), a Parkes EGA, and within 5%, 10%, 15%, and 20% of the laboratory value criteria (chi2 analysis). Meter precision was determined by calculating absolute mean differences in glucose values between duplicate samples (Kruskal-Wallis test). Finger sticks were obtained from 125 diabetes patients, of which 90.4% were Caucasian, 51.2% were female, 83.2% had type 2 diabetes, and average age of 59 years (SD 14 years). Mean venipuncture blood glucose was 151 mg/dL (SD +/-65 mg/dL; range, 58-474 mg/dL). Clinical accuracy by Clarke EGA was demonstrated in 94% of Optium, 82% of Prodigy, 61% of True Track, and 77% of the Assure Pro samples (P < 0.05 for Optium and True Track compared to all others). By Parkes EGA, the True Track was significantly less accurate than the other meters. Within 5% accuracy was achieved in 34%, 24%, 29%, and 13%, respectively (P < 0.05 for Optium, Prodigy, and Assure Pro compared to True Track). Within 10% accuracy was significantly greater for the Optium, Prodigy, and Assure Pro compared to True Track. Significantly more Optium results demonstrated within 15% and 20% accuracy compared to the other meter systems. The HDI True Track was significantly less precise than the other meter systems. The Abbott Optium was significantly more accurate than the other meter systems, whereas the HDI True Track was significantly less accurate and less precise compared to the other meter systems.
TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Grady, K; Davis, S; Seuntjens, J
Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model’s source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF has been created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial-and-error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model’s phase space matched Varian’s counterpart using the default source parameters, it was validated to match 10 × 10 cm² Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm² PTW microDiamond measurements. All dose-to-water simulations included detector models to account for the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model’s PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements. KO acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
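The agreement metric quoted repeatedly above, the local percent difference between simulated and measured dose values at matched points, is straightforward to compute; the sketch below uses made-up depth-dose numbers purely to show the calculation.

```python
# Sketch: local percent difference between a simulated and a measured
# depth-dose curve sampled at the same depths (synthetic numbers only).
import numpy as np

measured = np.array([100.0, 98.5, 95.2, 91.8, 88.1])   # arbitrary PDD values (%)
simulated = np.array([100.0, 98.0, 95.8, 91.2, 88.6])
local_pct_diff = 100.0 * (simulated - measured) / measured
print("max |local % diff|:", np.abs(local_pct_diff).max())
```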
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for those problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface system adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix used for the PCA are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for the PCA is inconsistent with the true model, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
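A toy version of the reduced-space workflow (build a PCA basis from prior samples, then run Metropolis sampling on the principal-component coefficients) is sketched below. The convolution forward model, prior covariance, noise level, and sampler settings are assumptions and are far simpler than the study's setup.

```python
# Sketch: model reduction with PCA followed by MCMC (Metropolis) sampling
# of the principal-component coefficients, on a toy convolution problem.
import numpy as np

rng = np.random.default_rng(3)
n_par, n_pc = 40, 5

# Prior ensemble -> PCA basis (leading directions of the prior covariance)
idx = np.arange(n_par)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
prior = rng.multivariate_normal(np.zeros(n_par), cov, size=500)
mean = prior.mean(axis=0)
_, _, Vt = np.linalg.svd(prior - mean, full_matrices=False)
basis = Vt[:n_pc]                                  # n_pc x n_par

# Toy forward model (moving-average convolution) and synthetic data
kernel = np.ones(5) / 5.0
def forward(m):
    return np.convolve(m, kernel, mode="same")
m_true = mean + basis.T @ rng.normal(size=n_pc)
sigma = 0.05
d_obs = forward(m_true) + rng.normal(0, sigma, n_par)

def log_post(c):                                   # Gaussian likelihood + N(0,1) prior on PCs
    r = d_obs - forward(mean + basis.T @ c)
    return -0.5 * np.sum(r**2) / sigma**2 - 0.5 * np.sum(c**2)

# Metropolis sampling in the reduced (PC-coefficient) space
c, lp, samples = np.zeros(n_pc), log_post(np.zeros(n_pc)), []
for _ in range(5000):
    c_prop = c + 0.1 * rng.normal(size=n_pc)
    lp_prop = log_post(c_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = c_prop, lp_prop
    samples.append(c.copy())
post_mean = mean + basis.T @ np.mean(samples[1000:], axis=0)
print("rms error of posterior mean:", np.sqrt(np.mean((post_mean - m_true)**2)))
```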
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.
2013-12-01
We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength, and multi-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have come into common use (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, direct use of radiation transfer calculations has been employed for non-linear remote sensing problems in place of look-up table methods, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel and multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method is a combination of the MAP method (maximum a posteriori method, Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint for the state vector. Furthermore, we include a radiation transfer calculation code, Rstar (Nakajima and Tanaka, 1986, 1988), which is solved numerically at each iteration of the solution search. The Rstar code has been directly used in the AERONET operational processing system (Dubovik and King, 2000). Retrieved parameters in our algorithm are aerosol optical properties, such as aerosol optical thickness (AOT) of fine mode, sea salt, and dust particles, a volume soot fraction in fine mode particles, and ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize pixels in each of the horizontal sub-domains constituting the target area. Then we successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests for the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm for the land area. In this test, we simulated satellite-observed radiances for a sub-domain consisting of 5 by 5 pixels by the Rstar code, assuming wavelengths of 380, 674, 870 and 1600 [nm], the atmospheric condition of the US standard atmosphere, and several aerosol and ground surface conditions. The result of the experiment showed that AOTs of fine mode and dust particles, soot fraction and ground surface albedo at the wavelength of 674 [nm] are retrieved within absolute differences of 0.04, 0.01, 0.06 and 0.006 from the true values, respectively, for the case of a dark surface, and within 0.06, 0.03, 0.04 and 0.10, respectively, for the case of a bright surface. We will conduct more tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
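The combined cost function described above (a prior-constrained MAP term plus a Phillips-Twomey smoothness penalty) can be written and minimized in closed form for a linear toy problem; the sketch below does so. The forward matrix, covariances, and smoothing weight are assumptions for illustration and stand in for the nonlinear Rstar-based forward model.

```python
# Sketch: MAP retrieval with an added Phillips-Twomey smoothness constraint
# for a linear toy forward model y = K x + noise. All matrices and weights
# below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_state, n_meas = 25, 15
K = rng.normal(size=(n_meas, n_state))                 # toy Jacobian
x_true = np.sin(np.linspace(0, np.pi, n_state))
S_eps = 0.05**2 * np.eye(n_meas)                       # measurement covariance
S_a = 1.0**2 * np.eye(n_state)                         # a priori covariance
x_a = np.zeros(n_state)                                # a priori state
y = K @ x_true + rng.multivariate_normal(np.zeros(n_meas), S_eps)

# Second-difference operator for the Phillips-Twomey smoothing term
D = np.diff(np.eye(n_state), n=2, axis=0)
gamma = 10.0                                           # smoothing weight

Si = np.linalg.inv(S_eps)
Ai = np.linalg.inv(S_a)
lhs = K.T @ Si @ K + Ai + gamma * D.T @ D
rhs = K.T @ Si @ y + Ai @ x_a
x_hat = np.linalg.solve(lhs, rhs)                      # minimizer of the combined cost
print("rms retrieval error:", np.sqrt(np.mean((x_hat - x_true)**2)))
```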
Lucovnik, Miha; Chambliss, Linda R; Blumrick, Richard; Balducci, James; Gersak, Ksenija; Garfield, Robert E
2016-10-01
It has been shown that noninvasive uterine electromyography (EMG) can identify true preterm labor more accurately than methods available to clinicians today. The objective of this study was to evaluate the effect of body mass index (BMI) on the accuracy of uterine EMG in predicting preterm delivery. Predictive values of uterine EMG for preterm delivery were compared in obese versus overweight/normal BMI patients. Hanley-McNeil test was used to compare receiver operator characteristics curves in these groups. Previously reported EMG cutoffs were used to determine groups with false positive/false negative and true positive/true negative EMG results. BMI in these groups was compared with Student t test (p < 0.05 significant). A total of 88 patients were included: 20 obese, 64 overweight, and four with normal BMI. EMG predicted preterm delivery within 7 days with area under the curve = 0.95 in the normal/overweight group, and with area under the curve = 1.00 in the obese group (p = 0.08). Six patients in true preterm labor (delivering within 7 days from EMG measurement) had low EMG values (false negative group). There were no false positive results. No significant differences in patient's BMI were noted between false negative group patients and preterm labor patients with high EMG values (true positive group) and nonlabor patients with low EMG values (true negative group; p = 0.32). Accuracy of noninvasive uterine EMG monitoring and its predictive value for preterm delivery are not affected by obesity. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
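The core point, that plugging in fixed parameter values narrows intervals relative to integrating over parameter uncertainty, can be seen in the simplest conjugate setting. The sketch below compares a plug-in normal interval with a full-Bayes t interval for a mean under a noninformative prior; it is only a caricature of the hierarchical tide-gauge model, and all settings are illustrative.

```python
# Sketch: empirical-Bayes (plug-in variance) vs full-Bayes (variance integrated
# out) intervals for a mean; coverage of the true mean is compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_rep, n_obs, mu_true = 2000, 15, 1.0
cover_plugin, cover_full = 0, 0
for _ in range(n_rep):
    y = rng.normal(mu_true, 2.0, n_obs)
    m, s = y.mean(), y.std(ddof=1)
    # "Empirical Bayes": treat the estimated variance as known
    lo, hi = stats.norm.interval(0.95, loc=m, scale=s / np.sqrt(n_obs))
    cover_plugin += lo < mu_true < hi
    # Full Bayes: posterior for the mean is Student-t when the variance is unknown
    lo, hi = stats.t.interval(0.95, df=n_obs - 1, loc=m, scale=s / np.sqrt(n_obs))
    cover_full += lo < mu_true < hi
print("plug-in coverage:", cover_plugin / n_rep,
      "full-Bayes coverage:", cover_full / n_rep)
```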
Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre
2011-02-16
With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
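A compact simulation of the error structure described above (classical multiplicative error on the measured activity, Berkson multiplicative error on the thyroid mass) is sketched below. All numerical settings are arbitrary illustrations, not the Chernobyl-study values.

```python
# Sketch: simulating the dose structure with classical error on Q and
# Berkson error on M, then generating binary responses from pr(Y=1|D)=R/(1+R).
import numpy as np

rng = np.random.default_rng(6)
n = 50000
lam0, ear = 0.001, 0.3           # baseline rate and excess absolute risk per Gy (arbitrary)
f = 0.05                         # normalizing multiplier (arbitrary)

q_true = rng.lognormal(mean=2.0, sigma=0.8, size=n)   # true thyroid activity
m_mes = rng.lognormal(mean=2.3, sigma=0.3, size=n)    # measured thyroid mass

q_mes = q_true * rng.lognormal(0.0, 0.4, n)           # classical: Q_mes = Q_tr * V_Q
m_true = m_mes * rng.lognormal(0.0, 0.4, n)           # Berkson:   M_tr = M_mes * V_M

d_true = f * q_true / m_true                          # true dose (Gy)
d_mes = f * q_mes / m_mes                             # calculated dose (Gy)

r = lam0 + ear * d_true
y = rng.binomial(1, r / (1.0 + r))                    # binary responses

ratio = d_mes / d_true                                # combined dose error V_Q * V_M
print("geometric SD of the dose error:", round(float(np.exp(np.log(ratio).std())), 2))
print("simulated cases:", int(y.sum()), "of", n)
```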
1981-02-01
monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one's ... possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ... r(θ) is negative for θ < μ and positive for θ > μ; U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for
Scaling rates of true polar wander in convecting planets and moons
NASA Astrophysics Data System (ADS)
Rose, Ian; Buffett, Bruce
2017-12-01
Mass redistribution in the convecting mantle of a planet causes perturbations in its moment of inertia tensor. Conservation of angular momentum dictates that these perturbations change the direction of the rotation vector of the planet, a process known as true polar wander (TPW). Although the existence of TPW on Earth is firmly established, its rate and magnitude over geologic time scales remain controversial. Here we present scaling analyses and numerical simulations of TPW due to mantle convection over a range of parameter space relevant to planetary interiors. For simple rotating convection, we identify a set of dimensionless parameters that fully characterize true polar wander. We use these parameters to define timescales for the growth of moment of inertia perturbations due to convection and for their relaxation due to true polar wander. These timescales, as well as the relative sizes of convective anomalies, control the rate and magnitude of TPW. This analysis also clarifies the nature of so called "inertial interchange" TPW events, and relates them to a broader class of events that enable large and often rapid TPW. We expect these events to have been more frequent in Earth's past.
Nakano, Jinichiro
2013-02-01
The thermodynamic properties of the Fe-Mn-C system were investigated by using an analytical model constructed by a CALPHAD approach. The stacking fault energy (SFE) of the fcc structure with respect to the hcp phase was always constant at T_0, independent of the composition and temperature when other related parameters were assumed to be constant. Experimental limits for the thermal hcp formation and the mechanical (deformation-induced) hcp formation were separated by the SFE at T_0. The driving force for the fcc to hcp transition, defined as a dimensionless value -ΔG_m/(RT), was determined in the presence of Fe-rich and Mn-rich composition sets in each phase. Carbon tended to partition to the Mn-rich phase rather than to the Fe-rich phase for the compositions studied. The results obtained revealed a thermo-mechanical correlation with empirical yield strength, maximum true stress and maximum true strain. The proportionality between thermodynamics and mechanical properties is discussed.
Aagaard, Kevin; Eash, Josh D.; Ford, Walt; Heglund, Patricia J.; McDowell, Michelle; Thogmartin, Wayne E.
2018-01-01
Recent evidence suggests wild rice (Zizania palustris), an important resource for migrating waterfowl, is declining in parts of central North America, providing motivation to rigorously quantify the relationship between waterfowl and wild rice. A hierarchical mixed-effects model was applied to data on waterfowl abundance for 16 species, wild rice stem density, and two measures of water depth (true water depth at vegetation sampling locations and water surface elevation). Results provide evidence for an effect of true water depth (TWD) on wild rice abundance (posterior mean estimate for TWD coefficient, β TWD = 0.92, 95% confidence interval = 0.11—1.74), but not for an effect of wild rice stem density or water surface elevation on local waterfowl abundance (posterior mean values for relevant parameters overlapped 0). Refined protocols for sampling design and more consistent sampling frequency to increase data quality should be pursued to overcome issues that may have obfuscated relationships evaluated here.
NASA Astrophysics Data System (ADS)
Braam, Miranda; Beyrich, Frank; Bange, Jens; Platis, Andreas; Martin, Sabrina; Maronga, Björn; Moene, Arnold F.
2016-02-01
We elaborate on the preliminary results presented in Beyrich et al. (in Boundary-Layer Meteorol 144:83-112, 2012), who compared the structure parameter of temperature (C_T^2) obtained with the unmanned meteorological mini aerial vehicle (M2AV) versus C_T^2 obtained with two large-aperture scintillometers (LASs) for a limited dataset from one single experiment (LITFASS-2009). They found that C_T^2 obtained from the M2AV data is significantly larger than that obtained from the LAS data. We investigate whether similar differences can be found for the flights on the other six days during LITFASS-2009 and LITFASS-2010, and whether these differences can be reduced or explained through a more elaborate processing of both the LAS data and the M2AV data. This processing includes different corrections and measures to reduce the differences between the spatial and temporal averaging of the datasets. We conclude that the differences reported in Beyrich et al. can be found for other days as well. For the LAS-derived values, the additional processing steps that have the largest effect are the saturation correction and the humidity correction. For the M2AV-derived values, the most important step is the application of the scintillometer path-weighting function. Using the true air speed of the M2AV to convert from a temporal to a spatial structure function rather than the ground speed (as in Beyrich et al.) does not change the mean discrepancy, but it does affect C_T^2 values for individual flights. To investigate whether C_T^2 derived from the M2AV data depends on the fact that the underlying temperature dataset combines spatial and temporal sampling, we used large-eddy simulation data to analyze C_T^2 from virtual flights with different mean ground speeds. This analysis shows that C_T^2 depends only slightly on the true air speed when averaged over many flights.
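For reference, C_T^2 is defined through the inertial-range structure function D_T(r) = C_T^2 r^(2/3). The sketch below estimates it from a spatial temperature transect (for instance one reconstructed from aircraft data using the true air speed); the synthetic turbulence-like series and the choice of inertial-range lags are assumptions for illustration.

```python
# Sketch: estimating the temperature structure parameter C_T^2 from a spatial
# transect via D_T(r) = <[T(x + r) - T(x)]^2> = C_T^2 * r**(2/3).
import numpy as np

rng = np.random.default_rng(7)
dx = 1.0                                   # spatial sampling interval (m)
n = 20000
# Synthetic fluctuation series with an approximately -5/3 power spectrum
white = rng.normal(size=n)
freqs = np.fft.rfftfreq(n, d=dx)
shaping = np.zeros_like(freqs)
shaping[1:] = freqs[1:] ** (-5.0 / 6.0)    # amplitude shaping for a -5/3 spectrum
temp = np.fft.irfft(np.fft.rfft(white) * shaping, n)
temp *= 0.3 / temp.std()                   # scale to ~0.3 K fluctuations

lags = range(2, 40)                        # lags assumed to lie in the inertial range
separations = np.array(list(lags)) * dx
d_t = np.array([np.mean((temp[m:] - temp[:-m]) ** 2) for m in lags])
ct2 = np.mean(d_t / separations ** (2.0 / 3.0))
print("estimated C_T^2:", ct2, "K^2 m^(-2/3)")
```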
Effect of diffusion time on liver DWI: an experimental study of normal and fibrotic livers.
Zhou, Iris Y; Gao, Darwin S; Chow, April M; Fan, Shujuan; Cheung, Matthew M; Ling, Changchun; Liu, Xiaobing; Cao, Peng; Guo, Hua; Man, Kwan; Wu, Ed X
2014-11-01
To investigate whether diffusion time (Δ) affects the diffusion measurements in liver and their sensitivity in detecting fibrosis. Liver fibrosis was induced in Sprague-Dawley rats (n = 12) by carbon tetrachloride (CCl(4)) injections. Diffusion-weighted MRI was performed longitudinally during 8-week CCl(4) administration at 7 Tesla (T) using single-shot stimulated-echo EPI with five b-values (0 to 1000 s/mm(2)) and three Δs. Apparent diffusion coefficient (ADC) and true diffusion coefficient (D(true)) were calculated by using all five b-values and large b-values, respectively. ADC and D(true) decreased with Δ for both normal and fibrotic liver at each time point. ADC and D(true) also generally decreased with the time after CCl(4) insult. The reductions in D(true) between 2-week and 4-week CCl(4) insult were larger than the ADC reductions at all Δs. At each time point, D(true) measured with long Δ (200 ms) detected the largest changes among the 3 Δs examined. Histology revealed gradual collagen deposition and presence of intracellular fat vacuoles after CCl(4) insult. Our results demonstrated the Δ dependent diffusion measurements, indicating restricted diffusion in both normal and fibrotic liver. D(true) measured with long Δ acted as a more sensitive index of the pathological alterations in liver microstructure during fibrogenesis. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Shi, X.; Zhang, G.
2013-12-01
Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models is not only due to the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also because the GCS estimation problem has multiple local minima arising from the complex nonlinear multi-phase (gas and aqueous) and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model takes about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate-based global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position arising from the propagation of parametric uncertainty is then quantified in the numerical experiments and compared with the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification for computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.
Yanai, Toshimasa; Matsuo, Akifumi; Maeda, Akira; Nakamoto, Hiroki; Mizutani, Mirai; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2017-08-01
We developed a force measurement system in a soil-filled mound for measuring ground reaction forces (GRFs) acting on baseball pitchers and examined the reliability and validity of kinetic and kinematic parameters determined from the GRFs. Three soil-filled trays of dimensions that satisfied the official baseball rules were fixed onto 3 force platforms. Eight collegiate pitchers wearing baseball shoes with metal cleats were asked to throw 5 fastballs with maximum effort from the mound toward a catcher. The reliability of each parameter was determined for each subject as the coefficient of variation across the 5 pitches. The validity of the measurements was tested by comparing the outcomes either with the true values or the corresponding values computed from a motion capture system. The coefficients of variation in the repeated measurements of the peak forces ranged from 0.00 to 0.17, and were smaller for the pivot foot than the stride foot. The mean absolute errors in the impulses determined over the entire duration of pitching motion were 5.3 N˙s, 1.9 N˙s, and 8.2 N˙s for the X-, Y-, and Z-directions, respectively. These results suggest that the present method is reliable and valid for determining selected kinetic and kinematic parameters for analyzing pitching performance.
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and the drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
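For orientation, the building block behind these quantities, AUC(0-∞) from concentration-time data, is conventionally computed by the trapezoidal rule plus a terminal-phase extrapolation. The sketch below shows that classical (non-Bayesian) calculation with made-up data; the choice of terminal points for the log-linear fit is an assumption.

```python
# Sketch: classical NCA estimate of AUC(0-inf): linear trapezoidal AUC to the
# last sample plus C_last / lambda_z extrapolation. Data are made up.
import numpy as np

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24.0])               # h
c = np.array([12.0, 18.0, 21.0, 16.0, 9.5, 4.1, 1.9, 0.4])    # mg/L

auc_last = np.trapz(c, t)                        # linear trapezoidal rule
# Terminal slope from the last few log-linear points (choice of points assumed)
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
lambda_z = -slope
auc_inf = auc_last + c[-1] / lambda_z
print(f"AUC(0-last) = {auc_last:.1f}, lambda_z = {lambda_z:.3f} 1/h, "
      f"AUC(0-inf) = {auc_inf:.1f} mg*h/L")
```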
Tensile Flow Behavior of Tungsten Heavy Alloys Produced by CIPing and Gelcasting Routes
NASA Astrophysics Data System (ADS)
Panchal, Ashutosh; Ravi Kiran, U.; Nandy, T. K.; Singh, A. K.
2018-04-01
Present work describes the flow behavior of tungsten heavy alloys with nominal compositions 90W-7Ni-3Fe, 93W-4.9Ni-2.1Fe, and 95W-3.5Ni-1.5Fe (wt pct) produced by CIPing and gelcasting routes. The overall microstructural features of gelcasting are finer than those of CIPing alloys. Both the grain size of W and corresponding contiguity values increase with increase in W content in the present alloys. The volume fraction of matrix phase decreases with increase in W content in both the alloys. The lattice parameter values of the matrix phase also increase with increase in W content. The yield strength (σ YS) continuously increases with increase in W content in both the alloys. The σ YS values of CIPing alloys are marginally higher than those of gelcasting at constant W. The ultimate tensile strength (σ UTS) and elongation values are maximum at intermediate W content. Present alloys exhibit two slopes in true stress-true plastic strain curves in low and high strain regimes and follow a characteristic Ludwigson relation. The two slopes are associated with two deformation mechanisms that are occurring during tensile deformation. The overall nature of differential curves of all the alloys is different and these curves contain three distinctive stages of work hardening (I, II, and III). This suggests varying deformation mechanisms during tensile testing due to different volume fractions of constituent phases. The slip is the predominant deformation mechanism of the present alloys during tensile testing.
Chipiga, L; Sydoff, M; Zvonova, I; Bernhardsson, C
2016-06-01
Positron emission tomography combined with computed tomography (PET/CT) is a quantitative technique used for diagnosing various diseases and for monitoring treatment response for different types of tumours. However, the accuracy of the data is limited by the spatial resolution of the system. In addition, the so-called partial volume effect (PVE) causes a blurring of image structures, which in turn may cause an underestimation of activity of a structure with high-activity content. In this study, a new phantom, MADEIRA (Minimising Activity and Dose with Enhanced Image quality by Radiopharmaceutical Administrations) for activity quantification in PET and single photon emission computed tomography (SPECT) was used to investigate the influence on the PVE by lesion size and tumour-to-background activity concentration ratio (TBR) in four different PET/CT systems. These measurements were compared with data from measurements with the NEMA NU-2 2001 phantom. The results with the MADEIRA phantom showed that the activity concentration (AC) values were closest to the true values at low ratios of TBR (<10) and reduced to 50 % of the actual AC values at high TBR (30-35). For all scanners, recovery of true values became closer to 1 with an increasing diameter of the lesion. The MADEIRA phantom showed good agreement with the results obtained from measurements with the NEMA NU-2 2001 phantom but allows for a wider range of possibilities in measuring image quality parameters. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning-based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total, we obtained 1112 unique treatment plans with five plan parameters and disease information from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and techniques. The disease information includes disease site, and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and data with artificially induced anomalies. In the study, we randomly sampled data with anomalies in a specified anomaly space. We tested the approach with three groups of plan anomalies – improper concurrence of values of all five plan parameters and values of any two out of five parameters, and all single plan parameter value anomalies. In total, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) to detect concurrence anomalies of five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39%, respectively. For the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54%, respectively. The computation time to detect the plan anomaly of each type in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests. The results suggest that this type of model could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of individual layers of a multilayer composite material. Using the MatLab Optimization Tools, simple MatLab scripts are written to search for the electrical properties of individual layers so as to match the measured and calculated S-parameters. Single-layer composite materials of different thicknesses, formed from materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, and Garlock, are tested using the present approach. Assuming the sample thicknesses to be unknown, the present approach is shown to work well in estimating the dielectric constants and the thicknesses. A number of two-layer composite materials formed by various combinations of the above individual materials are tested using the present approach. However, the present approach could not provide estimates close to the true values when the thicknesses of the individual layers were assumed to be unknown. This is attributed to the difficulty of modelling the air gaps present between the layers during the S-parameter measurements. A few examples of three-layer composites are also presented.
How well does CO emission measure the H2 mass of MCs?
NASA Astrophysics Data System (ADS)
Szűcs, László; Glover, Simon C. O.; Klessen, Ralf S.
2016-07-01
We present numerical simulations of molecular clouds (MCs) with self-consistent CO gas-phase and isotope chemistry in various environments. The simulations are post-processed with a line radiative transfer code to obtain 12CO and 13CO emission maps for the J = 1 → 0 rotational transition. The emission maps are analysed with commonly used observational methods, i.e. the 13CO column density measurement, the virial mass estimate and the so-called XCO (also CO-to-H2) conversion factor, and then the inferred quantities (i.e. mass and column density) are compared to the physical values. We generally find that most methods examined here recover the CO-emitting H2 gas mass of MCs within a factor of 2 uncertainty if the metallicity is not too low. The exception is the 13CO column density method. It is affected by chemical and optical depth issues, and it measures both the true H2 column density distribution and the molecular mass poorly. The virial mass estimate seems to work the best in the considered metallicity and radiation field strength range, even when the overall virial parameter of the cloud is above the equilibrium value. This is explained by a systematically lower virial parameter (i.e. closer to equilibrium) in the CO-emitting regions; in CO emission, clouds might seem (sub-)virial, even when, in fact, they are expanding or being dispersed. A single CO-to-H2 conversion factor appears to be a robust choice over relatively wide ranges of cloud conditions, unless the metallicity is low. The methods which try to take the metallicity dependence of the conversion factor into account tend to systematically overestimate the true cloud masses.
Jacobson, Linda S; McIntyre, Lauren; Mykusz, Jenny
2018-02-01
Objectives: Real-time PCR provides quantitative information, recorded as the cycle threshold (Ct) value, about the number of organisms detected in a diagnostic sample. The Ct value correlates with the number of copies of the target organism in an inversely proportional and exponential relationship. The aim of the study was to determine whether Ct values could be used to distinguish between culture-positive and culture-negative samples. Methods: This was a retrospective analysis of Ct values from dermatophyte PCR results in cats with suspicious skin lesions or suspected exposure to dermatophytosis. Results: One hundred and thirty-two samples were included. Using culture as the gold standard, 28 were true positives, 12 were false positives and 92 were true negatives. The area under the curve for the pretreatment time point was 96.8% (95% confidence interval [CI] 94.2-99.5) compared with 74.3% (95% CI 52.6-96.0) for pooled data during treatment. Before treatment, a Ct cut-off of <35.7 (approximate DNA count 300) provided a sensitivity of 92.3% and specificity of 95.2%. There was no reliable cut-off Ct value between culture-positive and culture-negative samples during treatment. Ct values prior to treatment differed significantly between the true-positive and false-positive groups (P = 0.0056). There was a significant difference between the pretreatment and first and second negative culture time points (P = 0.0002 and P < 0.0001, respectively). However, there was substantial overlap between Ct values for true positives and true negatives, and for pre- and intra-treatment time points. Conclusions and relevance: Ct values had limited usefulness for distinguishing between culture-positive and culture-negative cases when field study samples were analyzed. In addition, Ct values were less reliable than fungal culture for determining mycological cure.
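To illustrate the inverse exponential Ct-to-copy-number relationship mentioned above, here is a minimal sketch assuming ideal PCR efficiency (one doubling per cycle) and anchored to the approximate calibration point quoted in the abstract (Ct ≈ 35.7 corresponding to roughly 300 DNA copies); the real assay calibration curve may differ.

```python
# Sketch of the Ct <-> copy-number relationship assuming 100% PCR efficiency
# and the approximate anchor point quoted in the abstract (assumptions, not assay data).
CT_REF, COPIES_REF = 35.7, 300.0

def copies_from_ct(ct, efficiency=1.0):
    """Estimated target copies; efficiency=1.0 means exact doubling each cycle."""
    return COPIES_REF * (1.0 + efficiency) ** (CT_REF - ct)

for ct in (30.0, 35.7, 40.0):
    print(ct, round(copies_from_ct(ct)))
```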
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
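The abstract describes three stages (scene estimation, line-fitting update of the gain and bias, and correction); the sketch below covers the last two stages only, using a batch least-squares line fit per detector against an externally supplied scene estimate. The interframe scene predictor itself is not reproduced, and the synthetic data are placeholders.

```python
# Sketch: per-pixel gain/bias estimation by line fitting observed values against
# an estimated true scene, followed by the simple correction formula.
import numpy as np

def fit_gain_bias(observed_stack, scene_stack):
    """observed_stack, scene_stack: arrays of shape (n_frames, H, W).
    Least-squares line fit per pixel: observed ~ gain * scene + bias."""
    x, y = scene_stack, observed_stack
    x_mean, y_mean = x.mean(axis=0), y.mean(axis=0)
    cov = ((x - x_mean) * (y - y_mean)).mean(axis=0)
    var = ((x - x_mean) ** 2).mean(axis=0)
    gain = cov / np.where(var > 0, var, 1.0)
    bias = y_mean - gain * x_mean
    return gain, bias

def correct(frame, gain, bias):
    """Compensated output of each detector."""
    return (frame - bias) / gain

# Illustrative synthetic example: true scene plus fixed-pattern gain/bias nonuniformity.
rng = np.random.default_rng(0)
scene = rng.uniform(100, 200, size=(50, 32, 32))       # stands in for estimated scenes
gain_true = 1 + 0.1 * rng.standard_normal((32, 32))
bias_true = 5 * rng.standard_normal((32, 32))
observed = gain_true * scene + bias_true

g, b = fit_gain_bias(observed, scene)
print(np.abs(correct(observed[0], g, b) - scene[0]).max())   # ~0 for this noiseless case
```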
Urdapilleta, E; Bellotti, M; Bonetto, F J
2006-10-01
In this paper we present a model to describe the electrical properties of a confluent cell monolayer cultured on gold microelectrodes to be used with electric cell-substrate impedance sensing technique. This model was developed from microscopic considerations (distributed effects), and by assuming that the monolayer is an element with mean electrical characteristics (specific lumped parameters). No assumptions were made about cell morphology. The model has only three adjustable parameters. This model and other models currently used for data analysis are compared with data we obtained from electrical measurements of confluent monolayers of Madin-Darby Canine Kidney cells. One important parameter is the cell-substrate height and we found that estimates of this magnitude strongly differ depending on the model used for the analysis. We analyze the origin of the discrepancies, concluding that the estimates from the different models can be considered as limits for the true value of the cell-substrate height.
Limits of detection and decision. Part 3
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute errors in the rates of false positives and false negatives were only 0.3%.
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and on tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using a stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng·ml⁻¹, but the true value was 240 ng·ml⁻¹.
NASA Astrophysics Data System (ADS)
Magga, Zoi; Tzovolou, Dimitra N.; Theodoropoulou, Maria A.; Tsakiroglou, Christos D.
2012-03-01
The risk assessment of groundwater pollution by pesticides may be based on pesticide sorption and biodegradation kinetic parameters estimated by inverse modeling of datasets from either batch or continuous-flow soil column experiments. In the present work, a chemical non-equilibrium and non-linear 2-site sorption model is incorporated into solute transport models to invert the datasets of batch and soil column experiments and estimate the kinetic sorption parameters for two pesticides: N-phosphonomethyl glycine (glyphosate) and 2,4-dichlorophenoxy-acetic acid (2,4-D). When the 2-site sorption model is coupled with the 2-region transport model, the soil column datasets enable us to estimate, in addition to the kinetic sorption parameters, the mass-transfer coefficients associated with solute diffusion between mobile and immobile regions. In order to improve the reliability of the models and of the kinetic parameter values, a stepwise strategy is required that combines batch and continuous-flow tests with adequate true-to-the-mechanism analytical or numerical models, and that decouples the kinetics of purely reactive sorption steps from physical mass-transfer processes.
Ebadi, M R; Sedghi, M; Golian, A; Ahmadi, H
2011-10-01
Accurate knowledge of the true digestible amino acid (TDAA) contents of feedstuffs is necessary to accurately formulate poultry diets for profitable production. Several experimental approaches that are highly expensive and time consuming have been used to determine available amino acids. Prediction of the nutritive value of a feed ingredient from its chemical composition via regression methodology has been attempted for many years. The artificial neural network (ANN) model is a powerful method that may describe the relationship between digestible amino acid contents and chemical composition. Therefore, multiple linear regression (MLR) and ANN models were developed for predicting the TDAA contents of sorghum grain based on chemical composition. A precision-fed assay trial using cecectomized roosters was performed to determine the TDAA contents in 48 sorghum samples from 12 sorghum varieties differing in chemical composition. The input variables for both the MLR and ANN models were CP, ash, crude fiber, ether extract, and total phenols, whereas the output variable was each individual TDAA for every sample. The results of this study revealed that it is possible to satisfactorily estimate the TDAA of sorghum grain through its chemical composition. The chemical composition of sorghum grain seems to highly influence the TDAA contents when considering components such as CP, crude fiber, ether extract, ash and total phenols. It is also possible to estimate the TDAA contents through multiple regression equations with reasonable accuracy depending on composition. However, a more satisfactory prediction may be achieved via ANN for all amino acids. The R² values for the ANN model corresponding to testing and training parameters showed a higher accuracy of prediction than equations established by the MLR method. In addition, the current data confirmed that chemical composition, often considered in total amino acid prediction, could also be a useful predictor of the true digestible values of selected amino acids for poultry.
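A minimal sketch, on synthetic data, of the two modelling approaches compared in the abstract (multiple linear regression versus a small feed-forward neural network) with the stated input variables (CP, ash, crude fiber, ether extract, total phenols). The data-generating function below is an arbitrary stand-in, not the sorghum measurements, and the network architecture is an assumption.

```python
# Sketch: MLR vs. a small neural network for predicting a digestible amino acid
# value from chemical composition. Synthetic data; not the sorghum dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 200
# Columns: CP, ash, crude fiber, ether extract, total phenols (arbitrary ranges).
X = rng.uniform([8, 1, 1, 2, 0.1], [14, 3, 4, 5, 1.5], size=(n, 5))
y = (0.7 * X[:, 0] - 0.5 * X[:, 4] - 0.1 * X[:, 2]
     + 0.05 * X[:, 0] * X[:, 3] + rng.normal(0, 0.2, n))   # arbitrary stand-in response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
print("ANN R2:", r2_score(y_te, ann.predict(X_te)))
```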
Editor’s message: Groundwater modeling fantasies - Part 1, adrift in the details
Voss, Clifford I.
2011-01-01
Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it. …Simplicity does not precede complexity, but follows it. (Epigrams in Programming by Alan Perlis, a computer scientist; Perlis 1982).A doctoral student creating a groundwater model of a regional aquifer put individual circular regions around data points where he had hydraulic head measurements, so that each region’s parameter values could be adjusted to get perfect fit with the measurement at that point. Nearly every measurement point had its own parameter-value region. After calibration, the student was satisfied because his model correctly reproduced all of his data. Did he really get the true field values of parameters in this manner? Did this approach result in a realistic, meaningful and useful groundwater model?—truly doubtful. Is this story a sign of a common style of educating hydrogeology students these days? Where this is the case, major changes are needed to add back ‘common-sense hydrogeology’ to the curriculum. Worse, this type of modeling approach has become an industry trend in application of groundwater models to real systems, encouraged by the advent of automatic model calibration software that has no problem providing numbers for as many parameter value estimates as desired. Just because a computer program can easily create such values does not mean that they are in any sense useful—but unquestioning practitioners are happy to follow such software developments, perhaps because of an implied promise that highly parameterized models, here referred to as ‘complex’, are somehow superior. This and other fallacies are implicit in groundwater modeling studies, most usually not acknowledged when presenting results. This two-part Editor’s Message deals with the state of groundwater modeling: part 1 (here) focuses on problems and part 2 (Voss 2011) on prospects.
NASA Astrophysics Data System (ADS)
Melnikova, I.; Mukai, S.; Vasilyev, A.
Remote measurements of reflected radiance from the POLDER instrument on board the ADEOS satellite are used to retrieve the optical thickness, single scattering albedo and phase function parameter of cloudy and clear atmospheres. For clear-sky pixels, a perceptron neural network takes the multiangle radiances and the solar incidence angle as inputs and returns the surface albedo, optical thickness, single scattering albedo and phase function parameter; the last two parameters are obtained as optical averages over the atmospheric column. The training set for the neural network is computed with the MODTRAN-3 code, taking multiple scattering into account, with all of the parameters varied randomly according to statistical models of their possible variation. Results of processing one frame of remote observations consisting of 150,000 pixels are presented. The methodology allows operational determination of the optical characteristics of both cloudy and clear atmospheres; further interpretation of the results makes it possible to extract information on the total content of atmospheric aerosols and absorbing gases and to construct models of the real cloudiness. For cloudy pixels, an analytical interpretation method based on asymptotic formulas of multiple scattering theory is applied to the reflected radiance. Details of the methodology and an error analysis were published and discussed earlier; here we present processing results for a pixel size of 6x6 km. In many earlier studies the optical thickness was evaluated under the assumption of conservative scattering, but when true absorption is present in clouds, large errors in the retrieved parameter are possible. The simultaneous retrieval of two parameters at every wavelength independently is an advantage over earlier studies. The analytical methodology is based on inverting the asymptotic formulas of transfer theory for optically thick stratus clouds. A horizontally infinite layer is assumed, with slight horizontal heterogeneity taken into account approximately. Formulas containing only the measured two-direction radiances and functions of the solar and viewing angles were derived earlier, with 6 azimuth harmonics of the reflection function taken into account. A simple approximation of the heterogeneity of the cloud top boundary is used: clouds projecting above the cloud top plane increase the diffuse component of the incident flux, which is essential for the calculation of radiative characteristics that depend on illumination conditions. The escape and reflection functions describe this dependence for the reflected radiance, and the local albedo of a semi-infinite medium describes it for the irradiance; thus the functions depending on the solar incidence angle must be replaced by their modified counterparts. First, the optical thickness of every pixel is obtained with a simple formula assuming conservative scattering for all available viewing directions; deviations between the obtained values may be taken as a measure of the cloud top's deviation from a plane, and a special parameter accounting for the shadowing effect is derived. Then the single scattering albedo and optical thickness (allowing for true absorption) are obtained for pairs of viewing directions with equal optical thickness. Finally, the values obtained are averaged and the relative error is evaluated over all viewing directions of every pixel.
The procedure is repeated for all wavelengths and pixels independently.
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by numbers of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
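The following is a minimal numerical sketch of the two-dimensional Monte Carlo idea described above: an outer loop samples parameters whose uncertainty is shared across the cohort (or a subgroup), an inner step samples subject-specific (unshared) parameters, and each outer realization yields one vector of possibly true doses. The dose model and the distributions are placeholders, not an actual dosimetry system.

```python
# Sketch of a two-dimensional Monte Carlo (2DMC) dose simulation:
# shared uncertainties are sampled once per realization (outer loop),
# unshared uncertainties are sampled per subject (inner step).
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_realizations = 500, 1000
intake = rng.lognormal(mean=1.0, sigma=0.3, size=n_subjects)   # subject attributes (placeholder)

dose_vectors = np.empty((n_realizations, n_subjects))
for r in range(n_realizations):
    # Shared uncertainty: one draw per realization, applied to every subject
    # (e.g. uncertainty in an environmental transfer or dose coefficient).
    shared_factor = rng.lognormal(mean=0.0, sigma=0.4)
    # Unshared uncertainty: an independent draw per subject within this realization.
    unshared_factor = rng.lognormal(mean=0.0, sigma=0.2, size=n_subjects)
    dose_vectors[r] = intake * shared_factor * unshared_factor

# Each row is one "possibly true" cohort dose vector; doses within a row are
# correlated through the shared factor, while spread across rows reflects
# lack of knowledge about the shared parameters.
print(dose_vectors.mean(), np.corrcoef(dose_vectors[:, 0], dose_vectors[:, 1])[0, 1])
```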
Cepero-Betancourt, Yamira; Oliva-Moresco, Patricio; Pasten-Contreras, Alexis; Tabilo-Munizaga, Gipsy; Pérez-Won, Mario; Moreno-Osorio, Luis; Lemus-Mondaca, Roberto
2017-10-01
Abalone (Haliotis spp.) is an exotic seafood product recognized as a protein source of high biological value. Traditional methods used to preserve foods such as drying technology can affect their nutritional quality (protein quality and digestibility). A 28-day rat feeding study was conducted to evaluate the effects of the drying process assisted by high-pressure impregnation (HPI) (350, 450, and 500 MPa × 5 min) on chemical proximate and amino acid compositions and nutritional parameters, such as protein efficiency ratio (PER), true digestibility (TD), net protein ratio, and protein digestibility corrected amino acid score (PDCAAS) of dried abalone. The HPI-assisted drying process ensured excellent protein quality based on PER values, regardless of the pressure level. At 350 and 500 MPa, the HPI-assisted drying process had no negative effect on TD and PDCAAS then, based on nutritional parameters analysed, we recommend HPI-assisted drying process at 350 MPa × 5 min as the best process condition to dry abalone. Variations in nutritional parameters compared to casein protein were observed; nevertheless, the high protein quality and digestibility of HPI-assisted dried abalones were maintained to satisfy the metabolic demands of human beings.
The Full and True Value of Campus Heritage
ERIC Educational Resources Information Center
Elefante, Carl
2011-01-01
To gain a full and true understanding of the value of campus heritage requires shifting perspective. On many campuses, heritage resources are perceived to have no relevance whatsoever to the challenges of sustainability. This results largely from a profound misconception about what may constitute a sustainable future and what steps may be needed…
Software for determining the true displacement of faults
NASA Astrophysics Data System (ADS)
Nieto-Fuentes, R.; Nieto-Samaniego, Á. F.; Xu, S.-S.; Alaniz-Álvarez, S. A.
2014-03-01
One of the most important parameters of faults is the true (or net) displacement, which is measured by restoring two originally adjacent points, called “piercing points”, to their original positions. This measurement is not typically applicable because it is rare to observe piercing points in natural outcrops. Much more common is the measurement of the apparent displacement of a marker. Methods to calculate the true displacement of faults using descriptive geometry, trigonometry or vector algebra are common in the literature, and most of them solve a specific situation from a large amount of possible combinations of the fault parameters. True displacements are not routinely calculated because it is a tedious and tiring task, despite their importance and the relatively simple methodology. We believe that the solution is to develop software capable of performing this work. In a previous publication, our research group proposed a method to calculate the true displacement of faults by solving most combinations of fault parameters using simple trigonometric equations. The purpose of this contribution is to present a computer program for calculating the true displacement of faults. The input data are the dip of the fault; the pitch angles of the markers, slickenlines and observation lines; and the marker separation. To prevent the common difficulties involved in switching between operative systems, the software is developed using the Java programing language. The computer program could be used as a tool in education and will also be useful for the calculation of the true fault displacement in geological and engineering works. The application resolves the cases with known direction of net slip, which commonly is assumed parallel to the slickenlines. This assumption is not always valid and must be used with caution, because the slickenlines are formed during a step of the incremental displacement on the fault surface, whereas the net slip is related to the finite slip.
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
Worldwide Survey of Alcohol and Nonmedical Drug Use among Military Personnel: 1982,
1983-01-01
cell . The first number is an estimate of the percentage of the population with the characteristics that define the cell . The second number, in...multiplying 1.96 times the standard error for that cell . (Obviously, for very small or very large estimates, the respective smallest or largest value in...that the cell proportions estimate the true population value more precisely, and larger standard errors indicate that the true population value is
Analyzing time-ordered event data with missed observations.
Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P
2017-09-01
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on the detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance of missing arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated value. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and spurious rate differences between sites, which are introduced by differences in observational conditions. These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
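A minimal sketch of the central idea: if each event is missed independently with probability p, an observed interval is the sum of k consecutive true intervals with probability (1-p)p^(k-1), so the observed-interval density is a geometric mixture of k-fold convolutions. Assuming gamma-distributed true intervals (so the k-fold sum is again gamma), the miss probability and interval parameters can be recovered by maximum likelihood; the paper's actual distributional assumptions may differ from this assumption.

```python
# Sketch: ML estimation of the true inter-event distribution and miss probability
# from observed intervals, assuming gamma-distributed true intervals (a k-fold sum
# of gamma(a, scale) is gamma(k*a, scale)). Not necessarily the paper's exact model.
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize

rng = np.random.default_rng(3)
a_true, scale_true, p_miss_true = 5.0, 1.0, 0.3

# Simulate events, drop each with probability p_miss, keep the observed intervals.
intervals = rng.gamma(a_true, scale_true, size=5000)
times = np.cumsum(intervals)
observed_times = times[rng.random(times.size) > p_miss_true]
obs_intervals = np.diff(observed_times)

def neg_log_lik(theta, x, k_max=10):
    a, scale, p = theta
    dens = np.zeros_like(x)
    for k in range(1, k_max + 1):            # geometric mixture over missed-event counts
        dens += (1 - p) * p ** (k - 1) * gamma.pdf(x, a=k * a, scale=scale)
    return -np.sum(np.log(dens + 1e-300))

fit = minimize(neg_log_lik, x0=[2.0, 2.0, 0.1], args=(obs_intervals,),
               bounds=[(0.1, 50), (0.01, 50), (0.0, 0.95)], method="L-BFGS-B")
print("estimated shape, scale, miss probability:", fit.x)
```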
Schmidt, Benedikt R
2003-08-01
The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
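A tiny numerical illustration of the point made above: with C = Np, raw counts track the true population size only if the detection probability p is constant, so a change in p alone produces an apparent decline. The numbers below are arbitrary.

```python
# Illustration: counts C = N * p confound abundance N with detection probability p.
import numpy as np

rng = np.random.default_rng(7)
N = 200                                    # true population size, held constant
for p in (0.8, 0.5, 0.3):                  # declining detectability only
    counts = rng.binomial(N, p, size=1000)
    print(f"p={p}: mean count={counts.mean():.1f}, corrected N estimate={counts.mean()/p:.1f}")
```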
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
NASA Astrophysics Data System (ADS)
Lokoshchenko, A.; Teraud, W.
2018-04-01
The work describes an experimental study of the creep of cylindrical tensile test specimens made of aluminum alloy D16T at a constant temperature of 400°C. The issue examined was necking at different values of the initial tensile stress. The use of a specially developed noncontact measuring system allowed us to observe variations in the specimen shape and to estimate the true stress at various times. Based on the experimental data obtained, several criteria were proposed for describing the point in time at which necking occurs (the necking point). Calculations were carried out for various values of the parameters in these criteria. The relative interval of deformation time in which the test specimen is uniformly stretched was also determined.
[The diagnostic value of cine-MR imaging in diseases of great vessels].
Sasaki, S; Yoshida, H; Matsui, Y; Sakuma, M; Yasuda, K; Tanabe, T; Chouji, H
1990-02-01
The diagnostic value of cine magnetic resonance imaging (cine-MRI) was evaluated in 10 patients with diseases of the great vessels. The parameters necessary to decide the appropriate treatment, such as the presence and extension of an intimal flap, DeBakey type classification, identification of the entry, differentiation between true and false lumen, and between thrombosis and slow flow, were demonstrated in all patients with dissecting aortic aneurysm. However, the abdominal aortic branches could not be adequately demonstrated by cine-MRI; therefore, conventional AOG was necessary to choose the operative procedure in these cases. In patients with thoracic aortic aneurysm (TAA), cine-MRI was valuable in demonstrating both blood flow and thrombus in the lumen of the aneurysm, and AOG was thought to be unnecessary in most cases. Cine-MRI is a promising new technique for the evaluation of diseases of the great vessels.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-02
... of the true value of the audit sample to the compliance authority (state, local or EPA Regional... AASP. The AASP would report the true value of the audit sample to the compliance authority (state..., validating, disturbing and reporting the audit results. Changes in the Estimates: There are no changes in the...
Solar Maps Development: How the Maps Were Made
...approximately 10% of a true measured value within the grid cell. Due to terrain effects and other microclimate influences, the local cloud cover can vary significantly even within a...
Joint inversion of regional and teleseismic earthquake waveforms
NASA Astrophysics Data System (ADS)
Baker, Mark R.; Doser, Diane I.
1988-03-01
A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.
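A minimal linear-algebra sketch of the kind of weighted, a-priori-stabilized least-squares step described above: data from two sets (standing in for regional and teleseismic waveforms) are weighted by their noise estimates, a priori parameter values stabilize poorly resolved parameters, and the posterior covariance gives linearized uncertainties. The sensitivity matrices and noise levels below are toy placeholders, not actual waveform derivatives.

```python
# Sketch: one damped / Gauss-Newton least-squares step with data weighting and
# a priori parameter constraints, plus linearized posterior covariance.
import numpy as np

rng = np.random.default_rng(11)
m_true = np.array([10.0, 2.0, 0.5])             # toy "source parameters"

G1 = rng.normal(size=(40, 3))                   # linearized regional sensitivities (toy)
G2 = rng.normal(size=(25, 3))                   # linearized teleseismic sensitivities (toy)
d1 = G1 @ m_true + rng.normal(0, 0.5, 40)       # true-amplitude data with noise
d2 = G2 @ m_true + rng.normal(0, 1.0, 25)

G = np.vstack([G1, G2])
d = np.concatenate([d1, d2])
Wd = np.diag(np.concatenate([np.full(40, 1 / 0.5**2), np.full(25, 1 / 1.0**2)]))

m_prior = np.array([8.0, 1.0, 0.0])             # a priori parameter estimates
Wm = np.diag([1e-2, 1e-2, 1e-2])                # weak prior weights (stabilization)

A = G.T @ Wd @ G + Wm
m_hat = np.linalg.solve(A, G.T @ Wd @ d + Wm @ m_prior)
cov_post = np.linalg.inv(A)                     # linearized a posteriori covariance

print("estimate:", m_hat, "1-sigma:", np.sqrt(np.diag(cov_post)))
```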
Wendl, Brigitte; Kamenica, A; Droschl, H; Jakse, N; Weiland, F; Wendl, T; Wendl, M
2017-03-01
Despite recommendations for early treatment of hereditary Angle Class III syndrome, late pubertal growth may cause a relapse requiring surgical intervention. This study was performed to identify predictors of successful Class III treatment. Thirty-eight Class III patients treated with a chincup were retrospectively analyzed. Data were collected from the data archive, cephalograms, and casts, including pretreatment (T0) and posttreatment (T1) data, as well as long-term follow-up data collected approximately 25 years after treatment (T2). Each patient was assigned to a success or a failure group. Data were analyzed based on time (T0, T1, T2), deviations from normal (Class I), and prognathism types (true mandibular prognathism, maxillary retrognathism, combined pro- and retrognathism). Compared to Class I normal values, the data obtained in both groups yielded 11 significant parameters. The success group showed values closer to normal at all times (T0, T1, T2), and vertical parameters decreased from T0 to T2. The failure group showed higher values for vertical and horizontal mandibular growth, as well as dentally more protrusion of the lower anterior teeth and a more negative overjet at all times. In addition, the total gonial and upper gonial angles were higher at T0 and T1. A prognostic score, yet to be evaluated in clinical practice, was developed from the results. The failure group showed greater amounts of horizontal development during the years between T1 and T2. Treatment of true mandibular prognathism achieved better outcomes in female patients. Cases of maxillary retrognathism were treated very successfully without gender difference. Failure was clearly more prevalent, again without gender difference, among the patients with combined mandibular prognathism and maxillary retrognathism. Crossbite situations were observed in 44% of cases at T0. Even though this finding had been resolved by T1, it relapsed in 16% of the cases by T2. The failure rate increased in cases of combined mandibular prognathism and maxillary retrognathism. Precisely in these combined Class III situations, it should be useful to apply the diagnostic and prognostic parameters identified in the present study and to provide the patients with specific information about the increased risk of failure.
Image Restoration for Fluorescence Planar Imaging with Diffusion Model
Gong, Yuzhu; Li, Yang
2017-01-01
Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes because of photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme of this method is conceived based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of the FPI image caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843
Dynamical systems theory for nonlinear evolution equations.
Choudhuri, Amitava; Talukdar, B; Das, Umapada
2010-09-01
We observe that the fully nonlinear evolution equations of Rosenau and Hymann, often abbreviated as K(n,m) equations, can be reduced to Hamiltonian form only on a zero-energy hypersurface belonging to some potential function associated with the equations. We treat the resulting Hamiltonian equations by the dynamical systems theory and present a phase-space analysis of their stable points. The results of our study demonstrate that the equations can, in general, support both compacton and soliton solutions. For the K(2,2) and K(3,3) cases one type of solutions can be obtained from the other by continuously varying a parameter of the equations. This is not true for the K(3,2) equation for which the parameter can take only negative values. The K(2,3) equation does not have any stable point and, in the language of mechanics, represents a particle moving with constant acceleration.
NASA Astrophysics Data System (ADS)
Han, Jianguang; Wang, Yun; Yu, Changqing; Chen, Peng
2017-02-01
An approach for extracting angle-domain common-image gathers (ADCIGs) from anisotropic Gaussian beam prestack depth migration (GB-PSDM) is presented in this paper. The propagation angle is calculated in the process of migration using the real-value traveltime information of Gaussian beam. Based on the above, we further investigate the effects of anisotropy on GB-PSDM, where the corresponding ADCIGs are extracted to assess the quality of migration images. The test results of the VTI syncline model and the TTI thrust sheet model show that anisotropic parameters ɛ, δ, and tilt angle 𝜃, have a great influence on the accuracy of the migrated image in anisotropic media, and ignoring any one of them will cause obvious imaging errors. The anisotropic GB-PSDM with the true anisotropic parameters can obtain more accurate seismic images of subsurface structures in anisotropic media.
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underling dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Dependence and independence of survival parameters on linear energy transfer in cells and tissues
Ando, Koichi; Goodhead, Dudley T.
2016-01-01
Carbon-ion radiotherapy has been used to treat more than 9000 cancer patients in the world since 1994. Spreading of the Bragg peak is necessary for carbon-ion radiotherapy, and is designed based on the linear-quadratic model that is commonly used for photon therapy. Our recent analysis using in vitro cell kill and in vivo mouse tissue reactions indicates that radiation quality affects mainly the alpha terms, but much less the beta terms, which raises the question of whether this is true in other biological systems. Survival parameters alpha and beta for 45 in vitro mammalian cell lines were obtained by colony formation after irradiation with carbon ions, fast neutrons and X-rays. Relationships between survival parameters and linear energy transfer (LET) below 100 keV/μm were obtained for 4 mammalian cell lines. Mouse skin reaction and tumor growth delay were measured after fractionated irradiation. The Fe-plot provided survival parameters for the tissue reactions. A clear separation between X-rays and high-LET radiation was observed for alpha values, but not for beta values. Alpha values increased with increasing LET in all cells and tissues studied, while beta did not show a systematic change. We have found a puzzle or contradiction in common interpretations of the linear-quadratic model that causes us to question whether the model is appropriate for interpreting the biological effectiveness of high-LET radiation up to 500 keV/μm, probably because of inconsistency in the concept of damage interaction. A repair saturation model proposed here was good enough to fit cell-kill efficiency for radiation over a wide range of LET. A model incorporating damage complexity and repair saturation would be suitable for heavy-ion radiotherapy. PMID:27380803
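For reference, the linear-quadratic survival model discussed above is S(D) = exp(-αD - βD²); a minimal sketch of extracting α and β from clonogenic survival data via the linearized form -ln(S)/D = α + βD (the basis of the Fe-plot) follows. The survival values are made up for illustration, not taken from the study.

```python
# Sketch: fit linear-quadratic parameters alpha, beta from clonogenic survival data
# using the linearization -ln(S)/D = alpha + beta*D. Survival values are made up.
import numpy as np

dose = np.array([1.0, 2.0, 4.0, 6.0, 8.0])           # Gy
surv = np.array([0.80, 0.55, 0.20, 0.055, 0.012])    # illustrative surviving fractions

y = -np.log(surv) / dose                              # = alpha + beta * dose
beta, alpha = np.polyfit(dose, y, 1)                  # slope, intercept
print(f"alpha = {alpha:.3f} Gy^-1, beta = {beta:.4f} Gy^-2, alpha/beta = {alpha/beta:.1f} Gy")
```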
Singh, Ashish Kumar; Ganeshkar, Sanjay V.; Mehrotra, Praveen; Bhagchandani, Jitendra
2013-01-01
Background: Commonly used parameters for anteroposterior assessment of the jaw relationship include several analyses such as ANB, NA-Pog, AB-NPog, Wits appraisal, Harvold's unit length difference, and the Beta angle. Although there are several parameters (with different ranges and values) that describe the sagittal relation, the published literature comparing and correlating these measurements is scarce. Therefore, the objective of this study was to correlate these values in subjects of Indian origin. Materials and Methods: The sample consisted of fifty adult individuals (age group 18-26 years) with equal numbers of males and females. The selection criteria included subjects with no previous history of orthodontic and/or orthognathic surgical treatment; orthognathic facial profile; Angle's Class I molar relation; clinical Frankfort Mandibular plane angle (FMA) of 30±5°; and no gross facial asymmetry. The cephalograms were taken in natural head position (NHP). Seven sagittal skeletal parameters were measured on the cephalograms and subjected to statistical evaluation with the Wits reading on the true horizontal as reference. A correlation coefficient analysis was done to assess the significance of association between these variables. Results: The ANB angle showed a statistically significant correlation for the total sample, though the values were insignificant for the individual groups and therefore may not be very accurate. Wits appraisal was seen to have a significant correlation only in the female sample group. Conclusions: If cephalograms cannot be recorded in a NHP, then the best indicator for recording the A-P skeletal dimension would be the angle AB-NPog, followed by Harvold's unit length difference. However, considering biologic variability, more than one reading should necessarily be used to verify the same. PMID:24987638
NASA Astrophysics Data System (ADS)
Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward
2016-12-01
SLM technology allows the production of fully functional objects from metal and ceramic powders, with a true density of more than 99.9%. The quality of items manufactured by the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during it, e.g. the chemical composition and morphology of the powder, the oxygen level in the working chamber, and the heating temperature of the substrate plate. In SLM technology, five parameters are variable, and an optimal set of them allows parts to be produced without defects (pores, cracks) at an acceptable speed. These parameters are: laser power, distance between points, time of exposure, distance between lines and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed; to select the best sets, the search is narrowed to three parameters: laser power, exposure time and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In this experiment, a titanium (grade 2) substrate plate was used, scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height and penetration depth of the laser beam were measured.
Shende, Ravindra; Patel, Ganesh
2017-01-01
The objective of the present study is to determine the optimum value of the DLG and to validate it prior to its incorporation in the TPS for the Varian TrueBeam™ Millennium 120-leaf MLC. Partial transmission through the rounded leaf ends of the multileaf collimator (MLC) causes a mismatch between the edges of the light field and the radiation field. The parameter accounting for this partial transmission is called the dosimetric leaf gap (DLG). Complex high-precision techniques, such as intensity-modulated radiation therapy (IMRT), entail modeling the optimum value of the DLG inside the Eclipse treatment planning system (TPS) for precise dose calculation. Distinct, synchronized, uniformly extended sweeping dynamic MLC leaf-gap fields created with Varian MLC Shaper software were used to determine the DLG. DLG measurements performed with both a 0.13 cc semi-flex ionization chamber and a 2D-Array I-Matrix were used to validate the DLG; similarly, values of the DLG from the TPS were estimated from the predicted dose. Similar mathematical approaches were employed to determine the DLG from the delivered and the TPS-predicted doses. The DLG determined from the delivered dose measured with both the ionization chamber (DLG Ion) and the I-Matrix (DLG I-Matrix) was compared with the DLG estimated from the TPS-predicted dose (DLG TPS). Measurements were carried out for all available 6MV, 10MV, 15MV, 6MVFFF and 10MVFFF beam energies. The maximum and minimum deviations between the measured and TPS-calculated DLG values were found to be 0.2 mm and 0.1 mm, respectively. Both of the measured DLGs (DLG Ion and DLG I-Matrix) were found to be in very good agreement with the DLG estimated from the TPS (DLG TPS). The proposed method proved to be helpful in verifying and validating the DLG value prior to its clinical implementation in the TPS.
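The abstract does not spell out the arithmetic; one recipe commonly described in the literature for sweeping-gap measurements, and a plausible reading of the "similar mathematical approaches" above, is to correct the readings for MLC transmission, fit a straight line of reading versus nominal gap width, and take the DLG as the magnitude of the extrapolated zero-reading gap. The numbers below are fabricated placeholders, not TrueBeam measurements, and the exact correction used by the authors may differ.

```python
# Sketch of a commonly described sweeping-gap analysis for the dosimetric leaf gap:
# transmission-corrected readings are fitted linearly against nominal gap width and
# extrapolated to zero reading; the magnitude of the x-intercept is taken as the DLG.
# All numbers are fabricated placeholders.
import numpy as np

gaps_mm = np.array([2, 4, 6, 10, 14, 16, 20], dtype=float)                 # nominal sweeping gaps
readings = np.array([0.082, 0.120, 0.158, 0.234, 0.310, 0.348, 0.424])     # chamber readings (a.u.)
mlc_transmission = 0.015                                                   # fraction of open-field reading
open_field = 1.0                                                           # open-field reading (a.u.)

corrected = readings - mlc_transmission * open_field    # simple transmission correction
slope, intercept = np.polyfit(gaps_mm, corrected, 1)
dlg_mm = abs(-intercept / slope)                         # |x-intercept| of the fitted line
print(f"estimated DLG ~ {dlg_mm:.2f} mm")
```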
Epidemiologic research using probabilistic outcome definitions.
Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S
2015-01-01
Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, either using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and not equal to 1.0. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. This method has a major advantage over the conventional method in that it provided unbiased estimates of risk ratios and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
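A minimal sketch of the two stages described above: a logistic model estimates each subject's probability of the true outcome from imperfect algorithm indicators (with the exposure included in the imputation model, as in standard imputation practice), binary outcomes are multiply imputed from those probabilities, and log risk ratios are pooled across imputations. The data are synthetic and the design is simplified; a real study would fit the outcome-probability model on a validation subsample rather than on known true outcomes.

```python
# Sketch of a probabilistic outcome definition (POD)-style analysis:
# (1) model P(true outcome) from imperfect algorithm indicators and exposure,
# (2) multiply impute binary outcomes, (3) pool log risk ratios across imputations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
exposed = rng.binomial(1, 0.5, n)
true_outcome = rng.binomial(1, np.where(exposed == 1, 0.10, 0.05))   # simulated true RR = 2
# Two imperfect algorithms with different sensitivities and false-positive rates.
alg1 = np.where(true_outcome == 1, rng.binomial(1, 0.80, n), rng.binomial(1, 0.05, n))
alg2 = np.where(true_outcome == 1, rng.binomial(1, 0.70, n), rng.binomial(1, 0.10, n))

X = np.column_stack([alg1, alg2, exposed])
p_outcome = LogisticRegression().fit(X, true_outcome).predict_proba(X)[:, 1]

log_rrs = []
for _ in range(20):                         # multiple imputation of the outcome
    y_imp = rng.binomial(1, p_outcome)
    risk1 = y_imp[exposed == 1].mean()
    risk0 = y_imp[exposed == 0].mean()
    log_rrs.append(np.log(risk1 / risk0))
print("pooled risk ratio estimate:", np.exp(np.mean(log_rrs)))
```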
NASA Astrophysics Data System (ADS)
Han, Y.; Misra, S.
2018-04-01
Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases in which different relaxation models are coupled into the inversion scheme, and it is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
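A minimal sketch of the kind of bounded nonlinear least-squares inversion described above, using the Cole-Cole relaxation model eps*(w) = eps_inf + d_eps / (1 + (i w tau)^(1-alpha)) as the forward model. scipy's bounded trust-region-reflective solver stands in for the bounded Levenberg scheme (and the jump-out/jump-back-in logic is omitted); the data and parameter values are synthetic assumptions.

```python
# Sketch: bounded least-squares inversion of multi-frequency complex permittivity
# with a Cole-Cole relaxation model. Synthetic data; not the authors' scheme.
import numpy as np
from scipy.optimize import least_squares

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    """Cole-Cole model: eps*(w) = eps_inf + d_eps / (1 + (i w tau)^(1-alpha))."""
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

def unpack(x):
    # eps_inf, d_eps, tau are fitted in log-space so the search can start orders
    # of magnitude away from the true values; alpha is fitted directly.
    return np.exp(x[0]), np.exp(x[1]), np.exp(x[2]), x[3]

def residuals(x, omega, data):
    r = cole_cole(omega, *unpack(x)) - data
    return np.concatenate([r.real, r.imag])

freq = np.logspace(2, 8, 60)                      # 100 Hz to 100 MHz
omega = 2.0 * np.pi * freq
rng = np.random.default_rng(2)
data = cole_cole(omega, 5.0, 70.0, 1e-6, 0.15)    # assumed "true" parameters
data = data + 0.5 * (rng.standard_normal(60) + 1j * rng.standard_normal(60))

x0 = [np.log(1.0), np.log(1.0), np.log(1e-3), 0.5]            # far-off initial guess
lb = [np.log(1e-2), np.log(1e-2), np.log(1e-12), 0.0]
ub = [np.log(1e3), np.log(1e4), np.log(1e0), 0.9]
fit = least_squares(residuals, x0, args=(omega, data), method="trf", bounds=(lb, ub))
print("eps_inf, d_eps, tau, alpha:", unpack(fit.x))
```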
Comparison of T2, T1rho, and diffusion metrics in assessment of liver fibrosis in rats.
Zhang, Hui; Yang, Qihua; Yu, Taihui; Chen, Xiaodong; Huang, Jingwen; Tan, Cui; Liang, Biling; Guo, Hua
2017-03-01
To evaluate the value of T2, T1rho, and diffusion metrics in the assessment of liver fibrosis in rats. Liver fibrosis was induced in a rat model (n = 72) by injection of carbon tetrachloride (CCl4); imaging was performed at 3T. T2, T1rho, and diffusion parameters (apparent diffusion coefficient (ADC), Dtrue) via spin echo (SE) diffusion-weighted imaging (DWI) and stimulated echo acquisition mode (STEAM) DWI with three diffusion times (DT: 80, 106, 186 msec) were obtained in surviving rats with hepatic fibrosis (n = 52) and controls (n = 8). Liver fibrosis stage (F0-F6) was identified based on pathological results using the traditional liver fibrosis staging method for rodents. Nonparametric statistical methods and receiver operating characteristic (ROC) curve analysis were employed to determine the diagnostic accuracy. Mean T2, T1rho, ADC, and Dtrue with DT = 186 msec correlated with the severity of fibrosis with r = 0.73, 0.83, -0.83, and -0.85 (all P < 0.001), respectively. The average areas under the ROC curve at different stages for T1rho and the diffusion parameters (DT = 186 msec) were larger than those for T2 and SE DWI (0.92, 0.92, and 0.92 vs. 0.86, 0.82, and 0.83). The corresponding average sensitivity and specificity for T1rho and the diffusion parameters with a long DT were larger (89.35 and 88.90, 88.36 and 89.97, 90.16 and 87.13) than for T2 and SE DWI (90.28 and 79.93, 85.30 and 77.64, 78.21 and 82.41). The performances of T1rho and Dtrue (DT = 186 msec) were comparable (average AUC: 0.92 and 0.92). Among the evaluated sequences, T1rho and STEAM DWI with a long DT may serve as superior imaging biomarkers for assessing liver fibrosis and monitoring disease severity. J. Magn. Reson. Imaging 2017;45:741-750. © 2016 International Society for Magnetic Resonance in Medicine.
Fisher information theory for parameter estimation in single molecule microscopy: tutorial
Chao, Jerry; Ward, E. Sally; Ober, Raimund J.
2016-01-01
Estimation of a parameter of interest from image data represents a task that is commonly carried out in single molecule microscopy data analysis. The determination of the positional coordinates of a molecule from its image, for example, forms the basis of standard applications such as single molecule tracking and localization-based superresolution image reconstruction. Assuming that the estimator used recovers, on average, the true value of the parameter, its accuracy, or standard deviation, is then at best equal to the square root of the Cramér-Rao lower bound. The Cramér-Rao lower bound can therefore be used as a benchmark in the evaluation of the accuracy of an estimator. Additionally, as its value can be computed and assessed for different experimental settings, it is useful as an experimental design tool. This tutorial demonstrates a mathematical framework that has been specifically developed to calculate the Cramér-Rao lower bound for estimation problems in single molecule microscopy and, more broadly, fluorescence microscopy. The material includes a presentation of the photon detection process that underlies all image data, various image data models that describe images acquired with different detector types, and Fisher information expressions that are necessary for the calculation of the lower bound. Throughout the tutorial, examples involving concrete estimation problems are used to illustrate the effects of various factors on the accuracy of parameter estimation, and more generally, to demonstrate the flexibility of the mathematical framework. PMID:27409706
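A minimal numerical sketch of the kind of calculation the tutorial above describes: for a pixelated detector with ideal Poisson photon statistics, the Fisher information for the position of a molecule with a Gaussian image profile is I(x0) = sum_k (dmu_k/dx0)^2 / mu_k, and the localization accuracy is bounded below by sqrt(1/I). The photon count, pixel size, and PSF width are arbitrary assumptions, and readout noise and background are ignored.

```python
# Sketch: Cramer-Rao lower bound for localizing a molecule along one axis,
# assuming a 1D Gaussian image profile and ideal Poisson pixel counts.
import numpy as np
from math import erf

def pixel_means(x0, n_photons=1000.0, sigma=0.1, pixel=0.05, n_pix=81):
    """Expected photon count per pixel for a Gaussian profile centred at x0 (um)."""
    edges = (np.arange(n_pix + 1) - n_pix / 2) * pixel
    cdf = np.array([0.5 * (1 + erf((e - x0) / (np.sqrt(2) * sigma))) for e in edges])
    return n_photons * np.diff(cdf)

def crlb_x0(x0=0.0, eps=1e-5, **kw):
    mu = pixel_means(x0, **kw)
    dmu = (pixel_means(x0 + eps, **kw) - pixel_means(x0 - eps, **kw)) / (2 * eps)
    fisher = np.sum(dmu**2 / np.maximum(mu, 1e-12))   # Poisson Fisher information
    return 1.0 / np.sqrt(fisher)                       # lower bound on the std. deviation

print(f"CRLB on x0: {crlb_x0() * 1000:.2f} nm")        # roughly sigma/sqrt(N) here
```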
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swain, Adam
2013-07-01
As the areas of application for diverse filter types increase, the mechanics and material sciences associated with the hardware, and its relationship with ever more arduous process environments, become critical to the successful and reliable operation of the filtration equipment. Where the filter is the last safe barrier between the process and the life environment, structural integrity and reliability are paramount in both the validation and the ethical acceptability of the designed equipment. Core collapse is a key factor influencing filter element selection, and is an extremely complex issue with a number of variables and failure mechanisms. It is becoming clear that the theory behind core collapse calculations is not always supported by real tested data. In exploring this issue we have found that the calculation method is not always reflective of the true as-tested collapse value, with the calculated values typically being in excess of, or even an order of magnitude higher than, the tested values. The above claim is supported by a case study performed by the author, which disproves most of what was previously understood to be true. This paper also aims to explore the various failure mechanisms of different configurations of filter core, comparing calculated collapse values against real tested values, with a view to understanding a method of calculating their true collapse value. As the technology is advancing, and filter elements are being used in higher temperature, higher pressure, more radioactive and more chemically aggressive environments, confidence in core collapse values and data is crucial. (authors)
Flight test evaluation of predicted light aircraft drag, performance, and stability
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Fox, S. R.
1979-01-01
A technique was developed which permits simultaneous extraction of complete lift, drag, and thrust power curves from time histories of a single aircraft maneuver such as a pullup (from Vmax to Vstall) and pushover (to Vmax for level flight). The technique is an extension to non-linear equations of motion of the parameter identification methods of Iliff and Taylor and includes provisions for internal data compatibility improvement as well. The technique was shown to be capable of correcting random errors in the most sensitive data channel and yielding highly accurate results. This technique was applied to flight data taken on the ATLIT aircraft. The drag and power values obtained from the initial least squares estimate are about 15% less than the 'true' values. If one takes into account the rather dirty wing and fuselage existing at the time of the tests, however, the predictions are reasonably accurate. The steady state lift measurements agree well with the extracted values only for small values of alpha. The predicted value of the lift at alpha = 0 is about 33% below that found in steady state tests, while the predicted lift slope is 13% below the steady state value.
Nakano, Jinichiro
2013-01-01
The thermodynamic properties of the Fe–Mn–C system were investigated by using an analytical model constructed by a CALPHAD approach. The stacking fault energy (SFE) of the fcc structure with respect to the hcp phase was always constant at T0, independent of the composition and temperature when other related parameters were assumed to be constant. Experimental limits for the thermal hcp formation and the mechanical (deformation-induced) hcp formation were separated by the SFE at T0. The driving force for the fcc to hcp transition, defined as a dimensionless value –dGm/(RT), was determined in the presence of Fe-rich and Mn-rich composition sets in each phase. Carbon tended to partition to the Mn-rich phase rather than to the Fe-rich phase for the compositions studied. The results obtained revealed a thermo-mechanical correlation with empirical yield strength, maximum true stress and maximum true strain. The proportionality between thermodynamics and mechanical properties is discussed. PMID:27877555
Characterizing the True Background Corona with SDO/AIA
NASA Technical Reports Server (NTRS)
Napier, Kate; Winebarger, Amy; Alexander, Caroline
2014-01-01
Characterizing the nature of the solar coronal background would enable scientists to more accurately determine plasma parameters, and may lead to a better understanding of the coronal heating problem. Because scientists study the 3D structure of the Sun in 2D, any line of sight includes both foreground and background material, and thus, the issue of background subtraction arises. By investigating the intensity values in and around an active region, using multiple wavelengths collected from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) over an eight-hour period, this project aims to characterize the background as smooth or structured. Different methods were employed to measure the true coronal background and create minimum intensity images. These were then investigated for the presence of structure. The background images created were found to contain long-lived structures, including coronal loops, that were still present in all of the wavelengths: 193, 171, 131, and 211 Angstroms. The intensity profiles across the active region indicate that the background is much more structured than previously thought.
Nakano, Jinichiro
2013-03-15
Thermodynamic properties of the Fe-Mn-C system were investigated by using an analytical model constructed by a CALPHAD approach. Stacking fault energy (SFE) of the fcc structure with respect to the hcp phase was always constant at T0, independent of composition and temperature when the other related parameters were assumed to be constant. Experimental limits for the thermal hcp formation and the mechanical (deformation-induced) hcp formation were separated by the SFE at T0. The driving force for the fcc to hcp transition, defined as a dimensionless value –dGm/(RT), was determined in the presence of Fe-rich and Mn-rich composition sets in each phase. Carbon tended to partition to the Mn-rich phase rather than to the Fe-rich phase for the studied compositions. The obtained results revealed a thermo-mechanical correlation with empirical yield strength, maximum true stress and maximum true strain. The proportionality between thermodynamics and mechanical properties is discussed.
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation which lies as a foundation to quantify the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling for the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depend on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique with an expansion to compute the Hessian acting on a model perturbation to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian, and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. Investigation is done on both a homogeneous model and the Gullfaks model where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
NASA Astrophysics Data System (ADS)
Christopher, J.; Choudhary, B. K.; Isaac Samuel, E.; Mathew, M. D.; Jayakumar, T.
2012-01-01
Tensile flow behaviour of P9 steel with different silicon content has been examined in the framework of Hollomon, Ludwik, Swift, Ludwigson and Voce relationships for a wide temperature range (300-873 K) at a strain rate of 1.3 × 10 -3 s -1. Ludwigson equation described true stress ( σ)-true plastic strain ( ɛ) data most accurately in the range 300-723 K. At high temperatures (773-873 K), Ludwigson equation reduces to Hollomon equation. The variations of instantaneous work hardening rate ( θ = dσ/ dɛ) and θσ with stress indicated two-stage work hardening behaviour. True stress-true plastic strain, flow parameters, θ vs. σ and θσ vs. σ with respect to temperature exhibited three distinct temperature regimes and displayed anomalous behaviour due to dynamic strain ageing at intermediate temperatures. Rapid decrease in flow stress and flow parameters, and rapid shift in θ- σ and θσ- σ towards lower stresses with increase in temperature indicated dominance of dynamic recovery at high temperatures.
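For reference, the constitutive relationships named above are commonly written as follows (a sketch in standard notation; symbols such as K, n and σ0 may differ from the paper's conventions), with the work-hardening rate defined as θ = dσ/dε:

```latex
\begin{aligned}
\text{Hollomon:}  &\quad \sigma = K\,\varepsilon^{\,n}\\
\text{Ludwik:}    &\quad \sigma = \sigma_{0} + K\,\varepsilon^{\,n}\\
\text{Swift:}     &\quad \sigma = K\,(\varepsilon_{0} + \varepsilon)^{\,n}\\
\text{Ludwigson:} &\quad \sigma = K_{1}\,\varepsilon^{\,n_{1}} + \exp\!\left(K_{2} + n_{2}\,\varepsilon\right)\\
\text{Voce:}      &\quad \sigma = \sigma_{s} - (\sigma_{s} - \sigma_{i})\,\exp(-n\,\varepsilon)
\end{aligned}
```

In this notation, the Ludwigson form reduces to the Hollomon form when its exponential term becomes negligible, consistent with the high-temperature behaviour reported above.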
Tiedeman, Claire; Hill, Mary C.
2007-01-01
When simulating natural and engineered groundwater flow and transport systems, one objective is to produce a model that accurately represents important aspects of the true system. However, using direct measurements of system characteristics, such as hydraulic conductivity, to construct a model often produces simulated values that poorly match observations of the system state, such as hydraulic heads, flows and concentrations (for example, Barth et al., 2001). This occurs because of inaccuracies in the direct measurements and because the measurements commonly characterize system properties at different scales from that of the model aspect to which they are applied. In these circumstances, the conservation of mass equations represented by flow and transport models can be used to test the applicability of the direct measurements, such as by comparing model simulated values to the system state observations. This comparison leads to calibrating the model, by adjusting the model construction and the system properties as represented by model parameter values, so that the model produces simulated values that reasonably match the observations.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.
Hirsch index and truth survival in clinical research.
Poynard, Thierry; Thabut, Dominique; Munteanu, Mona; Ratziu, Vlad; Benhamou, Yves; Deckmyn, Olivier
2010-08-06
Factors associated with the survival of truth of clinical conclusions in the medical literature are unknown. We hypothesized that publications with a first author having a higher Hirsch index value (h-I), which quantifies and predicts an individual's scientific research output, should have a longer half-life. 474 original articles concerning cirrhosis or hepatitis published from 1945 to 1999 were selected. The survivals of the main conclusions were updated in 2009. The truth survival was assessed by time-dependent methods (Kaplan-Meier method and Cox). A conclusion was considered to be true, obsolete or false when three or more observers out of the six stated it to be so. 284 out of 474 conclusions (60%) were still considered true, 90 (19%) were considered obsolete and 100 (21%) false. The median of the h-I was 24 (range 1-85). Authors with true conclusions had significantly higher h-I (median=28) than those with obsolete (h-I=19; P=0.002) or false conclusions (h-I=19; P=0.01). The factors associated (P<0.0001) with h-I were: scientific life (h-I=33 for >30 years vs. 16 for <30 years), methodological quality score (h-I=36 for high vs. 20 for low scores), and positive predictive value combining power, ratio of true to not-true relationships and bias (h-I=33 for high vs. 20 for low values). In multivariate analysis, the risk ratio of h-I was 1.003 (95%CI, 0.994-1.011), and was not significant (P=0.56). In a subgroup restricted to 111 articles with a negative conclusion, we observed a significant independent prognostic value of h-I (risk ratio=1.033; 95%CI, 1.008-1.059; P=0.009). Using an extrapolation of h-I at the time of article publication there was a significant and independent prognostic value of baseline h-I (risk ratio=0.027; P=0.0001). The present study failed to clearly demonstrate that the h-index of authors was a prognostic factor for truth survival. However, the h-index was associated with true conclusions, methodological quality of trials and positive predictive values.
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. To some extent it is intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to the illumination that appears in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then solve the over-determined problem for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-01-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called ‘rain’. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based ‘experience matrix’ that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event. PMID:27077048
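As an illustration of the kind of metric described above (the authors' exact formula is not reproduced here), the sketch below scores droplet separation as the distance between the mean fluorescence amplitudes of the positive and negative populations relative to their spread; the threshold, amplitudes and population sizes are simulated assumptions.

```python
# Hedged sketch of a droplet-separation score in the spirit of the text:
# distance between positive and negative fluorescence populations, scaled by
# the variation within the populations. Illustrative only.
import numpy as np

def separation_value(fluorescence, threshold):
    """fluorescence: 1D array of per-droplet amplitudes; threshold: cut-off
    separating positive from negative droplets."""
    pos = fluorescence[fluorescence >= threshold]
    neg = fluorescence[fluorescence < threshold]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    return (pos.mean() - neg.mean()) / (pos.std(ddof=1) + neg.std(ddof=1))

# Example with simulated droplet amplitudes (arbitrary units)
rng = np.random.default_rng(0)
amps = np.concatenate([rng.normal(2000, 150, 8000),   # negative droplets
                       rng.normal(9000, 300, 2000)])  # positive droplets
print(round(separation_value(amps, threshold=5000), 1))
```

A well-optimized assay with little 'rain' yields a high score, whereas overlapping populations drive the score toward zero, which is the behaviour such a metric is meant to capture.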
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-03-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called 'rain'. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based 'experience matrix' that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event.
A Dempster-Shafer Method for Multi-Sensor Fusion
2012-03-01
position results for DST Mean Mod (or DST Mean for Case 1) to follow the same general path as the true aircraft. Also, if the points diverged from the true...of the aircraft. Its results were similar to those for DST True and Kalman. Also DST Mean had the same clustering of points as the others specifically...DST Mean values increasingly diverged as time t increased. Then Run 27 was very similar to Case 2 Run 5. Instead of following the true aircraft path
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard among the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously acquired DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
NASA Astrophysics Data System (ADS)
Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun
2014-04-01
We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. The popular inversion method parameterizes the media into a large number of layers of fixed thickness and reconstructs only the conductivities (e.g. Occam's inversion), which does not enable recovery of the sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive the analytic expression of the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion significantly improves the results: the algorithm not only reconstructs the sharp interfaces between layers, but also obtains conductivities close to the true values.
NASA Astrophysics Data System (ADS)
Nawa, Kenji; Nakamura, Kohji; Akiyama, Toru; Ito, Tomonori; Weinert, Michael
Effective on-site Coulomb interactions (Ueff) and electron configurations in the localized d and f orbitals of metal complexes in transition-metal oxides and organometallic molecules play a key role in the first-principles search for the true ground state. However, widely varying values of the Ueff parameter of a material, even in the same ionic state, are often reported. Here, we revisit this issue from constraint density functional theory (DFT) by using the full-potential linearized augmented plane wave method. The Ueff parameters for prototypical transition-metal oxides, TMO (TM = Mn, Fe, Co, Ni), were calculated from the second derivative of the total energy functional with respect to the d occupation numbers inside the muffin-tin (MT) spheres, as a function of the sphere radius. We find that the calculated Ueff values depend significantly on the MT radius, with a variation of more than 3 eV when the MT radius changes from 2.0 to 2.7 a.u.; importantly, however, an identical valence band structure can be produced in all cases, with an approximate scaling of Ueff. This indicates that a simple transferability of the Ueff value among different calculation methods is not allowed. We further extend the constraint DFT to treat various electron configurations of the localized d orbitals in organometallic molecules, TMCp2 (TM = Cr, Mn, Fe, Co, Ni), and find that the calculated Ueff values can reproduce the experimentally determined ground-state electron configurations.
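Schematically, the constraint-DFT estimate described above evaluates the on-site parameter as a second derivative of the total energy with respect to the local d occupation counted inside the muffin-tin sphere (simplified notation, not the full linear-response expressions used in practice):

```latex
U_{\mathrm{eff}} \;\approx\; \frac{\partial^{2} E_{\mathrm{tot}}}{\partial n_{d}^{2}} ,
\qquad
n_{d} \;=\; \int_{r \le R_{\mathrm{MT}}} \rho_{d}(\mathbf{r})\,\mathrm{d}^{3}r ,
```

where ρd is the d-projected density. Because nd is defined by the muffin-tin radius R_MT, the resulting Ueff inherits the radius dependence reported above.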
Determination of the measurement threshold in gamma-ray spectrometry.
Korun, M; Vodenik, B; Zorko, B
2017-03-01
In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability of making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of the radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold the probability of detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak area uncertainty, were determined for 54 peaks where the probability of detection varied between a few percent and about 95% and the relative peak area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located even though the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gabaude, C M; Guillot, M; Gautier, J C; Saudemon, P; Chulia, D
1999-07-01
Compressibility properties of pharmaceutical materials are widely characterized by measuring the volume reduction of a powder column under pressure. Experimental data are commonly analyzed using the Heckel model from which powder deformation mechanisms are determined using mean yield pressure (Py). Several studies from the literature have shown the effects of operating conditions on the determination of Py and have pointed out the limitations of this model. The Heckel model requires true density and compacted mass values to determine Py from force-displacement data. It is likely that experimental errors will be introduced when measuring the true density and compacted mass. This study investigates the effects of true density and compacted mass on Py. Materials having different particle deformation mechanisms are studied. Punch displacement and applied pressure are measured for each material at two compression speeds. For each material, three different true density and compacted mass values are utilized to evaluate their effect on Py. The calculated variation of Py reaches 20%. This study demonstrates that the errors in measuring true density and compacted mass have a greater effect on Py than the errors incurred from not correcting the displacement measurements due to punch elasticity.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
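A minimal, hedged sketch of one family of the interval-estimation methods compared above (a case-resampling bootstrap around a least-squares Hill fit); the data, parameter names and settings below are illustrative, not ToxCast specifics:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ac50, n):
    """Hill concentration-response: response = top * c^n / (ac50^n + c^n)."""
    return top * c**n / (ac50**n + c**n)

rng = np.random.default_rng(1)
conc = np.logspace(-2, 2, 8)                                        # test concentrations
resp = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0, 5, conc.size)    # simulated responses

phat, _ = curve_fit(hill, conc, resp, p0=(90.0, 0.5, 1.0), bounds=(0, np.inf))

boot = []
for _ in range(500):                                # case-resampling bootstrap
    idx = rng.integers(0, conc.size, conc.size)
    try:
        pb, _ = curve_fit(hill, conc[idx], resp[idx], p0=phat, bounds=(0, np.inf))
        boot.append(pb)
    except RuntimeError:                            # skip resamples that fail to converge
        continue
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("AC50 estimate %.2f, 95%% bootstrap CI (%.2f, %.2f)" % (phat[1], lo[1], hi[1]))
```

Comparing the actual coverage of such intervals to their nominal level on simulated data, as the study does, is what reveals whether a given method is over- or under-confident.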
ERIC Educational Resources Information Center
He, Yong
2013-01-01
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert R.; Parker, Pauline A.
2011-01-01
This study investigates the comparability of two item response theory based equating methods: true score equating (TSE), and estimated true equating (ETE). Additionally, six scaling methods were implemented within each equating method: mean-sigma, mean-mean, two versions of fixed common item parameter, Stocking and Lord, and Haebara. Empirical…
Observed Score and True Score Equating Procedures for Multidimensional Item Response Theory
ERIC Educational Resources Information Center
Brossman, Bradley Grant
2010-01-01
The purpose of this research was to develop observed score and true score equating procedures to be used in conjunction with the Multidimensional Item Response Theory (MIRT) framework. Currently, MIRT scale linking procedures exist to place item parameter estimates and ability estimates on the same scale after separate calibrations are conducted.…
Araki, Tetsuro; Sholl, Lynette M.; Gerbaudo, Victor H.; Hatabu, Hiroto; Nishino, Mizuki
2014-01-01
OBJECTIVE The purpose of this article is to investigate the imaging characteristics of pathologically proven thymic hyperplasia and to identify features that can differentiate true hyperplasia from lymphoid hyperplasia. MATERIALS AND METHODS Thirty-one patients (nine men and 22 women; age range, 20–68 years) with pathologically confirmed thymic hyperplasia (18 true and 13 lymphoid) who underwent preoperative CT (n = 27), PET/CT (n = 5), or MRI (n = 6) were studied. The length and thickness of each thymic lobe and the transverse and anterior-posterior diameters and attenuation of the thymus were measured on CT. Thymic morphologic features and heterogeneity on CT and chemical shift on MRI were evaluated. Maximum standardized uptake values were measured on PET. Imaging features between true and lymphoid hyperplasia were compared. RESULTS No significant differences were observed between true and lymphoid hyperplasia in terms of thymic length, thickness, diameters, morphologic features, and other qualitative features (p > 0.16). The length, thickness, and diameters of thymic hyperplasia were significantly larger than the mean values of normal glands in the corresponding age group (p < 0.001). CT attenuation of lymphoid hyperplasia was significantly higher than that of true hyperplasia among 15 patients with contrast-enhanced CT (median, 47.9 vs 31.4 HU; Wilcoxon p = 0.03). The receiver operating characteristic analysis yielded greater than 41.2 HU as the optimal threshold for differentiating lymphoid hyperplasia from true hyperplasia, with 83% sensitivity and 89% specificity. A decrease of signal intensity on opposed-phase images was present in all four cases with in- and opposed-phase imaging. The mean maximum standardized uptake value was 2.66. CONCLUSION CT attenuation of the thymus was significantly higher in lymphoid hyperplasia than in true hyperplasia, with an optimal threshold of greater than 41.2 HU in this cohort of patients with pathologically confirmed thymic hyperplasia. PMID:24555583
40 CFR Table 4 to Subpart Ooo of... - Operating Parameter Levels
Code of Federal Regulations, 2010 CFR
2010-07-01
Table 4 to Subpart OOO of Part 63—Operating Parameter Levels. The table lists, for each control device, the parameters to be monitored and the corresponding operating parameter levels (for example, the maximum organic HAP concentration level or reading at the outlet of the device).
[The early pregnancy factor (EPF) as an early marker of disorders in pregnancy].
Straube, W; Römer, T; Zeenni, L; Loh, M
1995-01-01
The early pregnancy factor (EPF) seems to be very helpful in clinical applications such as early detection of pregnancy, differential diagnosis of failure of fertilization or implantation, and prognosis of a fertilized ovum. Our purpose was to investigate the diagnostic value of single and serial measurements of EPF, especially in the differential diagnosis of abortion and extrauterine pregnancy. Women with a history of 6-16 weeks of amenorrhoea with/without vaginal bleeding were included in the prospective study. The EPF test was carried out by means of the rosette inhibition method. EPF proved to be always positive in normal pregnant women and always negative in nonpregnant controls. In cases of threatened abortion the prognosis was good when the EPF values were positive, and poor when they became negative. Patients suffering from spontaneous and missed abortion mostly showed negative EPF values. This was also true in ectopic pregnancies. The sensitivity and specificity of the EPF test system were 83%. The positive predictive value was observed to be 54% and the negative predictive value 95%. The EPF as an early embryonic signal may be a suitable parameter for clinical use in detecting pregnancy disturbances very early.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
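A minimal sketch of the variance-based step described above (pick-freeze Monte Carlo estimation of first-order Sobol' indices). The three-input model below is a stand-in for the periodic-error model, and the input ranges are illustrative assumptions:

```python
# Hedged sketch: first-order Sobol' indices via the Saltelli (2010) pick-freeze
# estimator. The toy model is NOT the authors' periodic-error model.
import numpy as np

def model(x):
    # x columns: toy "misalignment" inputs (radians); output: a surrogate
    # periodic-error amplitude, for illustration only
    a, b, c = x[:, 0], x[:, 1], x[:, 2]
    return np.sin(a) + 0.5 * np.sin(b) ** 2 + 0.1 * c * np.sin(a)

rng = np.random.default_rng(0)
N, d = 20000, 3
A = rng.uniform(-0.05, 0.05, (N, d))
B = rng.uniform(-0.05, 0.05, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # "pick-freeze": replace column i
    Si = np.mean(fB * (model(ABi) - fA)) / var   # first-order index estimator
    print(f"first-order Sobol index S{i+1} = {Si:.3f}")
```

The same sampling scheme extends to total-order indices, and the spread of the indices across repeated runs gives a feel for how the input-uncertainty ranges affect the computed sensitivities, as examined in the study.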
V2.2 L2AS Detailed Release Description April 15, 2002
Atmospheric Science Data Center
2013-03-14
... 'optically thick atmosphere' algorithm. Implement new experimental aerosol retrieval algorithm over homogeneous surface types. ... Change values: cloud_mask_decision_matrix(1,1): .true. -> .false. cloud_mask_decision_matrix(2,1): .true. -> .false. ...
van Reenen, Mari; Westerhuis, Johan A; Reinecke, Carolus J; Venter, J Hendrik
2017-02-02
ERp is a variable selection and classification method for metabolomics data. ERp uses minimized classification error rates, based on data from a control and experimental group, to test the null hypothesis of no difference between the distributions of variables over the two groups. If the associated p-values are significant they indicate discriminatory variables (i.e. informative metabolites). The p-values are calculated assuming a common continuous strictly increasing cumulative distribution under the null hypothesis. This assumption is violated when zero-valued observations can occur with positive probability, a characteristic of GC-MS metabolomics data, disqualifying ERp in this context. This paper extends ERp to address two sources of zero-valued observations: (i) zeros reflecting the complete absence of a metabolite from a sample (true zeros); and (ii) zeros reflecting a measurement below the detection limit. This is achieved by allowing the null cumulative distribution function to take the form of a mixture between a jump at zero and a continuous strictly increasing function. The extended ERp approach is referred to as XERp. XERp is no longer non-parametric, but its null distributions depend only on one parameter, the true proportion of zeros. Under the null hypothesis this parameter can be estimated by the proportion of zeros in the available data. XERp is shown to perform well with regard to bias and power. To demonstrate the utility of XERp, it is applied to GC-MS data from a metabolomics study on tuberculosis meningitis in infants and children. We find that XERp is able to provide an informative shortlist of discriminatory variables, while attaining satisfactory classification accuracy for new subjects in a leave-one-out cross-validation context. XERp takes into account the distributional structure of data with a probability mass at zero without requiring any knowledge of the detection limit of the metabolomics platform. XERp is able to identify variables that discriminate between two groups by simultaneously extracting information from the difference in the proportion of zeros and shifts in the distributions of the non-zero observations. XERp uses simple rules to classify new subjects and a weight pair to adjust for unequal sample sizes or sensitivity and specificity requirements.
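In symbols, the null cumulative distribution assumed by XERp can be sketched as a zero-inflated mixture (the notation here is simplified from the paper's):

```latex
F_{0}(x) \;=\; \pi_{0}\,\mathbf{1}\{x \ge 0\} \;+\; \left(1-\pi_{0}\right) G(x) ,
```

where π0 is the true proportion of zeros (estimated under the null hypothesis by the observed proportion of zeros) and G is a continuous, strictly increasing cumulative distribution function describing the non-zero observations.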
Donini, Lorenzo M; Poggiogalle, Eleonora; Molfino, Alessio; Rosano, Aldo; Lenzi, Andrea; Rossi Fanelli, Filippo; Muscaritoli, Maurizio
2016-10-01
Malnutrition plays a major role in clinical and functional impairment in older adults. The use of validated, user-friendly and rapid screening tools for malnutrition in the elderly may improve the diagnosis and, possibly, the prognosis. The aim of this study was to assess the agreement between Mini-Nutritional Assessment (MNA), considered as a reference tool, MNA short form (MNA-SF), Malnutrition Universal Screening Tool (MUST), and Nutrition Risk Screening (NRS-2002) in elderly institutionalized participants. Participants were enrolled among nursing home residents and underwent a multidimensional evaluation. Predictive value and survival analysis were performed to compare the nutritional classifications obtained from the different tools. A total of 246 participants (164 women, age: 82.3 ± 9 years, and 82 men, age: 76.5 ± 11 years) were enrolled. Based on MNA, 22.6% of females and 17% of males were classified as malnourished; 56.7% of women and 61% of men were at risk of malnutrition. Agreement between MNA and MUST or NRS-2002 was classified as "fair" (k = 0.270 and 0.291, respectively; P < .001), whereas the agreement between MNA and MNA-SF was classified as "moderate" (k = 0.588; P < .001). Because of the high percentage of false negative participants, MUST and NRS-2002 presented a low overall predictive value compared with MNA and MNA-SF. Clinical parameters were significantly different in false negative participants with MUST or NRS-2002 from true negative and true positive individuals using the reference tool. For all screening tools, there was a significant association between malnutrition and mortality. MNA showed the best predictive value for survival among well-nourished participants. Functional, psychological, and cognitive parameters, not considered in MUST and NRS-2002 tools, are probably more important risk factors for malnutrition than acute illness in geriatric long-term care inpatient settings and may account for the low predictive value of these tests. MNA-SF seems to combine the predictive capacity of the full version of the MNA with a sufficiently short time of administration. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
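One common way to write the two quantities, with R the pre-study odds that the tested effect is real, α the significance level and 1−β the power (a sketch consistent with, though not copied from, the authors' exposition):

```latex
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha} ,
\qquad
\mathrm{NPV} \;=\; \frac{1-\alpha}{(1-\alpha) + \beta\,R} ,
```

which makes explicit why both metrics require an a priori estimate of R in addition to the design quantities α and β.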
Neutrosophic segmentation of breast lesions for dedicated breast CT
NASA Astrophysics Data System (ADS)
Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.
2017-03-01
We proposed a neutrosophic approach for segmenting breast lesions in breast computed tomography (bCT) images. The neutrosophic set (NS) considers the nature and properties of neutrality (or indeterminacy), which is neither true nor false. We considered the image noise as an indeterminate component, while treating the breast lesion and other breast areas as true and false components. We first transformed the image into the NS domain, where each voxel is described by its membership in the True, Indeterminate, and False sets. The operations α-mean, β-enhancement, and γ-plateau iteratively smooth and contrast-enhance the image to reduce the noise level of the true set. Once the true image no longer changes, we applied an existing algorithm for bCT images, RGI segmentation, to the resulting image to segment the breast lesions. We compared the segmentation performance of the proposed method (named NS-RGI) to that of the regular RGI segmentation. We used a total of 122 breast lesions (44 benign, 78 malignant) from 123 non-contrast bCT cases. We measured the segmentation performances of the NS-RGI and the RGI using the DICE coefficient. The average DICE value of the NS-RGI was 0.82 (STD: 0.09), while that of the RGI was 0.80 (STD: 0.12). The difference between the two DICE values was statistically significant (paired t-test, p-value = 0.0007). We conducted a subsequent feature analysis on the resulting segmentations. The classifier performance for the NS-RGI (AUC = 0.80) improved over that of the RGI (AUC = 0.69, p-value = 0.006).
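For reference, the DICE coefficient used to score the segmentations is the standard overlap measure; a minimal sketch for two binary masks is shown below (the array names and toy geometry are illustrative):

```python
import numpy as np

def dice(mask_a, mask_b):
    """DICE = 2*|A intersect B| / (|A| + |B|) for boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two overlapping spheres on a small voxel grid
z, y, x = np.ogrid[:64, :64, :64]
seg = (x - 30) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 15 ** 2   # segmentation
ref = (x - 34) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 15 ** 2   # reference
print(round(dice(seg, ref), 3))
```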
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation has to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. [Figure captions: (1) Contours of the expected information gain; the optimal observation location corresponds to the maximum value. (2) Posterior marginal probability densities of the unknown parameters; the thick solid black lines are for the designed location, the other seven lines are for randomly chosen locations, and the true values are denoted by vertical lines. The unknown parameters are estimated better with the designed location.]
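Schematically, the design criterion described above scores a candidate sampling location d by the expected relative entropy (Kullback-Leibler divergence) between posterior and prior over the source parameters θ, averaged over the data y that location could produce (standard notation, not the paper's exact symbols):

```latex
U(d) \;=\; \mathbb{E}_{y\mid d}\!\left[ D_{\mathrm{KL}}\!\left( p(\theta \mid y, d)\,\Vert\, p(\theta) \right) \right]
\;=\; \int p(y \mid d) \int p(\theta \mid y, d)\,\ln\frac{p(\theta \mid y, d)}{p(\theta)}\;\mathrm{d}\theta\,\mathrm{d}y ,
```

and the location maximizing U(d) is selected before the MCMC estimation step; the sparse-grid surrogate makes the repeated likelihood evaluations inside this double integral affordable.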
Kakite, Suguru; Dyvorne, Hadrien; Besa, Cecilia; Cooper, Nancy; Facciuto, Marcelo; Donnerhack, Claudia; Taouli, Bachir
2015-01-01
To evaluate short-term test-retest and interobserver reproducibility of IVIM (intravoxel incoherent motion) diffusion parameters and ADC (apparent diffusion coefficient) of hepatocellular carcinoma (HCC) and liver parenchyma at 3.0T. In this prospective Institutional Review Board (IRB)-approved study, 11 patients were scanned twice using a free-breathing single-shot echo-planar-imaging, diffusion-weighted imaging (DWI) sequence using 4 b values (b = 0, 50, 500, 1000 s/mm(2)) and IVIM DWI using 16 b values (0-800 s/mm(2)) at 3.0T. IVIM parameters (D: true diffusion coefficient, D*: pseudodiffusion coefficient, PF: perfusion fraction) and ADC (using 4 b and 16 b) were calculated. Short-term test-retest and interobserver reproducibility of IVIM parameters and ADC were assessed by measuring correlation coefficient, coefficient of variation (CV), and Bland-Altman limits of agreements (BA-LA). Fifteen HCCs were assessed in 10 patients. Reproducibility of IVIM metrics in HCC was poor for D* and PF (mean CV 60.6% and 37.3%, BA-LA: -161.6% to 135.3% and -66.2% to 101.0%, for D* and PF, respectively), good for D and ADC (CV 19.7% and <16%, BA-LA -57.4% to 36.3% and -38.2 to 34.1%, for D and ADC, respectively). Interobserver reproducibility was on the same order of test-retest reproducibility except for PF in HCC. Reproducibility of diffusion parameters was better in liver parenchyma compared to HCC. Poor reproducibility of D*/PF and good reproducibility for D/ADC were observed in HCC and liver parenchyma. These findings may have implications for trials using DWI in HCC. © 2014 Wiley Periodicals, Inc.
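For context, the IVIM parameters reported above come from the standard bi-exponential signal model, written here in its commonly used simplified form (which may differ in detail from the study's fitting procedure):

```latex
\frac{S(b)}{S_{0}} \;=\; \mathrm{PF}\, e^{-b D^{*}} \;+\; \left(1-\mathrm{PF}\right) e^{-b D} ,
```

with PF the perfusion fraction, D* the pseudodiffusion coefficient and D the true diffusion coefficient, while the ADC is obtained from a mono-exponential fit of the same data. Test-retest variability is then summarized by the coefficient of variation, CV = SD/mean × 100%, and by the Bland-Altman limits of agreement.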
Najih, Hayat; Arous, Salim; Laarje, Aziza; Baghdadi, Dalila; Benouna, Mohamed Ghali; Azzouzi, Leila; Habbal, Rachida
2016-01-01
Rheumatic mitral valve stenosis (MVS) is a frequent valvulopathy in developing countries. However, industrialized countries have seen the emergence of new etiologies of MVS in recent years, in particular drug-induced and/or toxic valvular regurgitation and stenosis. For this reason, the echocardiographic assessment of MVS and especially the definition of objective diagnostic criteria for severe MVS remains relevant. The objectives are: to determine whether there is a direct causal link between mean transmitral gradient (MTG) and severity of MVS in patients with severe MVS or true severe MVS (primary criterion); to analyze different parameters determining the MTG (secondary criterion). We conducted a single-center cross-sectional study including all patients with severe or true severe MVS admitted to the Department of Cardiology, University Hospital Ibn Rushd, Casablanca over a period of one year (January 2014-December 2014). We analyzed data from two groups of patients separately: those with a mean transmitral gradient <10 mmHg (group 1) and those with a gradient >10 mmHg (group 2). 50 patients with severe or true severe MVS were included in the study. The average age of our patients was 41.7 years with a female predominance (sex ratio 0.25). 64% of patients had severe MVS and 36% of patients had true severe MVS. 52% (26 patients) had MTG <10 mmHg and 48% (24 patients) had a mean gradient >10 mmHg, suggesting no direct correlation between the severity of MVS and MTG (Pearson's correlation coefficient R: -0.137). With regard to dyspnea, 80% of patients of group 1 had stage II NYHA dyspnea and 70% of patients of group 2 had stage III NYHA dyspnea (41%) or stage IV NYHA dyspnea (29%), which means that there was a significant correlation between MTG and the severity of dyspnea (R: 0.586 and p: 0.001). The analytical study of heart rate and the presence of cardiac decompensation compared with the mean transmitral gradient showed a significant correlation. Indeed, among patients in group 1, 96% had HR between 60 and 100 bpm and no patient had decompensated heart failure. In group 2, 54% (13 patients) had HR >100 bpm and 7 of them (53%) had left decompensated heart failure. The analysis of systolic pulmonary artery pressure conducted in both groups of the study revealed the existence of a statistically significant correlation (R: 0.518 and P: 0.001) between systolic pulmonary artery pressure (SPAP) and MTG. Ventricular rhythm regularity and right ventricular function were not correlated with MTG (R: 0.038 and R: -0.002, respectively). The mean transmitral gradient is a good indicator of mitral stenosis tolerance but it imperfectly reflects mitral stenosis severity, as this depends on several hemodynamic parameters. True severe mitral stenosis may have a mean transmitral gradient <10 mmHg, which is why the value of MTG should never be interpreted as a single value.
320-row CT renal perfusion imaging in patients with aortic dissection: A preliminary study.
Liu, Dongting; Liu, Jiayi; Wen, Zhaoying; Li, Yu; Sun, Zhonghua; Xu, Qin; Fan, Zhanming
2017-01-01
To investigate the clinical value of renal perfusion imaging in patients with aortic dissection (AD) using 320-row computed tomography (CT), and to determine the relationship between renal CT perfusion imaging and various factors of aortic dissection. Forty-three patients with AD who underwent 320-row CT renal perfusion before operation were prospectively enrolled in this study. Diagnosis of AD was confirmed by transthoracic echocardiography. Blood flow (BF) of bilateral renal perfusion was measured and analyzed. CT perfusion imaging signs of AD in relation to the type of AD, number of entry tears and the false lumen thrombus were observed and compared. The BF values of patients with type A AD were significantly lower than those of patients with type B AD (P = 0.004). No significant difference was found in the BF between different numbers of intimal tears (P = 0.288), but BF values were significantly higher in cases with a false lumen without thrombus and renal arteries arising from the true lumen than in those with thrombus (P = 0.036). The BF values measured between the true lumen, false lumen and overriding groups were different (P = 0.02), with the true lumen group having the highest. Also, the difference in BF values between true lumen and false lumen groups was statistically significant (P = 0.016), while no statistical significance was found in the other two groups (P > 0.05). The larger the size of intimal entry tears, the greater the BF values (P = 0.044). This study shows a direct correlation between renal CT perfusion changes and AD, with the size, number of intimal tears, different types of AD, different renal artery origins and false lumen thrombosis, significantly affecting the perfusion values.
Peripheries of epicycles in the Grahalāghava
NASA Astrophysics Data System (ADS)
Rao, S. Balachandra; Vanaja, V.; Shailaja, M.
2017-12-01
For finding the true positions of the Sun, the Moon and the five planets the Indian classical astronomical texts use the concept of the manda epicycle which accounts for the equation of the centre. In addition, in the case of the five planets (Mercury, Venus, Mars, Jupiter and Saturn) another equation called śīghraphala and the corresponding śīghra epicycle are adopted. This correction corresponds to the transformation of the true heliocentric longitude to the true geocentric longitude in modern astronomy. In some of the popularly used handbooks (karaṇa) instead of giving the mathematical expressions for the above said equations, their discrete numerical values, at intervals of 15 degrees, are given. In the present paper using the data of discrete numerical values we build up continuous functions of periodic terms for the manda and śīghra equations. Further, we obtain the critical points and the maximum values for these two equations.
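A hedged sketch of the reconstruction step described above: values tabulated at 15-degree intervals are fitted with a short trigonometric (Fourier) series to obtain a continuous periodic function, whose maximum and critical points can then be located. The tabulated values below are synthetic placeholders, not the Grahalāghava's manda or śīghra tables.

```python
import numpy as np

theta = np.deg2rad(np.arange(0, 360, 15))                      # 15-degree tabulation points
tabulated = 2.2 * np.sin(theta) + 0.12 * np.sin(2 * theta)     # placeholder equation values (degrees)

# Least-squares fit of a0 + sum_k (a_k cos k*theta + b_k sin k*theta), k = 1..3
K = 3
cols = [np.ones_like(theta)]
for k in range(1, K + 1):
    cols += [np.cos(k * theta), np.sin(k * theta)]
coef, *_ = np.linalg.lstsq(np.column_stack(cols), tabulated, rcond=None)

# Evaluate the continuous function on a fine grid and locate its maximum
fine = np.deg2rad(np.linspace(0, 360, 3601))
cols_f = [np.ones_like(fine)]
for k in range(1, K + 1):
    cols_f += [np.cos(k * fine), np.sin(k * fine)]
curve = np.column_stack(cols_f) @ coef
print("maximum equation %.3f deg at anomaly %.1f deg"
      % (curve.max(), np.rad2deg(fine[curve.argmax()])))
```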
NASA Astrophysics Data System (ADS)
Mannino, Irene; Cianfarra, Paola; Salvini, Francesco
2010-05-01
Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e. pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks undergo fracturing both during diagenesis and during tectonic processes. The attitude, spatial distribution and connectivity of brittle deformation features rule the secondary permeability of carbonate rocks and therefore the accumulation and the pathways of deep fluids (groundwater, hydrocarbons). This is particularly true in fault zones, where the damage zone and the fault core show hydraulic properties different from the pristine rock as well as from each other. To improve the knowledge of fault architecture and fault hydraulic properties, we study the brittle deformation patterns related to fault kinematics in carbonate successions. In particular we focussed on the evolution of damage-zone fracturing. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units represent field analogues of reservoir rocks in the Southern Apennines. We combine the study of the rock physical characteristics of 22 faults with quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracturing type, attitudes, and spatial intensity distribution, using the dimension/spacing ratio, namely the H/S ratio, where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring and H/S transects) were performed to infer a focussed, general algorithm that describes the expected intensity of the fracturing process. The analytical model was fit to field measurements by a Montecarlo-convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameter sets, and the results are compared with field data. For each item a mean error value (RMS) is computed, representing the effectiveness of the fit and thus the validity of the analysis. Eventually, the method selects the set of parameters that produces the lowest values. The tested algorithm describes the expected H/S values as a function of the distance from the fault core (D), the clay content (S), and the fault throw (T). The preliminary results of the Montecarlo inversion show that the distance (D) has the strongest influence on the H/S spatial distribution, and the H/S value decreases with distance from the fault core. The rheological parameter shows a value similar to the diagenetic H/S values (1-1.5). The resulting equation has a reasonable RMS value of 0.116. The results of the Montecarlo models were finally implemented in FRAP, a fault-environment modelling software package. It is a true 4D tool that can predict the stress conditions and permeability architecture associated with a given fault during single or multiple tectonic events. We present some models of fault-related fracturing among the studied faults performed with FRAP and compare them with the field measurements to test the validity of our methodology.
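A minimal sketch of a Montecarlo-convergent fit of the kind described above: random parameter sets are drawn, the predicted H/S values are compared to field measurements, and the set with the lowest RMS is retained. The functional form linking H/S to distance from the fault core (D), clay content (S) and throw (T), and all numbers below, are illustrative placeholders rather than the authors' calibrated algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "field" measurements: H/S vs distance from the fault core (m),
# with a single clay content (fraction) and throw (m). Placeholder data only.
D = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
S_clay, T_throw = 0.15, 120.0
hs_obs = np.array([6.1, 4.8, 3.6, 2.4, 1.7, 1.4])

def hs_model(D, p):
    """Placeholder model: a background level plus a decay with distance,
    modulated by throw and clay content."""
    h0, lam, bg = p
    return bg + h0 * (1 + T_throw / 100.0) * (1 - S_clay) * np.exp(-D / lam)

best_p, best_rms = None, np.inf
for _ in range(100000):                                  # Monte Carlo random search
    p = rng.uniform([0.1, 1.0, 0.5], [10.0, 100.0, 2.0])
    rms = np.sqrt(np.mean((hs_model(D, p) - hs_obs) ** 2))
    if rms < best_rms:
        best_p, best_rms = p, rms                        # keep the best-fitting set
print("best parameters", np.round(best_p, 2), "RMS", round(best_rms, 3))
```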
Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.
Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G
2016-07-26
The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and therefore assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion-channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel models is publicly available. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corns, R; Zhao, Y; Huang, V
2016-06-15
Purpose: A simple jaw calibration method is proposed for Varian TrueBeam using an EPID-encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use the EPID imager with pixel size 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder value. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set the jaw into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses the images and provides sub-pixel accuracy to determine the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R{sup 2}>0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge exists for junction matching if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair within sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the underlying foundation for addressing the walkout issue.
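A minimal sketch of the two-point linear prediction step described above, with invented jaw-position/encoder readings; the real procedure derives the radiation-defined jaw positions from EPID images of radio-opaque markers.

```python
import numpy as np

# Hypothetical readings from two symmetric fields: the radiation-defined jaw
# position (cm, from EPID image analysis) and the corresponding jaw encoder
# value recorded at each setting.  Numbers are illustrative only.
jaw_position_cm = np.array([5.0, 15.0])
encoder_counts  = np.array([20480.0, 61440.0])

# Fit the linear encoder-vs-position relationship for this jaw.
slope, intercept = np.polyfit(jaw_position_cm, encoder_counts, 1)

# Predict the encoder values needed to set the calibration points required
# by the jaw-calibration procedure (1 cm and 19 cm in the abstract).
for target in (1.0, 19.0):
    predicted = slope * target + intercept
    print(f"jaw at {target:4.1f} cm  ->  encoder ~ {predicted:.0f}")
```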
SU-E-J-19: An Intra-Institutional Study of Cone-Beam CT Dose for Image-Guided Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knutson, N; Present Address: Mount Sinai Roosevelt Hospital, New York, NY; Rankine, L
2015-06-15
Purpose: To determine the variability of Cone-Beam CT Dose Index (CB-CTDI) across multiple on-board imaging (OBI) systems within a single institution, and compare these with manufacturer-provided data. Methods: The CB-CTDI was measured on three Trilogy and three TrueBeam Varian OBI systems, for six different clinically used scan protocols. Measurements were taken using a 10 cm long CT ionization chamber in either a 16 cm (head-simulating) or 32 cm (body-simulating) diameter, acrylic, cylindrical, 15 cm long CTDI phantom. We assessed the variation in CB-CTDI between the OBI systems and compared our measured values to the data provided by the manufacturer. Results: The standard error in the CB-CTDI measured for all protocols was found to be within ±2% and ±5% of the mean for TrueBeam and Trilogy, respectively. For all head scan protocols, the measured TrueBeam values were lower than the manufacturer's reported values, with a maximum difference of 13.9% and an average difference of 11%. For the body scan protocols, the TrueBeam measured values were 3% and 13% greater than the manufacturer's reported values for two out of three protocols, and 38% lower than reported for the third protocol. In total, 7/18 CB-CTDI measurements fell within the manufacturer's specified range (±10%). Across all scans the TrueBeam machines were found to have a lower CB-CTDI than Trilogy, particularly for the head scan protocols, which showed decreases of up to 30%. Conclusion: The intra-institutional variation of CB-CTDI was found to be clinically acceptable at less than 5%. For the TrueBeam OBI system, over half of the measured scans failed to fall within the manufacturer's quoted range of ±10%; however, all measured values were within 15% of the manufacturer's reported values. For accurate assessment and reporting of imaging dose to radiotherapy patients, our results indicate a need for standardization in CB-CTDI measurement technique.
Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze
2018-04-01
Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with complementary prognostic value with respect to SUV and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method were evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and the associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for that derived using FLAB, which, when combined with volume, showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even accurate segmentation can lead to an inaccurate sphericity value, and vice versa. Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method, for which there was a small improvement in stratification when the parameters were combined.
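For reference, the sketch below evaluates the commonly used sphericity definition (surface area of an equal-volume sphere divided by the measured surface area) from a delineated volume and surface area; the example numbers are invented, and it is assumed the study uses this standard convention.

```python
import math

def sphericity(volume_ml, surface_area_cm2):
    """Standard sphericity definition: ratio of the surface area of a sphere
    with the same volume to the measured surface area (1.0 for a perfect
    sphere, smaller for irregular shapes).  Volume in cm^3 (= mL), area in cm^2."""
    return math.pi ** (1.0 / 3.0) * (6.0 * volume_ml) ** (2.0 / 3.0) / surface_area_cm2

# Example: a 30 mL tumour delineation with a measured surface area of 60 cm^2.
print(round(sphericity(30.0, 60.0), 3))
```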
Pulido-Moran, M; Bullon, P; Morillo, J M; Battino, M; Quiles, J L; Ramirez-Tortosa, MCarmen
2017-05-01
To examine insulin resistance measured by surrogate indices in subjects with and without periodontitis, and to find out whether there is any correlation between dietary intake and insulin resistance. Fifty-five patients were recruited to participate in this cross-sectional study. Insulin resistance was measured by the homoeostasis model assessment (HOMA-IR) and the quantitative insulin sensitivity check index; glycaemia, creatinine, uric acid, high-density lipoproteins, low-density lipoproteins, very-low-density lipoproteins and triglycerides, among others, were also determined. True periodontal disease was elucidated through the examination of probing pocket depth, clinical attachment level, recession of the gingival margin and gingival bleeding. The statistical analyses used were Student's t-test for independent variables, or Kolmogorov-Smirnov, if variances were homogeneous; if not, the Mann-Whitney U test was applied instead. Correlations between variables were assessed using Pearson's correlation coefficients. True periodontal disease was confirmed through the greater values of probing pocket depth, clinical attachment level, gingival margin and gingival bleeding in the periodontitis group in comparison with the non-periodontitis group. Insulin resistance was evidenced by the greater HOMA-IR values as well as by the lower quantitative insulin sensitivity check index values in the periodontitis group. Fasting insulin, glucose, uric acid, creatinine, low-density lipoprotein, triglyceride and very-low-density lipoprotein levels were significantly higher in the periodontitis group. Pearson's correlations did not show any association between the diet data and the insulin resistance parameters in periodontitis patients. A putative systemic relationship between insulin resistance and periodontitis exists, but an effect of diet on this relationship does not seem conceivable. Copyright © 2017 Elsevier Ltd. All rights reserved.
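For context, the surrogate indices named above are usually computed as below; the constants correspond to the conventional unit choices (glucose in mg/dL, insulin in µU/mL), which is an assumption since the abstract does not state the units, and the example values are invented.

```python
import math

def homa_ir(fasting_glucose_mgdl, fasting_insulin_uUml):
    """HOMA-IR with glucose in mg/dL (divide by 405); equivalently
    glucose[mmol/L] * insulin[uU/mL] / 22.5."""
    return fasting_glucose_mgdl * fasting_insulin_uUml / 405.0

def quicki(fasting_glucose_mgdl, fasting_insulin_uUml):
    """QUICKI = 1 / (log10(insulin [uU/mL]) + log10(glucose [mg/dL]))."""
    return 1.0 / (math.log10(fasting_insulin_uUml) + math.log10(fasting_glucose_mgdl))

# Illustrative values only: higher HOMA-IR and lower QUICKI indicate
# greater insulin resistance, as reported for the periodontitis group.
print(round(homa_ir(100.0, 12.0), 2), round(quicki(100.0, 12.0), 3))
```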
Nascimento, Francisco O; Yang, Solomon; Larrauri-Reyes, Maiteder; Pineda, Andres M; Cornielle, Vertilio; Santana, Orlando; Heimowitz, Todd B; Stone, Gregg W; Beohar, Nirat
2014-02-01
The presentation of stress cardiomyopathy (SC) with nonobstructive coronary artery disease mimics that of ST-segment elevation myocardial infarction (STEMI) due to coronary occlusion. No single parameter has been successful in differentiating the 2 entities. We thus sought to develop a noninvasive clinical tool to discriminate between these 2 conditions. We retrospectively reviewed 59 consecutive cases of SC at our institution from July 2005 through June 2011 and compared those with 60 consecutive cases of angiographically confirmed STEMI treated with primary percutaneous coronary intervention in the same period. All patients underwent acute echocardiography, and the peak troponin I level was determined. The troponin-ejection fraction product (TEFP) was derived by multiplying the peak troponin I level and the echocardiographically derived left ventricular ejection fraction. Comparing the SC and STEMI groups, the mean left ventricular ejection fraction at the time of presentation was 30 ± 9% versus 44 ± 11%, respectively (p <0.001), and the peak troponin I was 7.6 ± 18 versus 102.2 ± 110.3 ng/dl, respectively (p <0.001). The mean TEFP was thus 182 ± 380 and 4,088 ± 4,244 for the SC and STEMI groups, respectively (p <0.001). Receiver operating characteristic curve analysis showed that a TEFP value ≥250 had a sensitivity of 95%, a specificity of 87%, a negative predictive value of 94%, a positive predictive value of 88%, and an overall accuracy of 91% to differentiate a true STEMI from SC (C-statistic 0.91 ± 0.02, p <0.001). In conclusion, for patients not undergoing emergent angiography, the TEFP may be used with high accuracy to differentiate SC with nonobstructive coronary artery disease from true STEMI due to coronary occlusion. Copyright © 2014 Elsevier Inc. All rights reserved.
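A small sketch of the TEFP rule described above, applying the reported ≥250 cutoff to the two groups' mean values; the cutoff and performance figures come from the abstract, while the helper function names are mine.

```python
def troponin_ejection_fraction_product(peak_troponin_ng_dl, lvef_percent):
    """TEFP as described: peak troponin I multiplied by the echo-derived LVEF."""
    return peak_troponin_ng_dl * lvef_percent

def likely_diagnosis(tefp, cutoff=250.0):
    """A cutoff of >=250 favoured true STEMI over stress cardiomyopathy in the
    reported cohort (sensitivity 95%, specificity 87%)."""
    return "suggests STEMI" if tefp >= cutoff else "suggests stress cardiomyopathy"

# Example with the reported group means: SC (7.6 ng/dl, 30%) vs STEMI (102.2 ng/dl, 44%).
for troponin, ef in [(7.6, 30.0), (102.2, 44.0)]:
    tefp = troponin_ejection_fraction_product(troponin, ef)
    print(f"TEFP = {tefp:8.1f}  ->  {likely_diagnosis(tefp)}")
```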
Diagnosis of the "large medial meniscus" of the knee on MR imaging.
Samoto, Nobuhiko; Kozuma, Masakazu; Tokuhisa, Toshio; Kobayashi, Kunio
2006-11-01
Although several quantitative magnetic resonance (MR) diagnostic criteria for discoid lateral meniscus (DLM) have been described, there are no criteria by which to estimate the size of the medial meniscus. We define a medial meniscus that exceeds the normal size as a "large medial meniscus" (LMM), and the purpose of this study is to establish quantitative MR diagnostic criteria for LMM. The MR imaging findings of 96 knees with arthroscopically confirmed intact semilunar lateral meniscus (SLM), 18 knees with intact DLM, 105 knees with intact semilunar medial meniscus (SMM) and 4 knees with torn LMM were analyzed. The following three quantitative parameters were measured: (a) meniscal width (MW): the minimum MW on the coronal slice; (b) ratio of the meniscus to the tibia (RMT): the ratio of minimum MW to maximum tibial width on the coronal slice; (c) continuity of the anterior and posterior horns (CAPH): the number of consecutive 5-mm-thick sagittal slices showing continuity between the anterior horn and the posterior horn of the meniscus on sagittal slices. Using logistic discriminant analysis between the intact SLM and DLM groups and using descriptive statistics of the intact SLM and SMM groups, the cutoff values used to discriminate LMM from SMM were calculated by MW and RMT. Moreover, the efficacy of these cutoff values and three slices of the cutoff values for CAPH were estimated in the medial meniscus group. "MW ≥ 11 mm" and "RMT ≥ 15%" were determined to be effective diagnostic criteria for LMM, while three of four cases in the torn LMM group were true positives and specificity was 99% in both criteria. When "CAPH ≥ 3 slices" was used as a criterion, three of four torn LMM cases were true positives and specificity was 93%.
Hippeläinen, Eero; Mäkelä, Teemu; Kaasalainen, Touko; Kaleva, Erna
2017-12-01
Developments in single photon emission tomography instrumentation and reconstruction methods present a potential for decreasing acquisition times. One such recent option for myocardial perfusion imaging (MPI) is IQ-SPECT. This study was motivated by the inconsistency in the reported ejection fraction (EF) and left ventricular (LV) volume results between IQ-SPECT and more conventional low-energy high-resolution (LEHR) collimation protocols. IQ-SPECT and LEHR quantitative results were compared while the equivalent number of iterations (EI) was varied. The end-diastolic (EDV) and end-systolic volumes (ESV) and the derived EF values were investigated. A dynamic heart phantom was used to produce repeatable ESVs, EDVs and EFs. Phantom performance was verified by comparing the set EF values to those measured from a gated multi-slice X-ray computed tomography (CT) scan (EF_True). The phantom, with EF settings of 45%, 55%, 65% and 70%, was imaged with both IQ-SPECT and LEHR protocols. The data were reconstructed with different EI, and two commonly used clinical myocardium delineation software packages were used to evaluate the LV volumes. The CT verification showed that the phantom EF settings were repeatable and accurate, with EF_True being within 1 percentage point of the manufacturer's nominal value. Depending on EI, both MPI protocols can be made to produce correct EF estimates, but the IQ-SPECT protocol produced on average 41% and 42% smaller EDV and ESV when compared with the phantom's volumes, while the LEHR protocol underestimated the volumes by 24% and 21%, respectively. The volume results were largely similar between the delineation methods used. The reconstruction parameters can greatly affect the volume estimates obtained from perfusion studies. IQ-SPECT produces systematically smaller LV volumes than the conventional LEHR MPI protocol. The volume estimates are also software dependent.
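A short illustration of why EF can remain accurate while volumes are biased: if EDV and ESV are underestimated by the same fraction, EF = (EDV − ESV)/EDV is unchanged. The numbers below are invented; the ~41% underestimation figure is the one reported for IQ-SPECT.

```python
def ejection_fraction(edv_ml, esv_ml):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def volume_bias_percent(measured_ml, true_ml):
    """Relative volume error of a protocol/software combination."""
    return 100.0 * (measured_ml - true_ml) / true_ml

true_edv, true_esv = 120.0, 54.0
scale = 0.59                                  # ~41% volume underestimation
print(round(ejection_fraction(true_edv, true_esv), 1))                  # true EF = 55%
print(round(ejection_fraction(scale * true_edv, scale * true_esv), 1))  # still 55%
print(round(volume_bias_percent(scale * true_edv, true_edv), 1))        # ~ -41%
```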
Theoretical Advances in Sequential Data Assimilation for the Atmosphere and Oceans
NASA Astrophysics Data System (ADS)
Ghil, M.
2007-05-01
We concentrate here on two aspects of advanced Kalman-filter-related methods: (i) the stability of the forecast-assimilation cycle, and (ii) parameter estimation for the coupled ocean-atmosphere system. The nonlinear stability of a prediction-assimilation system guarantees the uniqueness of the sequentially estimated solutions in the presence of partial and inaccurate observations, distributed in space and time; this stability is shown to be a necessary condition for the convergence of the state estimates to the true evolution of the turbulent flow. The stability properties of the governing nonlinear equations and of several data assimilation systems are studied by computing the spectrum of the associated Lyapunov exponents. These ideas are applied to a simple and an intermediate model of atmospheric variability and we show that the degree of stabilization depends on the type and distribution of the observations, as well as on the data assimilation method. These results represent joint work with A. Carrassi, A. Trevisan and F. Uboldi. Much is known by now about the main physical mechanisms that give rise to and modulate the El Niño/Southern Oscillation (ENSO), but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. Model behavior is very sensitive to two key parameters: (a) "mu", the ocean-atmosphere coupling coefficient between the sea-surface temperature (SST) and wind stress anomalies; and (b) "delta-s", the surface-layer coefficient. Previous work has shown that "delta-s" determines the period of the model's self-sustained oscillation, while "mu" measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed. These results arise from joint work with D. Kondrashov and C.-j. Sun.
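A toy sketch of the joint state-parameter estimation idea underlying the EKF approach described above, using state augmentation on a scalar linear-Gaussian system rather than the coupled ENSO model; the model, noise levels and random-walk parameter evolution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: x_{k+1} = a * x_k + w_k,   y_k = x_k + v_k,  with unknown a.
a_true, q, r = 0.9, 0.05, 0.2
xs, ys = [1.0], []
for _ in range(400):
    xs.append(a_true * xs[-1] + rng.normal(0, np.sqrt(q)))
    ys.append(xs[-1] + rng.normal(0, np.sqrt(r)))

# EKF on the augmented state z = [x, a]; the parameter evolves as a slow
# random walk so the filter can adjust it.
z = np.array([0.0, 0.5])                     # initial guesses for x and a
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-5])                       # tiny process noise on the parameter
H = np.array([[1.0, 0.0]])

for y in ys:
    # Predict: f(z) = [a*x, a]; Jacobian F = [[a, x], [0, 1]].
    x, a = z
    F = np.array([[a, x], [0.0, 1.0]])
    z = np.array([a * x, a])
    P = F @ P @ F.T + Q
    # Update with the scalar observation y = x + v.
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated parameter a ~", round(z[1], 3), "(true 0.9)")
```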
Virtual Volatility, an Elementary New Concept with Surprising Stock Market Consequences
NASA Astrophysics Data System (ADS)
Prange, Richard; Silva, A. Christian
2006-03-01
Textbook investors start by predicting the future price distribution, PDF, of a candidate stock (or portfolio) at horizon T, e.g. a year hence. A (log)normal PDF with center (= drift = expected return) μT and width (= volatility) σT is often assumed on Central Limit Theorem grounds, i.e. by a random walk of daily (log)price increments δs. The standard deviation, stdev, of historical (ex post) δs's is usually a fair predictor of the coming year's (ex ante) stdev(δs) = σ_daily, but the historical mean E(δs) at best roughly limits the true, to-be-predicted, drift by μ_true T ≈ μ_hist T ± σ_hist T. Textbooks take a PDF with σ ≈ σ_daily and μ as somehow known, as if accurate predictions of μ were possible. It is elementary and presumably new to argue that an average of PDFs over a range of μ values should be taken, e.g. an average over forecasts by different analysts. We estimate that this leads to a PDF with a 'virtual' volatility σ ≈ 1.3 σ_daily. It is indeed clear that uncertainty in the value of the expected gain parameter increases the risk of investment in that security by most measures; e.g. Sharpe's ratio μT/σT will be 30% smaller because of this effect. It is significant and surprising that there are investments which benefit from this 30% virtual increase in the volatility.
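A small numerical illustration, under assumed numbers rather than the paper's, of why averaging the price PDF over uncertain drift forecasts widens it: pooling return scenarios drawn with different μ values yields a "virtual" volatility larger than the plain horizon-scaled width.

```python
import numpy as np

rng = np.random.default_rng(2)

sigma_daily = 0.01          # daily return volatility (illustrative)
T = 252                     # trading days in the horizon
sigma_horizon = sigma_daily * np.sqrt(T)

# Uncertainty in the drift forecast, expressed as a spread of horizon drifts
# across hypothetical analysts (illustrative width, chosen here ~0.8*sigma_horizon).
mu_spread = 0.8 * sigma_horizon
mus = rng.normal(0.05, mu_spread, size=100_000)

# Simulate the horizon return for each drift scenario and pool them:
# the pooled distribution is the mu-averaged PDF described in the abstract.
returns = rng.normal(mus, sigma_horizon)

print("width with mu known exactly :", round(sigma_horizon, 4))
print("width of the mu-averaged PDF:", round(returns.std(), 4))
print("ratio (virtual/plain)       :", round(returns.std() / sigma_horizon, 2))
```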
Parameter Estimation in Astronomy with Poisson-Distributed Data. I. The χ²_γ Statistic
NASA Technical Reports Server (NTRS)
Mighell, Kenneth J.
1999-01-01
Applying the standard weighted mean formula, [Σ_i n_i σ_i^-2] / [Σ_i σ_i^-2], to determine the weighted mean of data n_i drawn from a Poisson distribution will, on average, underestimate the true mean by approximately 1 for all true mean values larger than approximately 3 when the common assumption is made that the error of the ith observation is σ_i = max(√n_i, 1). This small but statistically significant offset explains the long-known observation that chi-square minimization techniques which use the modified Neyman's χ² statistic, χ²_N ≡ Σ_i (n_i − y_i)² / max(n_i, 1), to compare Poisson-distributed data with model values, y_i, will typically predict a total number of counts that underestimates the true total by about 1 count per bin. Based on my finding that the weighted mean of data drawn from a Poisson distribution can be determined using the formula [Σ_i (n_i + min(n_i, 1))(n_i + 1)^-1] / [Σ_i (n_i + 1)^-1], I propose that a new χ² statistic, χ²_γ ≡ Σ_i [n_i + min(n_i, 1) − y_i]² / (n_i + 1), should always be used to analyze Poisson-distributed data in preference to the modified Neyman's χ² statistic. I demonstrated the power and usefulness of χ²_γ minimization by using two statistical fitting techniques and five χ² statistics to analyze simulated X-ray power-law 15-channel spectra with large and small numbers of counts per bin. I show that χ²_γ minimization with the Levenberg-Marquardt or Powell's method can produce excellent results (mean slope errors ≲ 3%) with spectra having as few as 25 total counts.
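A quick simulation (with illustrative sample sizes only) of the bias described above: the standard weighted mean with σ_i = max(√n_i, 1) undershoots the true Poisson mean by about one count, while the proposed formula does not.

```python
import numpy as np

rng = np.random.default_rng(3)

true_mean = 10.0
n = rng.poisson(true_mean, size=(100_000, 20))   # many sets of 20 Poisson counts

# Standard weighted mean with sigma_i = max(sqrt(n_i), 1): biased low by ~1.
sigma2 = np.maximum(n, 1).astype(float)          # sigma_i^2 = max(n_i, 1)
standard = (n / sigma2).sum(axis=1) / (1.0 / sigma2).sum(axis=1)

# Proposed formula: sum[(n_i + min(n_i,1))/(n_i+1)] / sum[1/(n_i+1)].
proposed = ((n + np.minimum(n, 1)) / (n + 1.0)).sum(axis=1) / (1.0 / (n + 1.0)).sum(axis=1)

print("true mean                   :", true_mean)
print("standard weighted mean (avg):", round(standard.mean(), 3))   # ~ true_mean - 1
print("proposed formula       (avg):", round(proposed.mean(), 3))   # ~ true_mean
```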
Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat
2013-07-01
The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
Huysal, Kağan; Budak, Yasemin U; Karaca, Ayse Ulusoy; Aydos, Murat; Kahvecioğlu, Serdar; Bulut, Mehtap; Polat, Murat
2013-01-01
Urinary tract infection (UTI) is one of the most common types of infection. Currently, diagnosis is primarily based on microbiologic culture, which is time-consuming and labor-intensive. The aim of this study was to assess the diagnostic accuracy of urinalysis results from UriSed (77 Electronica, Budapest, Hungary), an automated microscopic image-based sediment analyzer, in predicting positive urine cultures. We examined a total of 384 urine specimens from hospitalized patients and outpatients attending our hospital on the same day for urinalysis, dipstick tests and semi-quantitative urine culture. The urinalysis results were compared with those of conventional semi-quantitative urine culture. Of the 384 urinary specimens, 68 were positive for bacteriuria by culture and were thus considered true positives. Comparison of these results with those obtained from the UriSed analyzer indicated that the analyzer had a specificity of 91.1%, a sensitivity of 47.0%, a positive predictive value (PPV) of 53.3% (95% confidence interval (CI) = 40.8-65.3%), and a negative predictive value (NPV) of 88.8% (95% CI = 85.0-91.8%). The accuracy was 83.3% when the urine leukocyte parameter was used, 76.8% when bacteriuria analysis of the urinary sediment was used, and 85.1% when the bacteriuria and leukocyturia parameters were combined. The presence of nitrite was the best indicator of culture positivity (99.3% specificity) but had a negative likelihood ratio of 0.7, indicating that it was not a reliable clinical test. Although the specificity of the UriSed analyzer was within acceptable limits, the sensitivity value was low. Thus, UriSed urinalysis results do not accurately predict the outcome of culture.
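For reference, the reported performance figures follow from a 2×2 contingency table; the sketch below recomputes them from counts reconstructed approximately from the stated percentages (the raw table is not given in the abstract, so the counts are approximate).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, predictive values and accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Approximate counts consistent with 384 specimens, 68 culture positives and
# the reported sensitivity/specificity/PPV/NPV.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=32, fp=28, tn=288, fn=36)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}, accuracy {acc:.1%}")
```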
Sudarski, Sonja; Henzler, Thomas; Floss, Teresa; Gaa, Tanja; Meyer, Mathias; Haubenreisser, Holger; Schoenberg, Stefan O; Attenberger, Ulrike I
2018-05-02
To compare, in patients with untreated rectal cancer, quantitative perfusion parameters calculated from 3rd-generation dual-source dynamic volume perfusion CT (dVPCT) with 3-Tesla-MR-perfusion with regard to data variability and tumour differentiation. In MR-perfusion, plasma flow (PF), plasma volume (PV) and mean transit time (MTT) were assessed in two measurements (M1 and M2) by the same reader. In dVPCT, blood flow (BF), blood volume (BV), MTT and permeability (PERM) were assessed respectively. CT dose values were calculated. 20 patients (60 ± 13 years) were analysed. Intra-individual and intra-reader variability of duplicate MR-perfusion measurements was higher compared to duplicate dVPCT measurements. dVPCT-derived BF, BV and PERM could differentiate between tumour and normal rectal wall (significance level for M1 and M2, respectively, regarding BF: p < 0.0001*/0.0001*; BV: p < 0.0001*/0.0001*; MTT: p = 0.93/0.39; PERM: p < 0.0001*/0.0001*); with MR-perfusion this was true for PF and PV (p-values M1/M2 for PF: p = 0.04*/0.01*; PV: p = 0.002*/0.003*; MTT: p = 0.70/0.27*). Mean effective dose of CT-staging incl. dVPCT was 29 ± 6 mSv (20 ± 5 mSv for dVPCT alone). In conclusion, dVPCT has a lower data variability than MR-perfusion while both dVPCT and MR-perfusion could differentiate tumour tissue from normal rectal wall. With 3rd-generation dual-source CT, dVPCT could be included in a standard CT-staging without exceeding national dose reference values.
Consistent use of the standard model effective potential.
Andreassen, Anders; Frost, William; Schwartz, Matthew D
2014-12-12
The stability of the standard model is determined by the true minimum of the effective Higgs potential. We show that the potential at its minimum, when computed by the traditional method, is strongly dependent on the gauge parameter. It moreover depends on the scale where the potential is calculated. We provide a consistent method for determining absolute stability independent of both gauge and calculation scale, order by order in perturbation theory. This leads to revised stability bounds m_h^pole > (129.4 ± 2.3) GeV and m_t^pole < (171.2 ± 0.3) GeV. We also show how to evaluate the effect of new physics on the stability bound without resorting to unphysical field values.
Micro-Doppler Ambiguity Resolution for Wideband Terahertz Radar Using Intra-Pulse Interference.
Yang, Qi; Qin, Yuliang; Deng, Bin; Wang, Hongqiang; You, Peng
2017-04-29
Micro-Doppler, induced by micro-motion of targets, is an important characteristic of target recognition once extracted via parameter estimation methods. However, micro-Doppler is usually too significant to result in ambiguity in the terahertz band because of its relatively high carrier frequency. Thus, a micro-Doppler ambiguity resolution method for wideband terahertz radar using intra-pulse interference is proposed in this paper. The micro-Doppler can be reduced several dozen times its true value to avoid ambiguity through intra-pulse interference processing. The effectiveness of this method is proved by experiments based on a 0.22 THz wideband radar system, and its high estimation precision and excellent noise immunity are verified by Monte Carlo simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, L.E.; Brown, J.R.
1977-01-01
Parameters of the mating call of spring peepers (Hyla crucifer) were best predicted by water temperature rather than air or body temperature. Thus, water temperature should most closely approach the true body temperature of the calling frogs.
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on the galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
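A minimal rejection-ABC sketch of the general recipe described above (prior draws, forward simulation, distance, tolerance), applied to a toy one-parameter model rather than an HOD forward model; the population Monte Carlo refinement used in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the LSS problem: the "observation" is a summary statistic
# (here just a mean count) produced by a forward model with one unknown
# parameter; the real application uses an HOD forward model and richer
# summaries (number density, two-point function, group multiplicity).
def forward_model(theta, size=500):
    return rng.poisson(theta, size=size).mean()

theta_true = 4.0
observed_summary = forward_model(theta_true)

def distance(simulated, observed):
    return abs(simulated - observed)

# Plain rejection ABC: draw from the prior, simulate, keep draws whose
# simulated summary lies within a tolerance of the observed one.
prior_draws = rng.uniform(0.0, 10.0, size=50_000)
kept = np.array([t for t in prior_draws
                 if distance(forward_model(t), observed_summary) < 0.1])

print(f"accepted {kept.size} draws; posterior mean ~ {kept.mean():.2f} (true {theta_true})")
```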
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
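For concreteness, a sketch of BIC-based model selection of the kind described above; the log-likelihood values and parameter counts are made-up placeholders, not fits to real photon trajectories.

```python
import numpy as np

def bic(log_likelihood, n_params, n_photons):
    """Bayesian information criterion: lower is better.  The number of observed
    photons plays the role of the sample size, matching the abstract's point
    that photon count drives model selection."""
    return n_params * np.log(n_photons) - 2.0 * log_likelihood

# Illustrative comparison of two-, three- and four-state fits to one trajectory
# (log-likelihoods and parameter counts are placeholders).
fits = {"2-state": (-10450.0, 2), "3-state": (-10380.0, 6), "4-state": (-10377.0, 12)}
n_photons = 10_000
scores = {name: bic(ll, k, n_photons) for name, (ll, k) in fits.items()}
print("selected:", min(scores, key=scores.get), scores)
```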
Linear system identification via backward-time observer models
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1992-01-01
Presented here is an algorithm to compute the Markov parameters of a backward-time observer for a backward-time model from experimental input and output data. The backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) for the backward-time system identification. The identified backward-time system Markov parameters are used in the Eigensystem Realization Algorithm to identify a backward-time state-space model, which can be easily converted to the usual forward-time representation. If one reverses time in the model to be identified, what were damped true system modes become modes with negative damping, growing as the reversed time increases. On the other hand, the noise modes in the identification still maintain the property that they are stable. The shift from positive damping to negative damping of the true system modes allows one to distinguish these modes from noise modes. Experimental results are given to illustrate when and to what extent this concept works.
2017-01-01
Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient.
JPRS Report, Soviet Union. Kommunist, No. 5, March 1987
1987-07-13
discipline and order and increasing openness, criticism and self-criticism; it means respect for the values and dignity of the individual. True democracy ... conditions of true democracy and to develop it comprehensively and find new forms of its manifestation is impossible without knowledge of the law which ... new society one of true democracy and greatest possible morality and culture, ruled by justice, equality and the law. The inviolable nature of these
De Bondt, Pieter; Nichols, Kenneth; Vandenberghe, Stijn; Segers, Patrick; De Winter, Olivier; Van de Wiele, Christophe; Verdonck, Pascal; Shazad, Arsalan; Shoyeb, Abu H; De Sutter, Johan
2003-06-01
We have developed a biventricular dynamic physical cardiac phantom to test gated blood-pool (GBP) SPECT image-processing algorithms. Such phantoms provide absolute values against which to assess the accuracy of both right and left computed ventricular volume and ejection fraction (EF) measurements. Two silicone-rubber chambers driven by 2 piston pumps simulated crescent-shaped right ventricles wrapped partway around ellipsoid left ventricles. Twenty experiments were performed at Ghent University, for which the right and left ventricular true volume ranges were 65-275 mL and 55-165 mL, and the true EF ranges were 7%-49% and 12%-69%, respectively. The resulting 64 x 64 simulated GBP SPECT images, acquired at 16 frames per R-R interval, were sent to Columbia University, where 2 observers analyzed the images independently of each other, without knowledge of the true values. Algorithms automatically segmented right ventricular activity volumetrically from left ventricular activity. Automated valve planes, midventricular planes, and segmentation regions were presented to the observers, who accepted these choices or modified them as necessary. One observer repeated the measurements >1 mo later without reference to the previous determinations. Linear correlation coefficients (r) of the mean of the 3 GBP SPECT observations versus the true values for the right and left ventricles were 0.80 and 0.94 for EF and 0.94 and 0.95 for volumes, respectively. Correlations for the right and left ventricles were 0.97 and 0.97 for EF and 0.96 and 0.89 for volumes, respectively, for interobserver agreement, and 0.97 and 0.98 for EF and 0.96 and 0.90 for volumes, respectively, for intraobserver agreement. No trends were detected, though volumes and right ventricular EFs were significantly higher than the true values. Overall, GBP SPECT measurements correlated strongly with the true values. The phantom evaluated shows considerable promise for helping to guide algorithm developments for improved GBP SPECT accuracy.
NASA Technical Reports Server (NTRS)
Reginald, Nelson L.; Davilla, Joseph M.; St. Cyr, O. C.; Rastaetter, Lutz
2014-01-01
We examine the uncertainties in two plasma parameters from their true values in a simulated asymmetric corona. We use the Corona Heliosphere (CORHEL) and Magnetohydrodynamics Around the Sphere (MAS) models in the Community Coordinated Modeling Center (CCMC) to investigate the differences between an assumed symmetric corona and a more realistic, asymmetric one. We were able to predict the electron temperatures and electron bulk flow speeds to within +/-0.5 MK and +/-100 km s^-1, respectively, over coronal heights up to 5.0 solar radii from Sun center. We believe that this technique could be incorporated in next-generation white-light coronagraphs to determine these electron plasma parameters in the low solar corona. We have conducted experiments in the past during total solar eclipses to measure the thermal electron temperature and the electron bulk flow speed in the radial direction in the low solar corona. These measurements were made at different altitudes and latitudes in the low solar corona by measuring the shape of the K-coronal spectra between 350 nm and 450 nm and two brightness ratios through filters centered at 385.0 nm/410.0 nm and 398.7 nm/423.3 nm with a bandwidth of approximately 4 nm. Based on the symmetric coronal models used for these measurements, the two measured plasma parameters were expected to represent the values at the points where the lines of sight intersected the plane of the solar limb.
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability to obtain an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
Zhang, X Y; Li, H; Zhao, Y J; Wang, Y; Sun, Y C
2016-07-01
To quantitatively evaluate the quality and accuracy of three-dimensional (3D) data acquired by using two kinds of structured-light intra-oral scanner to scan typical tooth crown preparations. Eight typical tooth crown preparation models were scanned 3 times with two kinds of structured-light intra-oral scanner (A, B), forming the test groups. A high-precision model scanner was used to scan the models to provide the true-value group. The data above the cervical margin were extracted. The quality indexes, including non-manifold edges, self-intersections, highly-creased edges, spikes, small components, small tunnels, small holes and the number of triangles, were measured with the Mesh Doctor tool in Geomagic Studio 2012. The scanned data of the test groups were aligned to the data of the true-value group. 3D deviations of the test groups from the true-value group were measured for each scanned point, each preparation and each group. The independent-samples Mann-Whitney U test was applied to analyze the 3D deviations for each scanned point of groups A and B. Correlation analysis was applied to the index values and the 3D deviation values. The total number of spikes in group A was 96, while those in group B and the true-value group were 5 and 0, respectively. Trueness: group A 8.0 (8.3) μm, group B 9.5 (11.5) μm (P>0.05). The correlation of the number of spikes with the data precision of group A was r=0.46. In this study, the quality of scanner B was better than that of scanner A; the difference in accuracy was not statistically significant. There is a correlation between the quality and the precision of the data scanned with scanner A.
Redefining Health: Implication for Value-Based Healthcare Reform.
Putera, Ikhwanuliman
2017-03-02
Health definition consists of three domains namely, physical, mental, and social health that should be prioritized in delivering healthcare. The emergence of chronic diseases in aging populations has been a barrier to the realization of a healthier society. The value-based healthcare concept seems in line with the true health objective: increasing value. Value is created from health outcomes which matter to patients relative to the cost of achieving those outcomes. The health outcomes should include all domains of health in a full cycle of care. To implement value-based healthcare, transformations need to be done by both health providers and patients: establishing true health outcomes, strengthening primary care, building integrated health systems, implementing appropriate health payment schemes that promote value and reduce moral hazards, enabling health information technology, and creating a policy that fits well with a community.
The effects of clutter-rejection filtering on estimating weather spectrum parameters
NASA Technical Reports Server (NTRS)
Davis, W. T.
1989-01-01
The effects of clutter-rejection filtering on estimating the weather parameters from pulse Doppler radar measurement data are investigated. The pulse pair method of estimating the spectrum mean and spectrum width of the weather is emphasized. The loss of sensitivity, a measure of the signal power lost due to filtering, is also considered. A flexible software tool developed to investigate these effects is described. It allows for simulated weather radar data, in which the user specifies an underlying truncated Gaussian spectrum, as well as for externally generated data which may be real or simulated. The filter may be implemented in either the time or the frequency domain. The software tool is validated by comparing unfiltered spectrum mean and width estimates to their true values, and by reproducing previously published results. The effects on the weather parameter estimates using simulated weather-only data are evaluated for five filters: an ideal filter, two infinite impulse response filters, and two finite impulse response filters. Results considering external data, consisting of weather and clutter data, are evaluated on a range cell by range cell basis. Finally, it is shown theoretically and by computer simulation that a linear phase response is not required for a clutter rejection filter preceding pulse-pair parameter estimation.
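A sketch of the pulse-pair estimators referred to above, computing the mean Doppler frequency and spectrum width from the lag-0 and lag-1 autocorrelations of an I/Q series; the width expression is one common Gaussian-spectrum variant with no noise correction, and the test signal is synthetic.

```python
import numpy as np

def pulse_pair(iq, prt):
    """Classic pulse-pair estimates from a complex I/Q time series.

    Returns (mean Doppler frequency [Hz], spectrum width [Hz]) using the
    lag-0 and lag-1 autocorrelation estimates; conventions differ between
    texts, and no noise correction is applied here."""
    iq = np.asarray(iq)
    r0 = np.mean(np.abs(iq) ** 2)                 # lag-0 autocorrelation (power)
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))       # lag-1 autocorrelation
    mean_freq = np.angle(r1) / (2.0 * np.pi * prt)
    ratio = max(np.abs(r1) / r0, 1e-12)
    width = np.sqrt(2.0) / (2.0 * np.pi * prt) * np.sqrt(max(np.log(1.0 / ratio), 0.0))
    return mean_freq, width

# Quick check on synthetic data with a known Doppler shift plus noise.
rng = np.random.default_rng(5)
prt, f_true = 1e-3, 120.0                         # 1 ms PRT, 120 Hz shift
t = np.arange(256) * prt
iq = np.exp(2j * np.pi * f_true * t) + 0.1 * (rng.normal(size=256) + 1j * rng.normal(size=256))
print([round(float(v), 1) for v in pulse_pair(iq, prt)])
```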
Suitability of the AUC Ratio as an Indicator of the Pharmacokinetic Advantage in HIPEC.
Mas-Fuster, Maria Isabel; Ramon-Lopez, Amelia; Lacueva, Javier; Más-Serrano, Patricio; Nalda-Molina, Ricardo
2018-02-01
The purpose of this study was to evaluate the area under the concentration-time curve (AUC) ratio as an optimal indicator of the pharmacokinetic advantage during hyperthermic intraperitoneal perioperative chemotherapy. The impact on the AUC ratio of the variables related to the calculation of systemic drug exposure, instillation time, and peripheral drug distribution was evaluated through simulations as well as through a retrospective analysis of studies published in the literature. Both the model simulations and the retrospective analysis showed that the 3 variables evaluated had an impact on the AUC ratio value if the complete systemic exposure was not fully considered. However, when that complete systemic exposure was considered, none of these variables affected the AUC ratio value. The AUC ratio is not a characteristic parameter of a drug if the calculated systemic drug exposure is not complete. Thus, the AUC ratio is not valid for comparing the pharmacokinetic advantage of 2 drugs, and it should not be employed to prove whether a drug can be used in hyperthermic intraperitoneal perioperative chemotherapy safely with regard to toxicity. As an alternative, the study of the absorption rate constant and the bioavailability is proposed; these are the true and independent parameters that reflect the amount of drug absorbed. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
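For illustration, the AUC ratio is simply the ratio of two trapezoidal areas under concentration-time curves; the profiles below are invented, and, as the abstract stresses, truncating the systemic (plasma) curve before the exposure is complete inflates the ratio.

```python
import numpy as np

def auc_trapezoid(times_h, concentrations):
    """Area under the concentration-time curve by the trapezoidal rule."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

# Illustrative (made-up) perfusate and plasma concentration-time profiles
# during and after an intraperitoneal instillation; units are arbitrary.
t          = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 4.0, 8.0])
peritoneal = np.array([100.0, 80.0, 65.0, 52.0, 42.0, 15.0, 4.0, 0.5])
plasma     = np.array([0.0, 2.0, 3.5, 4.2, 4.5, 3.0, 1.2, 0.2])

auc_ratio = auc_trapezoid(t, peritoneal) / auc_trapezoid(t, plasma)
print(round(auc_ratio, 1))
```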
2017-12-01
values designating each stimulus as a target (true) or nontarget (false). Both stim_time and stim_label should have length equal to the number of ... depend strongly on the true values of hit rate and false-alarm rate. Based on its better estimation of hit rate and false-alarm rate, the regression ...
Lee, Byung Bo; Yang, Tae Sung; Goo, Doyun; Choi, Hyeon Seok; Pitargue, Franco Martinez; Jung, Hyunjung; Kil, Dong Yong
2018-04-01
This experiment was conducted to determine the effects of dietary β-mannanase on the additivity of true metabolizable energy (TME) and nitrogen-corrected true metabolizable energy (TMEn) for broiler diets. A total of 144 21-day-old broilers were randomly allotted to 12 dietary treatments with 6 replicates. Five treatments consisted of 5 ingredients of corn, wheat, soybean meal, corn distillers dried grains with solubles, or corn gluten meal. One mixed diet containing 200 g/kg of those 5 ingredients also was prepared. Additional 6 treatments were prepared by mixing 0.5 g/kg dietary β-mannanase with those 5 ingredients and the mixed diet. Based on a precision-fed chicken assay, TME and TMEn values for 5 ingredients and the mixed diet as affected by dietary β-mannanase were determined. Results indicated that when β-mannanase was not added to the diet, measured TME and TMEn values for the diet did not differ from the predicted values for the diet, which validated the additivity. However, for the diet containing β-mannanase, measured TMEn value was greater (p<0.05) than predicted TMEn value, indicating that the additivity was not validated. In conclusion, the additivity of energy values for the mixed diet may not be guaranteed if the diet contains β-mannanase.
Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.
Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan
2013-02-01
A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of SOP on geometric parameters, which will converge to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to the noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors of the latter two parameters brought by our method will hardly degrade the quality of reconstructed images. The small animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that we have achieved comparable image quality in the reconstructions as some offline methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Wang, Y
Purpose: Due to limited commissioning time, we previously only released our TrueBeam non-FFF mode for prostate treatment. Clinical demand now pushes us to release the non-FFF mode for SRT/SBRT treatment. When re-planning, on the TrueBeam, SRT/SBRT cases previously treated on the iX machine, we found that the patient-specific QA pass rate was worse than the iX's, though the 2 Gy/fx prostate results had been as good. We hypothesize that the TrueBeam DLG and MLC transmission values in the TPS, as measured during commissioning, could not yet provide accurate SRS/SBRT dosimetry. Hence this work investigates how the TPS DLG and transmission values affect Rapid Arc plans' dosimetric accuracy. Methods: We increased the TrueBeam DLG and transmission values in the TPS such that their percentage differences from the measured values matched those of the iX. We re-calculated 2 SRT, 1 SBRT and 2 prostate plans, performed patient-specific QA on these new plans and compared the results with the previous ones. Results: With the DLG and transmission value set 40% and 8% higher than the measured values, respectively, the patient-specific QA pass rate (at 3%/3 mm) improved from 95.0% to 97.6%, versus the previous iX value of 97.8%, in the case of SRT. In the case of SBRT, the pass rate improved from 75.2% to 93.9%, versus the previous iX value of 92.5%. In the case of prostate, the pass rate improved from 99.3% to 100%. The maximum dose difference in plans before and after adjusting DLG and transmission was approximately 1% of the prescription dose among all plans. Conclusion: The impact of adjusting the DLG and transmission value on dosimetry might be the same among all Rapid Arc plans regardless of whether they are hypofractionated or not. The large variation observed in the patient-specific QA pass rate might be due to the data analysis method in the QA software being more sensitive to hypofractionated plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Y; Fuller, C; Mohamed, A
2015-06-15
Purpose: Many published studies have recently demonstrated the potential value of intravoxel incoherent motion (IVIM) analysis for disease evaluation. However, few have questioned its measurement repeatability/reproducibility when applied. The purpose of this study was to determine the short-term measurement repeatability of the apparent diffusion coefficient ADC, true diffusion coefficient D, pseudodiffusion coefficient D* and perfusion fraction f in head and neck squamous cell carcinoma (HNSCC) primary tumors and metastatic nodes. Methods: Ten patients with known HNSCC were examined twice using echo-planar DW-MRI with 12 b values (0 to 800 s/mm2) 1 hour to 24 hours apart before radiation treatment. All patients were scanned with customized radiation treatment immobilization devices to reduce motion artifacts and to improve image registration between repeat scans. Regions of interest were drawn in the primary tumor and a metastatic node in each patient (Fig. 1). ADC and the IVIM parameters D, D* and f were calculated by least-squares data fitting. Short-term test-retest repeatability of ADC and the IVIM parameters was assessed by measuring Bland-Altman limits of agreement (BA-LA). Results: Sixteen HNSCC lesions were assessed in 10 patients. Repeatability of the perfusion-sensitive parameters, D* and f, in HNSCC lesions was poor (BA-LA: -144% to 88% and -57% to 96% for D* and f, respectively); this was true to a lesser extent for the diffusion-sensitive parameters ADC and D (BA-LA: -34% to 39% and -37% to 40% for ADC and D, respectively) (Fig. 2). Conclusion: Poor repeatability of D*/f and good repeatability of ADC/D were observed in HNSCC primary tumors and metastatic nodes. Efforts should be made to improve the measurement repeatability of the perfusion-sensitive IVIM parameters.
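A sketch of fitting the biexponential IVIM model named above to a synthetic 12-point b-value series; the parameter values, noise level and fitting bounds are illustrative assumptions, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Biexponential IVIM signal model S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

# Synthetic signal over 12 b-values (s/mm^2) with plausible tissue parameters
# and a little noise.
b = np.array([0, 10, 20, 40, 60, 80, 100, 200, 400, 600, 700, 800], float)
rng = np.random.default_rng(6)
signal = ivim(b, f=0.12, d_star=0.02, d=0.0010) + rng.normal(0, 0.01, b.size)

# Least-squares fit; the instability of D* and f under noise is one source of
# the poor repeatability reported for the perfusion-sensitive parameters.
p0 = (0.1, 0.01, 0.001)
bounds = ([0.0, 0.001, 1e-4], [0.5, 0.5, 3e-3])
(f_hat, dstar_hat, d_hat), _ = curve_fit(ivim, b, signal, p0=p0, bounds=bounds)
print(f"f={f_hat:.3f}  D*={dstar_hat:.4f}  D={d_hat:.5f} mm^2/s")
```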
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
Statistical fusion of continuous labels: identification of cardiac landmarks
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.
2011-03-01
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle to the left ventricle in CINE cardiac data.
Statistical Fusion of Continuous Labels: Identification of Cardiac Landmarks.
Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L; Landman, Bennett A
2011-01-01
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle to the left ventricle in CINE cardiac data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, A
Purpose: Novel linac machines, such as the Varian TrueBeam (TB) and Elekta Versa, have updated head designs and software control systems and include flattening-filter-free (FFF) photon and electron beams. Later on, FFF beams were also introduced on C-Series machines. In this work, FFF beams of the same energy (6 MV) but from different machine versions were studied with reference to beam data parameters. Methods: The 6MV-FFF percent depth doses, profile symmetry and flatness, dose rate tables, and multi-leaf collimator (MLC) transmission factors were measured during the commissioning process of both C-Series and TrueBeam machines. The scanning and dosimetric data for the 6MV-FFF beam from the TrueBeam and C-Series linacs were compared. A correlation of the 6MV-FFF beam from the Elekta Versa with that of the Varian linacs was also found. Results: The scanning files were plotted for both qualitative and quantitative analysis. The dosimetric leaf gap (DLG) for the C-Series 6MV-FFF beam is 1.1 mm. The published value for the TrueBeam dosimetric leaf gap is 1.16 mm. The 6MV MLC transmission factor varies between 1.3% and 1.4% in two separate measurements, and measured DLG values vary between 1.32 mm and 1.33 mm on the C-Series machine. The MLC transmission factor from the C-Series machine varies between 1.5% and 1.6%. Some of the measured data values from the C-Series FFF beam are compared with TrueBeam representative data. 6MV-FFF beam parameter values such as dmax, OP factors, beam symmetry and flatness, and additional parameters for the C-Series and TrueBeam linacs will be presented and compared in graphical and tabular form if selected. Conclusion: The 6MV flattening filter (FF) beam data from the C-Series and TrueBeam and the 6MV-FFF beam data from the TrueBeam have already been presented. This particular analysis comparing the 6MV-FFF beam from the C-Series and TrueBeam provides an opportunity to better elaborate the FFF mode on novel machines. It was found that the C-Series and TrueBeam 6MV-FFF dosimetric and beam data were quite similar.
Deriving and Constraining 3D CME Kinematic Parameters from Multi-Viewpoint Coronagraph Images
NASA Astrophysics Data System (ADS)
Thompson, B. J.; Mei, H. F.; Barnes, D.; Colaninno, R. C.; Kwon, R.; Mays, M. L.; Mierla, M.; Moestl, C.; Richardson, I. G.; Verbeke, C.
2017-12-01
Determining the 3D properties of a coronal mass ejection using multi-viewpoint coronagraph observations can be a tremendously complicated process. There are many factors that inhibit the ability to unambiguously identify the speed, direction and shape of a CME. These factors include the need to separate the "true" CME mass from shock-associated brightenings, distinguish between non-radial or deflected trajectories, and identify asymmetric CME structures. Additionally, different measurement methods can produce different results, sometimes with great variations. Part of the reason for the wide range of values that can be reported for a single CME is the difficulty in determining the CME's longitude, since uncertainty in the angle of the CME relative to the observing image planes results in errors in the speed and topology of the CME. Often the errors quoted in an individual study are remarkably small when compared to the range of values that are reported by different authors for the same CME. For example, two authors may report speeds of 700 ± 50 km/s and 500 ± 50 km/s for the same CME. Clearly a better understanding of the accuracy of CME measurements, and an improved assessment of the limitations of the different methods, would be of benefit. We report on a survey of CME measurements, wherein we compare the values reported by different authors and catalogs. The survey will allow us to establish typical errors for the parameters that are commonly used as inputs for CME propagation models such as ENLIL and EUHFORIA. One way modelers handle inaccuracies in CME parameters is to use an ensemble of CMEs, sampled across ranges of latitude, longitude, speed and width. The CMEs are simulated in order to determine the probability of a "direct hit" and, for the cases with a "hit," to derive a range of possible arrival times. Our study will provide improved guidelines for generating CME ensembles that more accurately sample across the range of plausible values.
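A minimal sketch of the ensemble-generation idea described above: sample CME latitude, longitude, speed and half-width around the measured values before passing the members to a propagation model. The spreads below are placeholders, not the survey-derived uncertainties:

    import numpy as np

    rng = np.random.default_rng(0)
    n_members = 100

    # measured CME parameters and assumed (hypothetical) 1-sigma spreads
    measured = {"lat_deg": 5.0, "lon_deg": -10.0, "speed_kms": 700.0, "half_width_deg": 35.0}
    spread   = {"lat_deg": 5.0, "lon_deg": 10.0,  "speed_kms": 100.0, "half_width_deg": 10.0}

    ensemble = {key: rng.normal(measured[key], spread[key], n_members) for key in measured}

    # each row is one ensemble member that could be passed to a model such as ENLIL
    members = np.column_stack([ensemble[k] for k in measured])
    print(members[:3])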
2010-06-01
z ∈ dom(Y) • true. A link is denoted by a function linki(X, Y), where X and Y are formal parameters representing entities, and is evaluated as true or... X/rc ∈ dom(Y), • linki(Y, Z), and • τ(X)/r:c ∈ fi(τ(Y), τ(Z)). The addition of the filter, made possible by the protection types, distinguishes the
Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.
Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2012-11-08
A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
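A minimal sketch of the simulation idea described above, under simplifying assumptions (an isotropic Gaussian PSF and a synthetic spherical nodule stand in for the measured scanner PSF and the clinical object function):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Object function: an 8 mm diameter nodule of density 100 (arbitrary units) on a fine 0.2 mm grid
    grid = 0.2                                   # mm per voxel of the fine grid
    half = 48
    z, y, x = np.mgrid[-half:half, -half:half, -half:half] * grid
    nodule = np.where(np.sqrt(x**2 + y**2 + z**2) <= 4.0, 100.0, 0.0)

    # "Scan": blur with the PSF (here an assumed Gaussian), then resample to 0.6 mm spacing
    psf_sigma_mm = 0.8                           # hypothetical PSF width, not a measured value
    image = gaussian_filter(nodule, sigma=psf_sigma_mm / grid)
    hrct = image[::3, ::3, ::3]                  # 0.2 mm * 3 = 0.6 mm pixel size / slice interval

    # ROI measurement at the nodule centre, as in routine clinical practice
    c = hrct.shape[0] // 2
    roi = hrct[c, c - 3:c + 4, c - 3:c + 4]      # small 2D ROI on the central slice
    print("measured density:", round(float(roi.mean()), 1), "vs true value: 100.0")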
Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models
NASA Astrophysics Data System (ADS)
Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel
2014-07-01
We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available Python package.
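A toy sketch of the online surrogate evaluation described above; a damped-sinusoid family, an SVD basis and polynomial coefficient fits stand in for the paper's fiducial waveforms, greedy basis and empirical interpolation, so this only illustrates the h(t; lambda) ~ sum_i c_i(lambda) e_i(t) structure and its O(mL + m c_fit) online cost:

    import numpy as np

    t = np.linspace(0.0, 10.0, 2000)                      # L = 2000 time samples
    def waveform(lam):                                    # toy "fiducial" waveform family
        return np.exp(-0.1 * lam * t) * np.sin(lam * t)

    # offline: basis from a coarse training set (SVD used here for brevity, not a greedy sweep)
    training_params = np.linspace(1.0, 3.0, 30)
    training = np.array([waveform(p) for p in training_params])
    U, S, Vt = np.linalg.svd(training, full_matrices=False)
    m = 8
    basis = Vt[:m]                                        # e_i(t), shape (m, L)

    # coefficients of each training waveform in the basis, fitted versus the parameter
    coeffs = training @ basis.T                           # shape (30, m)
    fits = [np.polyfit(training_params, coeffs[:, i], deg=6) for i in range(m)]

    # online: fast evaluation at a new parameter
    def surrogate(lam):
        c = np.array([np.polyval(f, lam) for f in fits])  # c_i(lambda)
        return c @ basis                                  # sum_i c_i(lambda) e_i(t)

    lam_new = 2.37
    print("max surrogate error:", np.max(np.abs(surrogate(lam_new) - waveform(lam_new))))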
Stewart, Jessica N; McGillivray, David; Sussman, John; Foster, Bethany
2008-10-01
Blood pressure (BP) is measured at triage in most emergency departments (EDs). We aimed to determine the value of triage BP in diagnosing hypotension and true hypertension in children age ≥ 3 years presenting with nonurgent problems. In this prospective study, eligible children underwent automated BP measurement at triage. If BP was elevated, then the measurement was repeated manually. Children with a high manual BP were followed. True hypertension was defined as a manual BP >95th percentile for sex, age, and height measured on 3 occasions. Automated triage BP was measured in 549 children (53.4% male; mean age, 9.4 ± 4.3 years) and was found to be elevated in 144 of them (26%). No child was hypotensive. Among the 495 patients with complete follow-up, the specificity and positive predictive value (PPV) of elevated triage BP in diagnosing true hypertension were 81.8% and 0%, respectively. A sensitivity analysis including those with incomplete follow-up, in which the population prevalence of true hypertension was assumed to be 1% to 2%, resulted in a specificity of 74.5% to 75.3% and a PPV of 3.8% to 7.5%. The yield of measuring BP at triage in children with nonurgent problems appears to be extremely low.
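For illustration, specificity and positive predictive value follow directly from the 2x2 table of screening result versus final diagnosis; the counts below are hypothetical, not the study data:

    def screening_metrics(tp, fp, tn, fn):
        # standard definitions from a 2x2 table of screen result vs. final diagnosis
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        return sensitivity, specificity, ppv

    # e.g. 0 true hypertensives detected, 90 false positives, 405 true negatives, 0 missed
    print(screening_metrics(tp=0, fp=90, tn=405, fn=0))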
Geophysical Assessment of Groundwater Potential: A Case Study from Mian Channu Area, Pakistan.
Hasan, Muhammad; Shang, Yanjun; Akhter, Gulraiz; Jin, Weijun
2017-11-17
An integrated study using a geophysical method in combination with pumping tests and a geochemical method was carried out to delineate groundwater potential zones in the Mian Channu area of Pakistan. Vertical electrical soundings (VES) using the Schlumberger configuration with a maximum current electrode spacing (AB/2 = 200 m) were conducted at 50 stations, and 10 pumping tests were performed at borehole sites in close proximity to 10 of the VES stations. The aim of this study is to establish a correlation between the hydraulic parameters obtained from the geophysical method and pumping tests so that the aquifer potential can be estimated from the geoelectrical surface measurements where no pumping tests exist. The aquifer parameters, namely transmissivity and hydraulic conductivity, were estimated from Dar Zarrouk parameters by interpreting the layer parameters such as true resistivities and thicknesses. A geoelectrical succession of five-layer strata (i.e., topsoil, clay, clay sand, sand, and sand gravel) with sand as the dominant lithology was found in the study area. Physicochemical parameters, interpreted according to World Health Organization and Food and Agriculture Organization standards, were well correlated with the aquifer parameters obtained by the geoelectrical method and pumping tests. The aquifer potential zones identified by modeled resistivity, Dar Zarrouk parameters, pumped aquifer parameters, and physicochemical parameters reveal that sand and gravel sand with high values of transmissivity and hydraulic conductivity are highly promising water-bearing layers in the northwest of the study area. The strong correlation between estimated and pumped aquifer parameters suggests that, in case of sparse well data, the geophysical technique is useful to estimate the hydraulic potential of an aquifer with varying lithology. © 2017, National Ground Water Association.
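A minimal sketch of how Dar Zarrouk parameters lead to aquifer estimates; the layer values, the K*sigma calibration constant and the use of the commonly cited Niwas and Singhal (1981) relation T = K*sigma*R are illustrative assumptions, not the paper's fitted model:

    import numpy as np

    rho = np.array([45.0, 20.0, 35.0, 80.0, 120.0])   # layer true resistivities (ohm-m), hypothetical
    h   = np.array([1.5, 4.0, 10.0, 25.0, 40.0])      # layer thicknesses (m), hypothetical

    S = np.sum(h / rho)          # Dar Zarrouk longitudinal conductance (siemens)
    R = np.sum(h * rho)          # Dar Zarrouk transverse resistance (ohm-m^2)

    K_sigma = 0.8e-3             # (m/day)*(S/m), hypothetical calibration from a nearby pumping test
    T = K_sigma * R              # transmissivity estimate (m^2/day) under these assumptions
    K = T / np.sum(h)            # bulk hydraulic conductivity (m/day)
    print(f"S={S:.3f} S, R={R:.0f} ohm-m^2, T={T:.1f} m^2/day, K={K:.3f} m/day")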
NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A. L.; Morsink, S. M.; Fiege, J. D.
The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer’s line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best-determined parameters are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.
NASA Astrophysics Data System (ADS)
Hayat, Asma; Bashir, Shazia; Rafique, Muhammad Shahid; Ahmad, Riaz; Akram, Mahreen; Mahmood, Khaliq; Zaheer, Ali
2017-12-01
Spatial confinement effects on the plasma parameters and surface morphology of laser-ablated Zr (zirconium) are studied by introducing a metallic blocker. An Nd:YAG laser at various fluences ranging from 8 J cm^-2 to 32 J cm^-2 was employed as the irradiation source. All measurements were performed in the presence of Ar under different pressures. The confinement effects offered by the metallic blocker are investigated by placing the blocker at distances of 6 mm, 8 mm and 10 mm from the target surface. LIBS analysis reveals that both plasma parameters, i.e. the excitation temperature and the electron number density, increase with increasing laser fluence due to the enhancement in energy deposition. It is also observed that the spatial confinement offered by the metallic blocker is responsible for the enhancement of both the electron temperature and the electron number density of the Zr plasma. This is true for all laser fluences and Ar pressures. The maximum values of electron temperature and electron number density without the blocker are 12,600 K and 14 × 10^17 cm^-3, respectively, whereas these values are enhanced to 15,000 K and 21 × 10^17 cm^-3 in the presence of the blocker. The physical mechanisms responsible for the enhancement of the Zr plasma parameters are plasma compression, confinement and pronounced collisional excitations due to the reflection of shock waves. Scanning electron microscope (SEM) analysis was performed to explore the surface morphology of the laser-ablated Zr. It reveals the formation of cones, cavities and ripples. These features become more distinct and well defined in the presence of the blocker due to plasma confinement. The optimum combination of blocker distance, fluence and Ar pressure can identify suitable conditions for defining the role of the plasma parameters in surface structuring.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
Quantitative Studies of the Optical and UV Spectra of Galactic Early B Supergiants
NASA Technical Reports Server (NTRS)
Searle, S. C.; Prinja, R. K.; Massa, D.; Ryans, R.
2008-01-01
We undertake an optical and ultraviolet spectroscopic analysis of a sample of 20 Galactic B0-B5 supergiants of luminosity classes Ia, Ib, Iab, and II. Fundamental stellar parameters are obtained from optical diagnostics and a critical comparison of the model predictions to observed UV spectral features is made. Methods. Fundamental parameters (e.g., T(sub eff), log L(sub *), mass-loss rates and CNO abundances) are derived for individual stars using CMFGEN, a nLTE, line-blanketed model atmosphere code. The impact of these newly derived parameters on the Galactic B supergiant T(sub eff) scale, mass discrepancy, and wind-momentum luminosity relation is examined. Results. The B supergiant temperature scale derived here shows a reduction of about 1000-3000 K compared to previous results using unblanketed codes. Mass-loss rate estimates are in good agreement with predicted theoretical values, and all of the 20 B0-B5 supergiants analysed show evidence of CNO processing. A mass discrepancy still exists between spectroscopic and evolutionary masses, with the largest discrepancy occurring at log(L/L(sub solar)) ≈ 5.4. The observed WLR values calculated for B0-B0.7 supergiants are higher than predicted values, whereas the reverse is true for B1-B5 supergiants. This means that the discrepancy between observed and theoretical values cannot be resolved by adopting clumped (i.e., lower) mass-loss rates as for O stars. The most surprising result is that, although CMFGEN succeeds in reproducing the optical stellar spectrum accurately, it fails to precisely reproduce key UV diagnostics, such as the N V and C IV P Cygni profiles. This problem arises because the models are not ionised enough and fail to reproduce the full extent of the observed absorption trough of the P Cygni profiles. Conclusions. Newly derived fundamental parameters for early B supergiants are in good agreement with similar work in the field. The most significant discovery, however, is the failure of CMFGEN to predict the correct ionisation fraction for some ions. Such findings add further support to revising the current standard model of massive star winds, as our understanding of these winds is incomplete without a precise knowledge of the ionisation structure and distribution of clumping in the wind. Key words. techniques: spectroscopic - stars: mass-loss - stars: supergiants - stars: abundances - stars: atmospheres - stars: fundamental parameters
NASA Astrophysics Data System (ADS)
Subramanian, Tenkasi R.
In the current day, with the rapid advancement of technology, engineering design is growing in complexity. Nowadays, engineers have to deal with design problems that are large, complex and involve multi-level decision analyses. With the increase in complexity and size of systems, production and development costs tend to overshoot the allocated budget and resources. This often results in project delays and project cancellation. This is particularly true for aerospace systems. Value Driven Design proves to be a means to strengthen the design process and help counter such trends. Value Driven Design is a novel framework for optimization which puts stakeholder preferences at the forefront of the design process, capturing their true preferences so as to present system alternatives that are consistent with the stakeholder's expectations. Traditional systems engineering techniques promote communication of stakeholder preferences in the form of requirements, which confines the design space by imposing additional constraints on it. This results in a design that does not capture the true preferences of the stakeholder. Value Driven Design provides an alternative approach to design wherein a value function is created that corresponds to the true preferences of the stakeholder. The applicability of VDD is broad, but it is imperative to first explore its feasibility to ensure the development of an efficient, robust and elegant system design. The key to understanding the usability of VDD is to investigate the formation, propagation and use of a value function. This research investigates the use of rank correlation metrics to ensure consistent rank ordering of design alternatives while investigating the fidelity of the value function. The impact of design uncertainties on rank ordering is also examined. A satellite design system consisting of a satellite, a ground station and a launch vehicle is used to demonstrate the use of these metrics to aid decision support during the design process.
IVIM diffusion-weighted imaging of the liver at 3.0 T: Comparison with 1.5 T
Cui, Yong; Dyvorne, Hadrien; Besa, Cecilia; Cooper, Nancy; Taouli, Bachir
2015-01-01
Purpose: To compare intravoxel incoherent motion (IVIM) diffusion-weighted imaging (DWI) of the liver between 1.5 T and 3.0 T in terms of parameter quantification and inter-platform reproducibility. Materials and methods: In this IRB-approved prospective study, 19 subjects (17 patients with chronic liver disease and 2 healthy volunteers) underwent two repeat scans at 1.5 T and 3.0 T. Each scan included IVIM DWI using 16 b values from 0 to 800 s/mm2. A single observer measured IVIM parameters for each platform and the estimated signal-to-noise ratio (eSNR) at b = 0, 200, 400 and 800 s/mm2. Wilcoxon paired tests were used to compare liver eSNR and IVIM parameters. Inter-platform reproducibility was assessed by calculating the within-subject coefficient of variation (CV) and Bland–Altman limits of agreement. An ice water phantom was used to test ADC variability between the two MRI systems. Results: The mean in vitro difference in ADC between the two platforms was 6.8%. eSNR was significantly higher at 3.0 T for all selected b values (p = 0.006–0.020), except for b = 0 (p = 0.239). Liver IVIM parameters were significantly different between 1.5 T and 3.0 T (p = 0.005–0.044), except for ADC (p = 0.748). The inter-platform reproducibility of the true diffusion coefficient (D) and ADC was good, with mean CVs of 10.9% and 11.1%, respectively. Perfusion fraction (PF) and pseudo-diffusion coefficient (D*) showed more limited inter-platform reproducibility (mean CV of 22.6% for PF and 46.9% for D*). Conclusion: Liver D and ADC values showed good reproducibility between the 1.5 T and 3.0 T platforms, while there was more variability in PF, and large variability in D*, between the two platforms. These findings may have implications for drug trials assessing the role of IVIM DWI in tumor response and liver fibrosis. PMID:26393236
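A minimal sketch of the reproducibility metrics used above, computed on synthetic paired measurements rather than the study data, using one standard formulation of the within-subject CV for duplicate measurements:

    import numpy as np

    rng = np.random.default_rng(1)
    true_D = rng.uniform(0.9e-3, 1.3e-3, 19)               # 19 subjects (synthetic)
    d_15T = true_D * (1 + 0.05 * rng.standard_normal(19))  # measurement at 1.5 T
    d_30T = true_D * (1 + 0.05 * rng.standard_normal(19))  # measurement at 3.0 T

    diff = d_30T - d_15T
    pair_mean = (d_30T + d_15T) / 2

    # within-subject CV from duplicate measurements: sqrt(mean(d^2 / 2)) / grand mean
    wcv = np.sqrt(np.mean(diff**2 / 2)) / pair_mean.mean()

    # Bland-Altman 95% limits of agreement on the differences
    loa = (diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1))
    print(f"within-subject CV = {100 * wcv:.1f}%, limits of agreement = {loa}")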
A Nickel Ain't Worth a Dime Anymore: The Illusion of Money and the Rapid Encoding of Its True Value
2013-01-01
People often evaluate money based on its face value and overlook its real purchasing power, known as the money illusion. For example, the same 100 Chinese Yuan can buy many more goods in Tibet than in Beijing, but such difference in buying power is usually underestimated. Using event-related potentials combined with a gambling task, we sought to investigate the encoding of both the real value and the face value of money in the human brain. We found that the self-reported pleasantness of outcomes was modulated by both values. The feedback related negativity (FRN), which peaks around 250 ms after feedback and is believed to be generated in the anterior cingulate cortex (ACC), was only modulated by the true value but not the face value of money. We conclude that the real value of money is rapidly encoded in the human brain even when participants exhibit the money illusion at the behavioral level. PMID:23383044
A nickel ain't worth a dime anymore: the illusion of money and the rapid encoding of its true value.
Yu, Rongjun; Huang, Yi
2013-01-01
People often evaluate money based on its face value and overlook its real purchasing power, known as the money illusion. For example, the same 100 Chinese Yuan can buy many more goods in Tibet than in Beijing, but such difference in buying power is usually underestimated. Using event-related potentials combined with a gambling task, we sought to investigate the encoding of both the real value and the face value of money in the human brain. We found that the self-reported pleasantness of outcomes was modulated by both values. The feedback related negativity (FRN), which peaks around 250 ms after feedback and is believed to be generated in the anterior cingulate cortex (ACC), was only modulated by the true value but not the face value of money. We conclude that the real value of money is rapidly encoded in the human brain even when participants exhibit the money illusion at the behavioral level.
Accuracy and precision of as-received implant torque wrenches.
Britton-Vidal, Eduardo; Baker, Philip; Mettenburg, Donald; Pannu, Darshanjit S; Looney, Stephen W; Londono, Jimmy; Rueggeberg, Frederick A
2014-10-01
Previous implant torque evaluations did not determine whether the target value fell within a confidence interval for the population mean of the test groups, preventing determination of whether a specific type of wrench met a standardized goal value. The purpose of this study was to measure both the accuracy and precision of 2 different configurations (spring style and peak break) of as-received implant torque wrenches and compare the measured values with manufacturer-stated values. Ten wrenches from 4 manufacturers were tested, representing a variety of torque-limiting mechanisms and specificity of use (intended either for a specific brand or universally for any brand of implant product). Drivers were placed into each wrench, and tightening torque was applied to reach predetermined values using a NIST-calibrated digital torque wrench. Five replications of measurement were made for each wrench and averaged to provide a single value for that instrument. The target torque value for each wrench brand was compared with the 95% confidence interval for the true population mean of measured values to see if it fell within the measured range. Only 1 wrench brand (Nobel Biocare) demonstrated a target torque value falling within the 95% confidence interval for the true population mean. For the others, the targeted torque value fell above the 95% confidence interval (Straumann and Imtec) or below it (Salvin Torq). Neither the type of torque-limiting mechanism nor the designation of a wrench as a dedicated brand-only product or as a universal product for many brands affected the ability of a wrench to deliver torque values for which the true population mean included the target torque level. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
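A minimal sketch of the statistical check described above: build the 95% confidence interval for the population mean of the measured torques and test whether the manufacturer-stated target lies inside it. The torque values below are hypothetical:

    import numpy as np
    from scipy import stats

    target = 35.0                                          # Ncm, manufacturer-stated value (hypothetical)
    measured = np.array([34.1, 34.6, 33.8, 34.3, 34.0,     # mean of 5 replicates per wrench,
                         34.9, 33.7, 34.4, 34.2, 34.5])    # ten wrenches (illustrative numbers)

    mean = measured.mean()
    sem = stats.sem(measured)                              # standard error of the mean
    lo, hi = stats.t.interval(0.95, df=measured.size - 1, loc=mean, scale=sem)

    print(f"95% CI = ({lo:.2f}, {hi:.2f}) Ncm; target inside: {lo <= target <= hi}")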
Graph reconstruction using covariance-based methods.
Sulaimanov, Nurgazy; Koeppl, Heinz
2016-12-01
Methods based on correlation and partial correlation are today employed in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even for the case when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related by using Neumann series and transitive closure and through discussing concrete small examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are directly estimated from the data.
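A minimal sketch contrasting the two graph-extraction routes discussed above, using the standard relation between partial correlations and the concentration (precision) matrix on synthetic data:

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 500, 6
    X = rng.standard_normal((n, p))
    X[:, 1] += 0.8 * X[:, 0]          # chain 0 -> 1 -> 2, so variables 0 and 2 correlate
    X[:, 2] += 0.8 * X[:, 1]          # only indirectly, through variable 1

    corr = np.corrcoef(X, rowvar=False)
    theta = np.linalg.inv(np.cov(X, rowvar=False))          # concentration matrix
    d = np.sqrt(np.diag(theta))
    pcor = -theta / np.outer(d, d)                          # pcor_ij = -Theta_ij / sqrt(Theta_ii Theta_jj)
    np.fill_diagonal(pcor, 1.0)

    threshold = 0.2
    print("correlation edges:        ", np.argwhere(np.triu(np.abs(corr) > threshold, 1)))
    print("partial correlation edges:", np.argwhere(np.triu(np.abs(pcor) > threshold, 1)))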
van Leth, Frank; den Heijer, Casper; Beerepoot, Mariëlle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance
2017-04-01
Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates from surveys of community-acquired urinary tract infection in women, by assessing operating curves, sensitivity and specificity. Sensitivity and specificity of any set of LQAS parameters were above 99% and between 79 and 90%, respectively. Operating curves showed high concordance of the LQAS classification with true AMR prevalence estimates. LQAS-based AMR surveillance is a feasible approach that provides timely and locally relevant estimates, and the necessary information to formulate and evaluate guidelines for empirical treatment.
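A minimal sketch of an LQAS decision rule and its operating curve; the sample size and decision threshold below are illustrative, not the parameter sets evaluated in the study:

    from math import comb

    def prob_classified_high(n, d, prevalence):
        """P(more than d resistant isolates in a sample of n | true prevalence)."""
        p_at_most_d = sum(comb(n, k) * prevalence**k * (1 - prevalence)**(n - k)
                          for k in range(d + 1))
        return 1.0 - p_at_most_d

    n, d = 30, 5   # hypothetical LQAS parameters (sample size, decision threshold)
    for prev in (0.05, 0.10, 0.20, 0.30, 0.40):
        print(f"true prevalence {prev:.0%}: P(classified high) = "
              f"{prob_classified_high(n, d, prev):.2f}")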
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otageri, P; Grant, E; Maricle, S
Purpose: To evaluate the effects of MLC modeling after commissioning the Varian TrueBeam LINAC in Pinnacle version 9.2. Methods: Step-and-shoot IMRT QAs were investigated when we observed that our measured absolute dose results using an ion chamber (Capintec PR-05P) were uncharacteristically low, about 4–5% lower than the doses calculated by Pinnacle3 (Philips, Madison, WI). This problem was predominant for large and highly modulated head and neck (HN) treatments. Intuitively we knew this had to be related to shortcomings in the MLC modeling in Pinnacle. Using film QA we were able to iteratively adjust the MLC parameters. We confirmed the results by re-testing five failed IMRT QA patients, and ion chamber measurements were verified in a Quasar anthropomorphic phantom. Results: After commissioning the LINAC in Pinnacle version 9.2, the MLC transmission for 6X, 10X and 15X was 2.0%, 1.7% and 2.0%, respectively, and the additional interleaf leakage for all three energies was 0.5%. These parameters were obtained from profiles scanned with an Edge detector (Sun Nuclear, Melbourne, FL) during machine commissioning. Verification testing with radiographic EDR2 film (Kodak, Rochester, NY) was performed by creating a closed MLC leaf pattern and analyzing it using RIT software (RIT, Colorado Springs, CO). This reduced the MLC transmission for 6X, 10X and 15X to 0.7%, 0.9% and 0.9%, respectively, while increasing the additional interleaf leakage for all three energies to 1.0%. Conclusion: Radiographic film measurements were used to correct MLC transmission values for step-and-shoot IMRT fields used in Pinnacle version 9.2. After adjusting the MLC parameters to correlate with the film QA, there was still very good agreement between the Pinnacle model and the commissioning data. Using the same QA methodology, we were also able to improve the beam models for the Varian C-series linacs, Novalis-Tx, and TrueBeam M-120 linacs.
Identifying isotropic events using a regional moment tensor inversion
Ford, Sean R.; Dreger, Douglas S.; Walter, William R.
2009-01-17
We calculate the deviatoric and isotropic source components for 17 explosions at the Nevada Test Site, as well as 12 earthquakes and 3 collapses in the surrounding region of the western United States, using a regional time domain full waveform inversion for the complete moment tensor. The events separate into specific populations according to their deviation from a pure double-couple and ratio of isotropic to deviatoric energy. The separation allows for anomalous event identification and discrimination between explosions, earthquakes, and collapses. Confidence regions of the model parameters are estimated from the data misfit by assuming normally distributed parameter values. We investigate the sensitivity of the resolved parameters of an explosion to imperfect Earth models, inaccurate event depths, and data with low signal-to-noise ratio (SNR) assuming a reasonable azimuthal distribution of stations. In the band of interest (0.02–0.10 Hz) the source-type calculated from complete moment tensor inversion is insensitive to velocity model perturbations that cause less than a half-cycle shift (<5 s) in arrival time error if shifting of the waveforms is allowed. The explosion source-type is insensitive to an incorrect depth assumption (for a true depth of 1 km), and the goodness of fit of the inversion result cannot be used to resolve the true depth of the explosion. Noise degrades the explosive character of the result, and a good fit and accurate result are obtained when the signal-to-noise ratio is greater than 5. We assess the depth and frequency dependence upon the resolved explosive moment. As the depth decreases from 1 km to 200 m, the isotropic moment is no longer accurately resolved and is in error between 50 and 200%. Furthermore, even at the most shallow depth the resultant moment tensor is dominated by the explosive component when the data have a good SNR.
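A minimal sketch of the standard isotropic/deviatoric split of a moment tensor that underlies this kind of source-type analysis; the tensor and the scalar isotropic fraction used here are illustrative choices, not the paper's inversion code:

    import numpy as np

    # hypothetical full moment tensor (N*m), roughly explosion-like (large positive trace)
    M = np.array([[ 2.0, 0.2, 0.1],
                  [ 0.2, 1.8, 0.0],
                  [ 0.1, 0.0, 2.2]]) * 1e15

    M_iso = np.trace(M) / 3.0 * np.eye(3)   # isotropic part
    M_dev = M - M_iso                       # deviatoric part

    # one common scalar measure: ratio of isotropic to total moment (Euclidean norms)
    iso_fraction = np.linalg.norm(M_iso) / (np.linalg.norm(M_iso) + np.linalg.norm(M_dev))
    print(f"isotropic fraction ~ {iso_fraction:.2f}")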
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-01-01
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-04-07
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.
NASA Astrophysics Data System (ADS)
Hamid, Aamir; Hashim; Khan, Masood
2018-06-01
The main concern of this communication is to investigate the two-layer flow of a non-Newtonian rheological fluid past a wedge-shaped geometry. One remarkable aspect of this article is the mathematical formulation for two-dimensional flow of a Williamson fluid by incorporating the effect of infinite shear rate viscosity. The impact of the heat transfer mechanism on the time-dependent flow field is further studied. First, we employ suitable non-dimensional variables to transform the time-dependent governing flow equations into a system of non-linear ordinary differential equations. The converted conservation equations are numerically integrated subject to physically suitable boundary conditions with the aid of the Runge-Kutta-Fehlberg integration procedure. The effects of the pertinent parameters involved, such as the moving wedge parameter, wedge angle parameter, local Weissenberg number, unsteadiness parameter and Prandtl number, on the non-dimensional velocity and temperature distributions have been evaluated. In addition, the numerical values of the local skin friction coefficient and the local Nusselt number are compared and presented through tables. The outcomes of this study indicate that the rate of heat transfer increases with the growth of both the wedge angle parameter and the unsteadiness parameter. Moreover, a substantial rise in the fluid velocity is observed with enhancement in the viscosity ratio parameter, while an opposite trend is true for the non-dimensional temperature field. A comparison is presented between the current study and already published works, and the results were found to be in outstanding agreement. Finally, the main findings of this article are highlighted in the last section.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
TRUE MASSES OF RADIAL-VELOCITY EXOPLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Robert A., E-mail: rbrown@stsci.edu
We study the task of estimating the true masses of known radial-velocity (RV) exoplanets by means of direct astrometry on coronagraphic images to measure the apparent separation between exoplanet and host star. Initially, we assume perfect knowledge of the RV orbital parameters and that all errors are due to photon statistics. We construct design reference missions (DRMs) for four missions currently under study at NASA: EXO-S and WFIRST-S, with external star shades for starlight suppression, and EXO-C and WFIRST-C, with internal coronagraphs. These DRMs reveal extreme scheduling constraints due to the combination of solar and anti-solar pointing restrictions, photometric and obscurational completeness, image blurring due to orbital motion, and the “nodal effect,” which is the independence of apparent separation and inclination when the planet crosses the plane of the sky through the host star. Next, we address the issue of nonzero uncertainties in RV orbital parameters by investigating their impact on the observations of 21 single-planet systems. Except for two—GJ 676 A b and 16 Cyg B b, which are observable only by the star-shade missions—we find that current uncertainties in orbital parameters generally prevent accurate, unbiased estimation of true planetary mass. For the coronagraphs, WFIRST-C and EXO-C, the most likely number of good estimators of true mass is currently zero. For the star shades, EXO-S and WFIRST-S, the most likely numbers of good estimators are three and four, respectively, including GJ 676 A b and 16 Cyg B b. We expect that uncertain orbital elements currently undermine all potential programs of direct imaging and spectroscopy of RV exoplanets.
A new analysis of the effects of the Asian crisis of 1997 on emergent markets
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Liu, Y.
2007-07-01
This work is devoted to the study of the Asian crisis of 1997, and its consequences on emerging markets. We have done so by means of a phase transition model. We have analyzed the crashes on leading indices of Hong Kong (HSI), Turkey (XU100), Mexico (MMX), Brazil (BOVESPA) and Argentina (MERVAL). We were able to obtain optimum values for the critical date, corresponding to the most probable date of the crash. The estimation of the critical date was excellent except for the MERVAL index; this improvement is due to a previous analysis of the parameters involved. We only used data from before the true crash date in order to obtain the predicted critical date. This article's conclusions are largely obtained via ad hoc empirical methods.
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... Title 14 (Aeronautics and Space), Section 1214.117, Launch and orbit parameters for a standard launch. NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
Dynamic recrystallization behavior of an as-cast TiAl alloy during hot compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jianbo, E-mail: lijianbo1205@163.com; Liu, Yong, E-mail: yonliu@csu.edu.cn; Wang, Yan, E-mail: wangyan@csu.edu.cn
2014-11-15
High-temperature compressive deformation behaviors of the as-cast Ti–43Al–4Nb–1.4W–0.6B alloy were investigated at temperatures ranging from 1050 °C to 1200 °C and strain rates from 0.001 s^-1 to 1 s^-1. Electron backscattered diffraction, scanning electron microscopy and transmission electron microscopy were employed to investigate the microstructural evolution and nucleation mechanisms of the dynamic recrystallization. The results indicated that the true stress–true strain curves show dynamic flow softening behavior. The dependence of the peak stress on the deformation temperature and the strain rate can be well expressed by a hyperbolic-sine type equation. The activation energy decreases with increasing strain. The size of the dynamically recrystallized β grains decreases with increasing value of the Zener–Hollomon parameter (Z). When the flow stress reaches a steady state, the size of the β grains remains almost constant with increasing deformation strain. Continuous dynamic recrystallization plays a dominant role in the deformation. In order to characterize the evolution of the dynamic recrystallization volume fraction, the dynamic recrystallization kinetics was studied using an Avrami-type equation. Besides, the role of the β phase and the softening mechanism during hot deformation are also discussed in detail. - Highlights: • The size of DRXed β grains decreases with increasing value of Z. • CDRX plays a dominant role in the deformation. • The broken TiB2 particles can promote the nucleation of DRX.
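A minimal sketch of the quantities named above, the Zener-Hollomon parameter and an Avrami-type recrystallized fraction; the activation energy and Avrami constants are hypothetical placeholders, not the values fitted for this alloy:

    import numpy as np

    R = 8.314                 # gas constant, J/(mol*K)
    Q = 400e3                 # apparent activation energy, J/mol (hypothetical)

    def zener_hollomon(strain_rate, T_kelvin):
        # Z = strain_rate * exp(Q / (R * T))
        return strain_rate * np.exp(Q / (R * T_kelvin))

    def drx_fraction(strain, eps_c=0.1, eps_p=0.3, k=0.7, n=2.0):
        # Avrami-type form: X = 1 - exp(-k * ((strain - eps_c) / eps_p)**n) for strain > eps_c
        x = np.clip((strain - eps_c) / eps_p, 0.0, None)
        return 1.0 - np.exp(-k * x**n)

    print(f"Z at 0.01 s^-1, 1150 C: {zener_hollomon(0.01, 1150 + 273.15):.3e}")
    print("DRX fraction at strain 0.6:", round(float(drx_fraction(0.6)), 3))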
Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1
NASA Astrophysics Data System (ADS)
Rathna Kumar, S.
2017-09-01
In this work, we propose refinements to the difference-smoothing algorithm for the measurement of time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, generation of more realistic synthetic light curves for the estimation of time delay uncertainty, and using a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and also to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them from the two most difficult 'rungs' (rung3 and rung4) of the first edition of the Strong Lens Time Delay Challenge (TDC1) and found an inherent tendency of the algorithm to measure the magnitude of the time delay to be higher than its true value. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator on synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best performing submissions of TDC1 for both the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro
2017-01-01
In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software “Kongoh” for interpreting DNA mixture based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1–4 persons’ contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI’s contribution in true contributors and non-contributors by using 2–4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI’s contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples. PMID:29149210
Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji
2017-01-01
In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software "Kongoh" for interpreting DNA mixture based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
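A minimal sketch of information-criterion model selection as used above, given each candidate model's maximized log-likelihood; the log-likelihoods and parameter counts are hypothetical, not the results of an actual MMPP fit:

    import numpy as np

    n_photons = 10_000
    # hypothetical maximized log-likelihoods and free-parameter counts for 2-, 3-, 4-state models
    models = {"2-state": (-25418.0, 4), "3-state": (-25365.0, 9), "4-state": (-25362.5, 16)}

    for name, (loglik, k) in models.items():
        aic = -2 * loglik + 2 * k
        bic = -2 * loglik + k * np.log(n_photons)
        print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")
    # the model with the smallest BIC (here the 3-state model) would be selected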
Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio
2017-01-10
The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
ERIC Educational Resources Information Center
Nieuwland, Mante S.
2013-01-01
People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual…
Cage-Busting Leadership. Educational Innovations Series
ERIC Educational Resources Information Center
Hess, Frederick M.
2013-01-01
A practical and entertaining volume, "Cage-Busting Leadership" will be of profound interest and value to school and district leaders--and to everyone with a stake in school improvement. Rick Hess aptly describes his aims at the start of this provocative book: "I believe that two things are true. It is true, as would-be reformers…
Lee, Jeong Wan
2008-01-01
This paper proposes a field calibration technique for aligning a wind direction sensor to true north. The proposed technique uses synchronized measurements of images captured by a camera and the output voltage of the wind direction sensor. The true wind direction was evaluated through image processing of the captured picture of the sensor in a least-squares sense. Then, the evaluated true value was compared with the measured output voltage of the sensor. This technique solves the discordance problem of the wind direction sensor that arises in the process of installing the meteorological mast. Uncertainty analyses for the proposed technique are presented and the calibration accuracy is discussed. Finally, the proposed technique was applied to the real meteorological mast at the Daegwanryung test site, and statistical analysis of the experimental testing estimated the stable misalignment value and the uncertainty level. In a strict sense, it is confirmed that the error range of the misalignment from exact north can be expected to decrease to within the credibility level. PMID:27873957
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goddard, L; Brodin, P; Mani, K
Purpose: SBRT allows the delivery of high dose radiation treatments to localized tumors while minimizing dose to surrounding tissues. Due to the large doses delivered, accurate contouring of organs at risk is essential. In this study, differences between the true spinal cord as seen using MRI and CT myelogram (CTM) have been assessed in patients with spinal metastases treated using SBRT. Methods: Ten patients were identified that have both a CTM and an MRI. Using rigid registration tools, the MRI was fused to the CTM. The thecal sac and true cord were contoured using each imaging modality. Images were exported and analyzed for similarity by computing the Dice similarity coefficient and the modified Hausdorff distance (greatest distance from a point in one set to the closest point in the other set). Results: The Dice coefficient was calculated for the thecal sac (0.81 ±0.06) and true cord (0.63 ±0.13). These two measures are correlated; however, some points show a low true cord overlap despite a high overlap for the thecal sac. The Hausdorff distance for structure comparisons was also calculated. For thecal sac structures, the average value, 1.6 mm (±1.1), indicates good overlap. For the true cord comparison, the average value, 0.3 mm (±0.16), indicates very good overlap. The minimum Hausdorff distance between the true cord and thecal sac was on average 1.6 mm (±0.9). Conclusion: The true cord position as seen in MRI and CTM is fairly constant, although care should be taken as large differences can be seen in individual patients. Avoiding the true cord in spine SBRT is critical, so the ability to visualize the true cord before performing SBRT to the vertebrae is essential. Here, CT myelogram appears an excellent, robust option that can be obtained on the day of treatment planning and is unaffected by uncertainties in image fusion.
7 CFR 1033.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1033.60 Section 1033.60... Handling Producer Price Differential § 1033.60 Handler's value of milk. For the purpose of computing a handler's obligation for producer milk, the market administrator shall determine for each month the value...
7 CFR 1126.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1126.60 Section 1126.60... Handling Producer Price Differential § 1126.60 Handler's value of milk. For the purpose of computing a handler's obligation for producer milk, the market administrator shall determine for each month the value...
7 CFR 1032.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1032.60 Section 1032.60... Handling Producer Price Differential § 1032.60 Handler's value of milk. For the purpose of computing a handler's obligation for producer milk, the market administrator shall determine for each month the value...
48 CFR 1852.234-2 - Earned Value Management System.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Earned Value Management... and Clauses 1852.234-2 Earned Value Management System. As prescribed in 1834.203-70(b) insert the following clause: Earned Value Management System (NOV 2006) (a) In the performance of this contract, the...
Gaussian mixture clustering and imputation of microarray data.
Ouyang, Ming; Welsh, William J; Georgopoulos, Panos
2004-04-12
In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.
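The first evaluation metric described above, the root mean squared error between true and imputed values on artificially masked entries, can be sketched as follows. The mean-imputation baseline and the 10% masking rate are illustrative choices for the sketch, not the paper's Gaussian-mixture method.

```python
import numpy as np

def imputation_rmse(X_true, X_imputed, missing_mask):
    """Root mean squared error restricted to the artificially masked entries."""
    diff = X_true[missing_mask] - X_imputed[missing_mask]
    return np.sqrt(np.mean(diff ** 2))

# Toy usage: hide 10% of entries, impute with column means, score the result.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(100, 20))           # stand-in for a complete expression matrix
mask = rng.random(X_true.shape) < 0.10        # artificially removed entries
X_obs = np.where(mask, np.nan, X_true)
col_means = np.nanmean(X_obs, axis=0)
X_imputed = np.where(mask, col_means, X_obs)  # naive baseline imputation
print(imputation_rmse(X_true, X_imputed, mask))
```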
Implications of δCP = -90∘ towards determining hierarchy and octant at T2K and T2K-II
NASA Astrophysics Data System (ADS)
Ghosh, Monojit; Goswami, Srubabati; Raut, Sushant K.
2017-02-01
The T2K experiment has provided the first hint of the best-fit value for the leptonic CP phase, δCP ~ -90°, from neutrino data. This is now corroborated by the NOνA neutrino runs. We study the implications for the neutrino mass hierarchy and the octant of θ23 in the context of this data, assuming that the true value of δCP in nature is -90°. Based on simple arguments on degeneracies in the probabilities, we show that a clear signal of δCP = -90° coming from T2K neutrino (antineutrino) data is only possible if the true hierarchy is normal and the true octant is higher (lower). Thus, if the T2K neutrino and antineutrino data are fitted separately and both give the true value of δCP = -90°, this will imply that nature has chosen the true hierarchy to be normal and θ23 ≈ 45°. However, we find that the combined fit of neutrino and antineutrino data will still point to the true hierarchy as normal, but the octant of θ23 will remain undetermined. We do our analysis for both the current projected exposure (7.8 × 10^21 pot) and the planned extended exposure (20 × 10^21 pot). We also present the CP discovery potential of T2K, emphasizing the role of antineutrinos. We find that one of the main contributions of the antineutrino data is to remove the degenerate solutions with the wrong octant. Thus, the antineutrino run plays a more significant role for those hierarchy-octant combinations for which this degeneracy is present. If this degeneracy is absent, then a neutrino-only run gives a better result for fixed θ13. However, if we marginalize over θ13, then the sensitivity corresponding to a mixed run can be better than that of a pure neutrino run.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
Buonaccorsi, John P; Dalen, Ingvild; Laake, Petter; Hjartåker, Anette; Engeset, Dagrun; Thoresen, Magne
2015-04-15
Measurement error occurs when we observe error-prone surrogates rather than true values. It is common in observational studies and especially so in epidemiology, in nutritional epidemiology in particular. Correcting for measurement error has become common, and regression calibration is the most popular way to account for measurement error in continuous covariates. We consider its use in the context where there are validation data, which are used to calibrate the true values given the observed covariates. We allow for the case that the true value itself may not be observed in the validation data, but instead a so-called reference measure is observed. The regression calibration method relies on certain assumptions. This paper examines possible biases in regression calibration estimators when some of these assumptions are violated. More specifically, we allow for the fact that (i) the reference measure may not necessarily be an 'alloyed gold standard' (i.e., unbiased) for the true value; (ii) there may be correlated random subject effects contributing to the surrogate and reference measures in the validation data; and (iii) the calibration model itself may not be the same in the validation study as in the main study; that is, it is not transportable. We expand on previous work to provide a general result, which characterizes potential bias in the regression calibration estimators as a result of any combination of the aforementioned violations. We then illustrate some of the general results with data from the Norwegian Women and Cancer Study. Copyright © 2015 John Wiley & Sons, Ltd.
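A minimal sketch of standard regression calibration, under simplifying assumptions not taken from the paper (a linear calibration model, a single covariate, validation data containing an unbiased reference measure); all variable names and numbers are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- validation data: surrogate w and reference measure r for the same subjects ---
n_val = 300
x_val = rng.normal(0.0, 1.0, n_val)              # unobserved true exposure
w_val = x_val + rng.normal(0.0, 0.5, n_val)      # error-prone surrogate
r_val = x_val + rng.normal(0.0, 0.2, n_val)      # reference measure (assumed unbiased)

# Calibration model: regress the reference measure on the surrogate.
b1, b0 = np.polyfit(w_val, r_val, 1)

# --- main study: only the surrogate and the outcome are observed ---
n_main = 1000
x_main = rng.normal(0.0, 1.0, n_main)
w_main = x_main + rng.normal(0.0, 0.5, n_main)
y_main = 2.0 * x_main + rng.normal(0.0, 1.0, n_main)

# Replace the surrogate with its calibrated value, then fit the outcome model.
x_hat = b0 + b1 * w_main
beta_naive = np.polyfit(w_main, y_main, 1)[0]    # attenuated by measurement error
beta_cal = np.polyfit(x_hat, y_main, 1)[0]       # regression-calibration estimate
print(beta_naive, beta_cal)                      # beta_cal should be closer to 2.0
```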
Joining direct and indirect inverse calibration methods to characterize karst, coastal aquifers
NASA Astrophysics Data System (ADS)
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio
2016-04-01
Parameter estimation is extremely relevant for accurate simulation of groundwater flow. Parameter values for models of large-scale catchments are usually derived from a limited set of field observations, which can rarely be obtained in a straightforward way from field tests or laboratory measurements on samples, due to a number of factors, including measurement errors and inadequate sampling density. Indeed, a wide gap exists between the local scale, at which most of the observations are taken, and the regional or basin scale, at which the planning and management decisions are usually made. For this reason, the use of geologic information and field data is generally made by zoning the parameter fields. However, pure zoning does not perform well in the case of fairly complex aquifers and this is particularly true for karst aquifers. In fact, the support of the hydraulic conductivity measured in the field is normally much smaller than the cell size of the numerical model, so it should be upscaled to a scale consistent with that of the numerical model discretization. Automatic inverse calibration is a valuable procedure to identify model parameter values by conditioning on observed, available data, limiting the subjective evaluations introduced with the trial-and-error technique. Many approaches have been proposed to solve the inverse problem. Generally speaking, inverse methods fall into two groups: direct and indirect methods. Direct methods allow determination of hydraulic conductivities from the groundwater flow equations which relate the conductivity and head fields. Indirect methods, instead, can handle any type of parameters, independently from the mathematical equations that govern the process, and condition parameter values and model construction on measurements of model output quantities, compared with the available observation data, through the minimization of an objective function. Both approaches have pros and cons, depending also on model complexity. For this reason, a joint procedure is proposed by merging both direct and indirect approaches, thus taking advantage of their strengths, first among them the possibility to get a hydraulic head distribution all over the domain, instead of a zonation. Pros and cons of such an integrated methodology, so far unexplored to the authors' knowledge, are derived after application to a highly heterogeneous karst, coastal aquifer located in southern Italy.
Tarascio, Michela; Leo, Laura Anna; Klersy, Catherine; Murzilli, Romina; Moccetti, Tiziano; Faletra, Francesco Fulvio
2017-07-01
Identification of the extent of scar transmurality in chronic ischemic heart disease is important because it correlates with viability. The aim of this retrospective study was to evaluate whether layer-specific two-dimensional speckle-tracking echocardiography allows distinction of scar presence and transmurality. A total of 70 subjects, 49 with chronic ischemic cardiomyopathy and 21 healthy subjects, underwent two-dimensional speckle-tracking echocardiography and late gadolinium-enhanced cardiac magnetic resonance. Scar extent was determined as the relative amount of hyperenhancement using late gadolinium-enhanced cardiac magnetic resonance in an 18-segment model (0% hyperenhancement = normal; 1%-50% = subendocardial scar; 51%-100% = transmural scar). In the same 18-segment model, peak systolic circumferential strain and longitudinal strain were calculated separately for the endocardial and epicardial layers as well as the full-wall myocardial thickness. All strain parameters showed cutoff values (area under the curve > 0.69) that allowed the discrimination of normal versus scar segments but not of transmural versus subendocardial scars. This was true for all strain parameters analyzed, without differences in efficacy between longitudinal and circumferential strain and subendocardial, subepicardial, and full-wall-thickness strain values. Circumferential and longitudinal strain in normal segments showed transmural and basoapical gradients (greatest values at the subendocardial layer and apex). In segments with scar, transmural gradient was maintained, whereas basoapical gradient was lost because the reduction of strain values in the presence of the scar was greater at the apex. The two-dimensional speckle-tracking echocardiographic values distinguish scar presence but not transmurality; thus, they are not useful predictors of scar segment viability. It remains unclear why there is a greater strain value reduction in the presence of a scar at the apical level. Copyright © 2017 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Retooling Predictive Relations for non-volatile PM by Comparison to Measurements
NASA Astrophysics Data System (ADS)
Vander Wal, R. L.; Abrahamson, J. P.
2015-12-01
Non-volatile particulate matter (nvPM) emissions from jet aircraft at cruise altitude are of particular interest for climate and atmospheric processes but are difficult to measure and are normally approximated. To provide such inventory estimates, the present approach is to use measured, ground-based values with scaling to cruise (engine operating) conditions. Several points are raised by this approach. First is what ground-based values to use. Empirical and semi-empirical approaches, such as the revised first order approximation (FOA3) and formation-oxidation (FOX) methods, each with embedded assumptions, are available to calculate a ground-based black carbon concentration, CBC. Second is the scaling relation, which can depend upon the ratios of fuel-air equivalence, pressure, and combustor flame temperature. We are using measured ground-based values to evaluate the accuracy of present methods towards developing alternative methods for CBC by smoke number or via a semi-empirical kinetic method for the specific engine, CFM56-2C, representative of a rich-dome style combustor, and one of the most prevalent engine families in commercial use. Applying scaling relations to measured ground-based values and comparison to measurements at cruise evaluates the accuracy of the current scaling formalism. In partnership with GE Aviation, performing engine cycle deck calculations enables critical comparison between estimated or predicted thermodynamic parameters and true (engine) operational values for the CFM56-2C engine. Such specific comparisons allow tracing differences between predictive estimates for, and measurements of, nvPM to their origin - as either divergence of input parameters or in the functional form of the predictive relations. Such insights will lead to development of new predictive tools for jet aircraft nvPM emissions. Such validated relations can then be extended to alternative fuels with confidence in operational thermodynamic values and functional form. Comparisons will then be made between these new predictive relationships and measurements of nvPM from alternative fuels using ground and cruise data - as collected during NASA-led AAFEX and ACCESS field campaigns, respectively.
Saito, Kazuhiro; Yoshimura, Nobutaka; Shirota, Natsuhiko; Saguchi, Toru; Sugimoto, Katsutoshi; Tokuuye, Koichi
2016-10-01
The aim of this study was to evaluate the effectiveness of enhanced diffusion-weighted imaging (DWI) for distinguishing liver haemangiomas from metastatic tumours (mets). This study included 23 patients with 27 haemangiomas and 26 patients with 46 mets. Breath-hold DWI scans (b-values of 0, 50, 100, 150, 200, 400 and 800 s/mm^2) were obtained before and 20 min after injection of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA). Lesion contrast-to-noise ratios (CNRs) were calculated. The data were processed using the bi-exponential model of intravoxel incoherent motion (IVIM). Receiver operating characteristic analysis was performed to compare the diagnostic performance when distinguishing haemangioma from mets. The CNRs of haemangioma and mets at post-contrast enhancement increased. All IVIM parameters for liver haemangioma and mets showed no significant differences between pre- and post-contrast enhancement. The highest Az value of the CNR and IVIM parameters occurred at a post-contrast b-value of 0 s/mm^2 and true diffusion (D). The highest qualitative evaluation occurred at a b-value of 800 s/mm^2. The sensitivity and specificity, with a CNR of 100 or higher at a post-contrast b-value of 0 s/mm^2 considered to be haemangioma, were 89% and 67% (<10 mm, 91%, 77%) respectively. The sensitivity and specificity, when D was higher than 1.4 × 10^-3 mm^2/s, were 74% and 83% (<10 mm, 64%, 77%) respectively. The sensitivity and specificity of qualitative evaluation by enhanced DWI were 74% and 76% (<10 mm, 64%, 80%) respectively. The accuracy of the CNR was highest with b = 0; however, examination at high b-values had advantages in the qualitative evaluation of some small-size lesions. © 2016 The Royal Australian and New Zealand College of Radiologists.
Beatty, William; Jay, Chadwick V.; Fischbach, Anthony S.
2016-01-01
State-space models offer researchers an objective approach to modeling complex animal location data sets, and state-space model behavior classifications are often assumed to have a link to animal behavior. In this study, we evaluated the behavioral classification accuracy of a Bayesian state-space model in Pacific walruses using Argos satellite tags with sensors to detect animal behavior in real time. We fit a two-state discrete-time continuous-space Bayesian state-space model to data from 306 Pacific walruses tagged in the Chukchi Sea. We matched predicted locations and behaviors from the state-space model (resident, transient behavior) to true animal behavior (foraging, swimming, hauled out) and evaluated classification accuracy with kappa statistics (κ) and root mean square error (RMSE). In addition, we compared biased random bridge utilization distributions generated with resident behavior locations to true foraging behavior locations to evaluate differences in space use patterns. Results indicated that the two-state model fairly classified true animal behavior (0.06 ≤ κ ≤ 0.26, 0.49 ≤ RMSE ≤ 0.59). Kernel overlap metrics indicated utilization distributions generated with resident behavior locations were generally smaller than utilization distributions generated with true foraging behavior locations. Consequently, we encourage researchers to carefully examine parameters and priors associated with behaviors in state-space models, and reconcile these parameters with the study species and its expected behaviors.
Zenker, Sven
2010-08-01
Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data become available. The potential of the PF for real-time state estimation (SE) for a model of cardiovascular physiology is explored using parallel computers, and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge is tested. A parallelized sequential importance sampling/resampling algorithm was implemented, and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system was evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters, given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output, was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of parameters concentrated around the true values for parts of the estimated trajectories. The performance of the parallelized PF makes its application to complex mathematical models of physiology for the purpose of clinical data interpretation, prediction, and therapy optimization appear promising. JSPE in the described extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
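A generic bootstrap (sequential importance sampling/resampling) particle filter for a one-dimensional toy state-space model, shown only to illustrate the algorithm family used above; the dynamics, noise levels, and function names are illustrative placeholders, not the five-dimensional cardiovascular model from the paper:

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles=1000, q=0.1, r=0.5, rng=None):
    """SIR particle filter for x_t = 0.9*x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2)."""
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.normal(0.0, 1.0, n_particles)   # sample from the initial prior
    estimates = []
    for yt in y:
        # Propagate particles through the (toy) state transition.
        particles = 0.9 * particles + rng.normal(0.0, q, n_particles)
        # Weight by the Gaussian observation likelihood and normalize.
        weights = np.exp(-0.5 * ((yt - particles) / r) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Usage: filter a short synthetic observation sequence.
rng = np.random.default_rng(2)
x, obs = 0.0, []
for _ in range(50):
    x = 0.9 * x + rng.normal(0.0, 0.1)
    obs.append(x + rng.normal(0.0, 0.5))
print(bootstrap_particle_filter(np.array(obs), rng=rng)[:5])
```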
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analyses of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies, and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using pre-defined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76 and AUC of 0.90, 0.84, and 0.87, respectively. In the per-territory analysis, our results show similar diagnostic accuracy when comparing anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
NASA Astrophysics Data System (ADS)
Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan
2017-03-01
Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.
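For context, blood oxygen saturation is conventionally defined as the ratio of oxyhaemoglobin to total haemoglobin concentration (standard background, not a formula reproduced from the paper), which is why concentration ratios rather than absolute concentrations suffice for its estimation:

```latex
s\mathrm{O}_2 = \frac{c_{\mathrm{HbO}_2}}{c_{\mathrm{HbO}_2} + c_{\mathrm{HHb}}}
```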
Methods for consistent forewarning of critical events across multiple data channels
Hively, Lee M.
2006-11-21
This invention teaches further method improvements to forewarn of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves conversion of time-serial data into equiprobable symbols. A second improvement is a method to maximize the channel-consistent total-true rate of forewarning from a plurality of data channels over multiple data sets from the same patient or process. This total-true rate requires resolution of the forewarning indications into true positives, true negatives, false positives and false negatives. A third improvement is the use of various objective functions, as derived from the phase-space dissimilarity measures, to give the best forewarning indication. A fourth improvement uses various search strategies over the phase-space analysis parameters to maximize said objective functions. A fifth improvement shows the usefulness of the method for various biomedical and machine applications.
Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit
NASA Technical Reports Server (NTRS)
Barry, B. F.; Pimm, R. S.; Rowe, C. K.
1971-01-01
In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
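A minimal sketch of the Box-Cox transformation with the transformation factor λ chosen by maximum likelihood, using SciPy's implementation as a stand-in for the joint estimation described above; the gamma-distributed sample is only an illustration, not QTL data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.gamma(shape=2.0, scale=3.0, size=500)   # skewed, strictly positive phenotype stand-in

# Box-Cox: y(lmbda) = (y**lmbda - 1) / lmbda for lmbda != 0, and log(y) for lmbda == 0.
y_transformed, lmbda_hat = stats.boxcox(y)      # lmbda estimated by maximum likelihood
print(f"estimated transformation factor: {lmbda_hat:.3f}")
print(f"skewness before: {stats.skew(y):.3f}, after: {stats.skew(y_transformed):.3f}")
```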
NASA Astrophysics Data System (ADS)
Ghadiri, Majid; Shafiei, Navvab
2016-04-01
In this study, the thermal vibration of a rotary functionally graded (FG) Timoshenko microbeam is analyzed based on the modified couple stress theory, considering temperature change for four types of temperature distribution in the thermal environment. Material properties of the FG microbeam are assumed to be temperature dependent and to vary continuously along the thickness according to the power-law form. The axial forces are also included in the model as the thermal and true spatial variation due to the rotation. Governing equations and boundary conditions have been derived by employing Hamilton's principle. The differential quadrature method is employed to solve the governing equations for cantilever and propped cantilever boundary conditions. Validation is performed by comparing the obtained results with the available literature, which indicates the accuracy of the applied method. Results show the effects of temperature change, different boundary conditions, nondimensional angular velocity, length scale parameter, FG index and beam thickness on the fundamental, second and third nondimensional frequencies. Results determine critical values of temperature change and other essential parameters that are applicable to the design of micromachines such as micromotors and microturbines.
Scott, David J; Patel, Trushar R; Winzor, Donald J
2013-04-15
Theoretical consideration is given to the effect of cosolutes (including buffer and electrolyte components) on the determination of second virial coefficients for proteins by small-angle X-ray scattering (SAXS)-a factor overlooked in current analyses in terms of expressions for a two-component system. A potential deficiency of existing practices is illustrated by reassessment of published results on the effect of polyethylene glycol concentration on the second virial coefficient for urate oxidase. This error reflects the substitution of I(0,c3,0), the scattering intensity in the limit of zero scattering angle and solute concentration, for I(0,0,0), the corresponding parameter in the limit of zero cosolute concentration (c3) as well. Published static light scattering results on the dependence of the apparent molecular weight of ovalbumin on buffer concentration are extrapolated to zero concentration to obtain the true value (M2) and thereby establish the feasibility of obtaining the analogous SAXS parameter, I(0,0,0), experimentally. Copyright © 2013 Elsevier Inc. All rights reserved.
On radiative heat transfer in stagnation point flow of MHD Carreau fluid over a stretched surface
NASA Astrophysics Data System (ADS)
Khan, Masood; Sardar, Humara; Mudassar Gulzar, M.
2018-03-01
This paper investigates the behavior of MHD stagnation point flow of a Carreau fluid in the presence of infinite shear rate viscosity. Additionally, heat transfer analysis in the presence of non-linear radiation with a convective boundary condition is performed. Moreover, the effect of Joule heating is examined, and the mathematical analysis is presented in the presence of viscous dissipation. Suitable transformations are employed to convert the governing partial differential equations into a set of ordinary differential equations. The resulting nonlinear ordinary differential equations are solved numerically by an effective numerical approach, namely the Runge-Kutta-Fehlberg method combined with a shooting technique. It is found that higher values of the Hartmann number (M) correspond to thickening of the thermal and thinning of the momentum boundary layer. The analysis further reveals that the fluid velocity is diminished by increasing the viscosity ratio parameter (β∗), and the opposite trend is observed for the temperature profile for both hydrodynamic and hydromagnetic flows. In addition, the momentum boundary layer thickness increases with the velocity ratio parameter (α), and the opposite is true for the thermal boundary layer thickness.
26 CFR 1.507-7 - Value of assets.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Value of assets. 1.507-7 Section 1.507-7 Internal... TAXES (CONTINUED) Private Foundations § 1.507-7 Value of assets. (a) In general. For purposes of section 507(c), the value of the net assets shall be determined at whichever time such value is higher: (1...
Al-Masri, M R
2003-04-01
In vitro organic matter apparent digestibility (IVOMAD), true digestibility (IVOMTD), metabolizable energy (ME), net energy lactation (NEL), microbial nitrogen (MN) and synthesis of microbial biomass (MBM) were estimated to predict the nutritive values of some agricultural by-products, drought-tolerant range plants and browses. The relationships between in vitro gas production (GP), and true or apparent digestibility. MN and MBM were studied utilizing an in vitro incubation technique. The values of IVOMAD, IVOMTD, ME, NEL, GP, MBM and MN varied with the studied experimental materials. The true fermentation of the outside part of Atriplex leucoclada produced a higher volume of gas than the middle or the inside parts, and this was associated with an increase in the values of IVOMAD, IVOMTD, ME and NEL. However, screening off the wood from olive cake to obtain olive cake pulp increased the IVOMAD, IVOMTD, ME, NEL and the volume of gas production from the true fermented material. One ml of gas was generated from the true degradation of 5 mg of wheat straw, Moringa oleifera, Alhagi camelorum, Eucaliptus camaldulensis and A. leucoclada, from 11 mg of Prosopsis stephaniana and olive cake pulp, and from 20 mg of olive cake or olive cake wood. The amount of MN or MBM produced from 100 mg of truly fermented organic matter depended on the kind of the fermented material and amounted to 0.7-2.9 mg or 8-34 mg, respectively. Crude fibre was negatively correlated to IVOMAD, IVOMTD, ME and NEL. Gas production was positively correlated to IVOMAD and IVOMTD but negatively correlated to MBM and MN.
[Computer Program PEDAGE -- MARKTF-M5-F4.
ERIC Educational Resources Information Center
Toronto Univ. (Ontario). Dept. of Geology.
The computer program MARKTF-M5, written in FORTRAN IV, scores tests (consisting of true-or-false statements about concepts or facts) by comparing the list of true or false values prepared by the instructor with those from the students. The output consists of information to the supervisor about the performance of the students, primarily for his…
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.
On Correlated-noise Analyses Applied to Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison
2017-01-01
Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound in estimating the uncertainty of parameter estimates. We thus recommend refraining from this method altogether. We characterize the behavior of the time-averaging method's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ~45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions of the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
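A minimal sketch of the time-averaging test: bin the residuals, compute the rms of the binned means, and compare with the white-noise expectation rms(1)/sqrt(N) (with the usual finite-bin correction). The red-noise residuals generated here are purely illustrative, and the helper function is hypothetical, not part of MC3:

```python
import numpy as np

def rms_vs_binsize(residuals, bin_sizes):
    """rms of bin-averaged residuals per bin size, plus the white-noise expectation."""
    rms1 = np.sqrt(np.mean(residuals ** 2))
    observed, expected = [], []
    for n in bin_sizes:
        nbins = len(residuals) // n
        binned = residuals[: nbins * n].reshape(nbins, n).mean(axis=1)
        observed.append(np.sqrt(np.mean(binned ** 2)))
        expected.append(rms1 / np.sqrt(n) * np.sqrt(nbins / max(nbins - 1, 1)))
    return np.array(observed), np.array(expected)

# Usage: residuals with an injected low-frequency (correlated) component.
rng = np.random.default_rng(4)
t = np.arange(4096)
res = rng.normal(0.0, 1.0, t.size) + 0.3 * np.sin(2 * np.pi * t / 500)
obs, exp = rms_vs_binsize(res, bin_sizes=[1, 2, 4, 8, 16, 32, 64, 128])
print(np.round(obs / exp, 2))   # ratios well above 1 indicate time-correlated noise
```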
Navarro, Freddy; Ramírez-Sarmiento, César A; Guixé, Victoria
2013-10-01
Pyridoxal 5'-phosphate is the active form of vitamin B6 and its deficiency is directly related to several human disorders, which makes human pyridoxal kinase (hPLK) an important pharmacologic target. In spite of this, a careful kinetic characterization of hPLK that includes the main species regulating the enzymatic activity is to date missing. Here we analyse the catalytic and regulatory mechanisms of hPLK as a function of a precise determination of the species involved in metal-nucleotide equilibria and describe new regulatory mechanisms for this enzyme. hPLK activity is supported by several metals, with Zn(2+) being the most effective, although the magnitude of the effect observed is highly dependent on the relative concentrations of metal and nucleotide used. The true substrate for the reaction catalyzed by hPLK is the metal-nucleotide complex, while ATP(4-) and HATP(3-) did not affect the activity. The enzyme presents substrate inhibition by both pyridoxal (PL) and ZnATP(2-), although the latter behaves as a weak inhibitor. Our study also established, for the first time, a dual role for free Zn(2+): as an activator at low concentrations (19 μM optimal concentration) and as a potent inhibitor with an IC50 of 37 μM. These results highlight the importance of an accurate estimation of the actual concentrations of the species involved in metal-nucleotide equilibria in order to obtain reliable values for the kinetic parameters and to determine the true regulators of PLK activity. They also help to explain the dissimilar kinetic parameters reported in the literature for this enzyme.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Can I conduct short-term experimental... I conduct short-term experimental production runs that cause parameters to deviate from operating limits? With the approval of the Administrator, you may conduct short-term experimental production runs...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Can I conduct short-term experimental... I conduct short-term experimental production runs that cause parameters to deviate from operating limits? With the approval of the Administrator, you may conduct short-term experimental production runs...
Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H
2015-01-01
Bayesian analysis was used to estimate the pig's and herd's true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA which is used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data of the apparent prevalence (cross-sectional study data), prior true prevalence (first step), and estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimates of sensitivity and specificity values of the ELISA were 79.5% and 96.9%.
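As a simpler frequentist point of comparison to the Bayesian model described above, the Rogan-Gladen estimator converts an apparent prevalence into a true-prevalence estimate given test sensitivity and specificity. The sensitivity and specificity below are the ELISA estimates reported in the abstract, while the apparent prevalence is a placeholder, not the study's data:

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Frequentist true-prevalence estimate, clipped to the unit interval."""
    estimate = (apparent_prevalence + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(estimate, 0.0), 1.0)

# Illustrative apparent prevalence; Se/Sp taken from the reported ELISA estimates.
print(rogan_gladen(apparent_prevalence=0.55, sensitivity=0.795, specificity=0.969))
```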
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Singh, Sudhir Kumar; Jacintha, T. German Amali; Nemčić-Jurec, Jasna; Tripathi, Vinod Kumar
2017-12-01
A review has been made to understand the hydrogeochemical behaviour of groundwater through statistical analysis of long-term water quality data (years 2005-2013). The Water Quality Index (WQI), descriptive statistics, Hurst exponent, fractal dimension and predictability index were estimated for each water parameter. WQI results showed that the majority of samples fall in the moderate category during 2005-2013, but monitoring site four falls under the severe category (water unfit for domestic use). Brownian time series behaviour (a true random walk nature) exists between calcium (Ca2+) and electric conductivity (EC); magnesium (Mg2+) with EC; sodium (Na+) with EC; sulphate (SO42-) with EC; and total dissolved solids (TDS) with chloride (Cl-) during the pre- (2005-2013) and post- (2006-2013) monsoon seasons. These parameters have Hurst exponent (H) values close to the Brownian time series condition (H = 0.5). The results of the time series analysis of the water quality data show a persistent behaviour (a positive autocorrelation) between Cl- and Mg2+, Cl- and Ca2+, TDS and Na+, TDS and SO42-, and TDS and Ca2+ in the pre- and post-monsoon time series because of the higher value of H (>1), whereas an anti-persistent behaviour (a negative autocorrelation) was found between Cl- and EC and between TDS and EC during pre- and post-monsoon due to the low value of H. The work shows that the groundwater of a few areas needs treatment before direct consumption, and it also needs to be protected from contamination.
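A minimal rescaled-range (R/S) sketch for estimating the Hurst exponent of a water-quality time series; this is one common estimator under simplifying assumptions (non-overlapping windows, log-log regression), not necessarily the method used in the study, and the white-noise test series is illustrative:

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent from the slope of log(R/S) versus log(window size)."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            window = series[start:start + n]
            dev = np.cumsum(window - window.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = window.std(ddof=1)               # standard deviation of the window
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope                                  # H near 0.5: random walk; above 0.5: persistent

rng = np.random.default_rng(5)
print(hurst_rs(rng.normal(size=1024)))            # white noise should give H close to 0.5
```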
7 CFR 1131.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1131.60 Section 1131.60... Handling Uniform Prices § 1131.60 Handler's value of milk. For the purpose of computing a handler's obligation for producer milk, the market administrator shall determine for each month the value of milk of...
7 CFR 1030.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1030.60 Section 1030.60... Regulating Handling Producer Price Differential § 1030.60 Handler's value of milk. For the purpose of... the value of milk of each handler with respect to each of the handler's pool plants and of each...
7 CFR 1124.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1124.60 Section 1124.60... Regulating Handling Producer Price Differential § 1124.60 Handler's value of milk. For the purpose of... the value of milk of each handler with respect to each of the handler's pool plants and of each...
The true invariant of an arbitrage free portfolio
NASA Astrophysics Data System (ADS)
Schmidt, Anatoly B.
2003-03-01
It is shown that the arbitrage free portfolio paradigm, applied to a portfolio with an arbitrary number of shares N, allows for the extended solution in which the option price F depends on N. However, the resulting stock hedging expense Q = MF (where M is the number of options in the portfolio) does not depend on whether N is treated as an independent variable or as a parameter. Therefore, the stock hedging expense is the true invariant of the arbitrage free portfolio paradigm.
Horton, G.E.; Letcher, B.H.
2008-01-01
The inability to account for the availability of individuals in the study area during capture-mark-recapture (CMR) studies and the resultant confounding of parameter estimates can make correct interpretation of CMR model parameter estimates difficult. Although important advances based on the Cormack-Jolly-Seber (CJS) model have resulted in estimators of true survival that work by unconfounding either death or recapture probability from availability for capture in the study area, these methods rely on the researcher's ability to select a method that is correctly matched to emigration patterns in the population. If incorrect assumptions regarding site fidelity (non-movement) are made, it may be difficult or impossible as well as costly to change the study design once the incorrect assumption is discovered. Subtleties in characteristics of movement (e.g. life history-dependent emigration, nomads vs territory holders) can lead to mixtures in the probability of being available for capture among members of the same population. The result of these mixtures may be only a partial unconfounding of emigration from other CMR model parameters. Biologically-based differences in individual movement can combine with constraints on study design to further complicate the problem. Because of the intricacies of movement and its interaction with other parameters in CMR models, quantification of and solutions to these problems are needed. Based on our work with stream-dwelling populations of Atlantic salmon Salmo salar, we used a simulation approach to evaluate existing CMR models under various mixtures of movement probabilities. The Barker joint data model provided unbiased estimates of true survival under all conditions tested. The CJS and robust design models provided similarly unbiased estimates of true survival but only when emigration information could be incorporated directly into individual encounter histories. For the robust design model, Markovian emigration (future availability for capture depends on an individual's current location) was a difficult emigration pattern to detect unless survival and especially recapture probability were high. Additionally, when local movement was high relative to study area boundaries and movement became more diffuse (e.g. a random walk), local movement and permanent emigration were difficult to distinguish and had consequences for correctly interpreting the survival parameter being estimated (apparent survival vs true survival). © 2008 The Authors.
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.
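For reference, the experimental Φ value that the model's predictions are compared against is conventionally defined from the mutation-induced changes in the activation and equilibrium folding free energies (standard background, not an equation quoted from the paper); Φ ≈ 1 indicates a native-like environment of the mutated site in the transition state, while Φ ≈ 0 indicates an unstructured site:

```latex
\Phi = \frac{\Delta\Delta G^{\ddagger}}{\Delta\Delta G_{\mathrm{folding}}}
     = \frac{\Delta G^{\ddagger}_{\mathrm{mut}} - \Delta G^{\ddagger}_{\mathrm{wt}}}
            {\Delta G_{\mathrm{mut}} - \Delta G_{\mathrm{wt}}}
```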
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding. PMID:28409079
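For readers unfamiliar with how experimental Φ values are obtained, the short Python sketch below computes one from folding rate constants and an equilibrium stability change, using the standard ratio Φ = ΔΔG(transition state)/ΔΔG(folded state); the rate constants and stability change are invented for illustration and are not data from this study.
    # Sketch: experimental Phi value from folding kinetics and equilibrium stability data.
    import numpy as np

    R = 8.314e-3   # kJ mol^-1 K^-1
    T = 298.0      # K

    def phi_value(kf_wildtype, kf_mutant, ddG_folding_kJ):
        """Phi = ddG(transition state) / ddG(folded state), assuming the usual kinetic definition."""
        ddG_ts = R * T * np.log(kf_wildtype / kf_mutant)   # destabilization of the transition state
        return ddG_ts / ddG_folding_kJ

    # Hypothetical mutation: folding slows from 100 s^-1 to 20 s^-1, stability drops by 6 kJ/mol.
    print(phi_value(kf_wildtype=100.0, kf_mutant=20.0, ddG_folding_kJ=6.0))   # ~0.66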
Nominal Values for Selected Solar and Planetary Quantities: IAU 2015 Resolution B3
NASA Astrophysics Data System (ADS)
Prša, Andrej; Harmanec, Petr; Torres, Guillermo; Mamajek, Eric; Asplund, Martin; Capitaine, Nicole; Christensen-Dalsgaard, Jørgen; Depagne, Éric; Haberreiter, Margit; Hekker, Saskia; Hilton, James; Kopp, Greg; Kostov, Veselin; Kurtz, Donald W.; Laskar, Jacques; Mason, Brian D.; Milone, Eugene F.; Montgomery, Michele; Richards, Mercedes; Schmutz, Werner; Schou, Jesper; Stewart, Susan G.
2016-08-01
In this brief communication we provide the rationale for and the outcome of the International Astronomical Union (IAU) resolution vote at the XXIXth General Assembly in Honolulu, Hawaii, in 2015, on recommended nominal conversion constants for selected solar and planetary properties. The problem addressed by the resolution is a lack of established conversion constants between solar and planetary values and SI units: a missing standard has caused a proliferation of solar values (e.g., solar radius, solar irradiance, solar luminosity, solar effective temperature, and solar mass parameter) in the literature, with cited solar values typically based on best estimates at the time of paper writing. As precision of observations increases, a set of consistent values becomes increasingly important. To address this, an IAU Working Group on Nominal Units for Stellar and Planetary Astronomy formed in 2011, uniting experts from the solar, stellar, planetary, exoplanetary, and fundamental astronomy, as well as from general standards fields to converge on optimal values for nominal conversion constants. The effort resulted in the IAU 2015 Resolution B3, passed at the IAU General Assembly by a large majority. The resolution recommends the use of nominal solar and planetary values, which are by definition exact and are expressed in SI units. These nominal values should be understood as conversion factors only, not as the true solar/planetary properties or current best estimates. Authors and journal editors are urged to join in using the standard values set forth by this resolution in future work and publications to help minimize further confusion.
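To make the intended use concrete, the Python sketch below treats a few nominal constants as exact conversion factors when converting quantities quoted in solar units to SI; the numerical values are quoted here from memory of Resolution B3 and the helper function is purely illustrative, so both should be checked against the official resolution text.
    # Illustrative use of IAU 2015 Resolution B3 nominal values as exact conversion
    # factors (values quoted from memory; verify against the resolution text before use).
    R_SUN_NOMINAL = 6.957e8        # nominal solar radius [m]
    S_SUN_NOMINAL = 1361.0         # nominal total solar irradiance [W m^-2]
    L_SUN_NOMINAL = 3.828e26       # nominal solar luminosity [W]
    TEFF_SUN_NOMINAL = 5772.0      # nominal solar effective temperature [K]
    GM_SUN_NOMINAL = 1.3271244e20  # nominal solar mass parameter [m^3 s^-2]

    def radius_to_si(radius_in_nominal_solar_radii):
        """Convert a radius quoted in nominal solar radii to metres."""
        return radius_in_nominal_solar_radii * R_SUN_NOMINAL

    print(radius_to_si(1.7))  # a 1.7 R_Sun star corresponds to 1.18269e9 m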
Lucky Belief in Science Education - Gettier Cases and the Value of Reliable Belief-Forming Processes
NASA Astrophysics Data System (ADS)
Brock, Richard
2018-05-01
The conceptualisation of knowledge as justified true belief has been shown to be, at the very least, an incomplete account. One challenge to the justified true belief model arises from the proposition of situations in which a person possesses a belief that is both justified and true which some philosophers intuit should not be classified as knowledge. Though situations of this type have been imagined by a number of writers, they have come to be labelled Gettier cases. Gettier cases arise when a fallible justification happens to lead to a true belief in one context, a case of `lucky belief'. In this article, it is argued that students studying science may make claims that resemble Gettier cases. In some contexts, a student may make a claim that is both justified and true but which arises from an alternative conception of a scientific concept. A number of instances of lucky belief in topics in science education are considered leading to an examination of the criteria teachers use to assess students' claims in different contexts. The possibility of lucky belief leads to the proposal that, in addition to the acquisition of justified true beliefs, the development of reliable belief-forming processes is a significant goal of science education. The pedagogic value of various kinds of claims is considered and, it is argued, the criteria used to judge claims may be adjusted to suit the context of assessment. It is suggested that teachers should be alert to instances of lucky belief that mask alternative conceptions.
Symmetry breaking by bifundamentals
NASA Astrophysics Data System (ADS)
Schellekens, A. N.
2018-03-01
We derive all possible symmetry breaking patterns for all possible Higgs fields that can occur in intersecting brane models: bifundamentals and rank-2 tensors. This is a field-theoretic problem that was already partially solved in 1973 by Ling-Fong Li [1]. In that paper the solution was given for rank-2 tensors of orthogonal and unitary groups, and for U(N)×U(M) and O(N)×O(M) bifundamentals. We extend this first of all to symplectic groups. When formulated correctly, this turns out to be a straightforward generalization of the previous results from real and complex numbers to quaternions. The extension to mixed bifundamentals is more challenging and interesting. The scalar potential has up to six real parameters. Its minima or saddle points are described by block-diagonal matrices built out of K blocks of size p×q. Here p = q = 1 for the solutions of Ling-Fong Li, and the number of possibilities for p×q is equal to the number of real parameters in the potential, minus 1. The maximum block size is p×q = 2×4. Different blocks cannot be combined, and the true minimum occurs for one choice of basic block, and for either K = 1 or K maximal, depending on the parameter values.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong, and it may fail to account for overdispersion, i.e., the variability of the rate parameter that makes the variance exceed the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
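As a hedged illustration of the kind of check described above (not the authors' code), the sketch below fits a Poisson GLM with statsmodels, gauges overdispersion with the Pearson dispersion statistic, and refits with robust standard errors and a negative binomial family; the simulated counts and variable names are hypothetical.
    # Sketch: detecting and correcting overdispersion in a Poisson rate model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    exposure = rng.uniform(0.5, 2.0, size=n)        # hypothetical person-time at risk
    mu = exposure * np.exp(0.2 + 0.5 * x)
    y = rng.negative_binomial(n=2, p=2 / (2 + mu))  # deliberately overdispersed counts

    poisson_fit = sm.GLM(y, X, family=sm.families.Poisson(), exposure=exposure).fit()
    dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
    print("Pearson dispersion statistic:", dispersion)  # ~1 if Poisson holds, >1 if overdispersed

    # Two simple corrections when the dispersion is clearly above 1:
    robust_fit = sm.GLM(y, X, family=sm.families.Poisson(),
                        exposure=exposure).fit(cov_type="HC0")   # sandwich standard errors
    nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(),
                    exposure=exposure).fit()                     # models the extra variance directly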
Shim, Woo Hyun; Kim, Ho Sung; Choi, Choong-Gon; Kim, Sang Joon
2015-01-01
Brain tumor cellularity has been assessed by using the apparent diffusion coefficient (ADC). However, the ADC value might be influenced by both perfusion and true molecular diffusion, and the perfusion effect on ADC can limit the reliability of ADC in the characterization of tumor cellularity, especially in hypervascular brain tumors. In contrast, the IVIM technique estimates parameter values for diffusion and perfusion effects separately. The purpose of our study was to compare ADC and IVIM for differentiating among glioblastoma, metastatic tumor, and primary CNS lymphoma (PCNSL), focusing on the diffusion-related parameter. We retrospectively reviewed the data of 128 patients with pathologically confirmed glioblastoma (n = 55), metastasis (n = 31), and PCNSL (n = 42) prior to any treatment. Two neuroradiologists independently calculated the maximum IVIM-f (fmax) and minimum IVIM-D (Dmin) by using 16 different b-values with a bi-exponential fitting of diffusion signal decay, the minimum ADC (ADCmin) by using 0 and 1000 b-values with a mono-exponential fitting, and the maximum normalized cerebral blood volume (nCBVmax). The differences in fmax, Dmin, nCBVmax, and ADCmin among the three tumor pathologies were determined by one-way ANOVA with multiple comparisons. The fmax and Dmin were correlated to the corresponding nCBV and ADC, respectively, using partial correlation analysis. Using a mono-exponential fitting of diffusion signal decay, the mean ADCmin was significantly lower in PCNSL than in glioblastoma and metastasis. However, using a bi-exponential fitting, the mean Dmin did not significantly differ among the three groups. The mean fmax was significantly higher in the glioblastomas (reader 1, 0.103; reader 2, 0.109) and the metastases (reader 1, 0.105; reader 2, 0.107) than in the primary CNS lymphomas (reader 1, 0.025; reader 2, 0.023) (P < .001 for each). The correlation between fmax and the corresponding nCBV was highest in the glioblastoma group, and the correlation between Dmin and the corresponding ADC was highest in the primary CNS lymphoma group. Unlike the ADC value derived from a mono-exponential fitting of the diffusion signal, the diffusion-related parameter value derived from a bi-exponential fitting with separation of the perfusion effect does not differ among glioblastoma, metastasis, and PCNSL.
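For readers unfamiliar with the bi-exponential IVIM model referred to above, the sketch below fits S(b) = S0[f·exp(-b·D*) + (1-f)·exp(-b·D)] to a synthetic signal-decay curve with scipy; the b-values, noise level and starting values are illustrative assumptions, not the acquisition settings of this study.
    # Sketch: bi-exponential IVIM fit separating perfusion (f, D*) from true diffusion (D).
    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, s0, f, d_star, d):
        return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

    b_values = np.array([0, 10, 20, 40, 80, 120, 200, 400, 600, 800, 1000], float)  # s/mm^2
    signal = ivim(b_values, 1.0, 0.10, 0.02, 0.0008)                 # illustrative parameters
    signal += np.random.default_rng(1).normal(0, 0.005, signal.size)

    popt, _ = curve_fit(ivim, b_values, signal, p0=(1.0, 0.1, 0.01, 0.001),
                        bounds=([0, 0, 0, 0], [2, 1, 1, 0.01]))
    s0_fit, f_fit, d_star_fit, d_fit = popt   # d_fit plays the role of the "true diffusion" D here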
Determining the Full Halo Coronal Mass Ejection Characteristics
NASA Astrophysics Data System (ADS)
Fainshtein, V. G.
2010-11-01
Observing halo coronal mass ejections (HCMEs) in the coronagraph field of view allows one to determine only the apparent parameters in the plane of the sky. Recently, several methods have been proposed allowing one to find some true geometrical and kinematical parameters of HCMEs. In most cases, a simple cone model was used to describe the CME shape. Observations show that various modifications of the cone model ("ice cream models") are most appropriate for describing the shapes of individual CMEs. This paper uses the method for determining full HCME parameters proposed by the author earlier to determine the parameters of 45 full HCMEs with various modifications of their shapes. I show that the determined CME characteristics depend significantly on the chosen CME shape. I conclude that the absence of criteria for a preliminary evaluation of the CME shape is a major source of error in determining the true parameters of a full HCME with any of the known methods. I show that, regardless of the chosen CME form, the trajectories of practically all the HCMEs in question deviate from the radial direction towards the Sun-Earth axis at the initial stage of their movement, and their angular size, on average, significantly exceeds that of all the observable CMEs.
Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.
2016-01-01
There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2014-06-01
We give evidence for the Big Fix. The theory of wormholes and the multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how this principle can be confirmed by experimental data, and we show that it is indeed true for the Higgs vacuum expectation value v_h. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self-coupling are held fixed when we vary v_h. It turns out that the existence of atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; in our case, however, it is required by the fundamental law.
Application of the SEM to the measurement of solar cell parameters
NASA Technical Reports Server (NTRS)
Weizer, V. G.; Andrews, C. W.
1977-01-01
A pair of techniques are described which make use of the SEM to measure, respectively, the minority carrier diffusion length and the metallurgical junction depth in silicon solar cells. The former technique permits the measurement of the true bulk diffusion length through the application of highly doped field layers to the back surfaces of the cells being investigated. The technique yields an absolute value of the diffusion length from a knowledge of the collected fraction of the injected carriers and the cell thickness. It is shown that the secondary emission contrast observed in the SEM on a reverse-biased diode can depict the location of the metallurgical junction if the diode has been prepared with the proper beveled geometry. The SEM provides the required contrast and the option of high magnification, permitting the measurement of extremely shallow junction depths.
Bouzid, A; Kehila, M; Trabelsi, H; Abouda, H S; Ben Hmid, R; Chanoufi, M B
2017-04-01
To evaluate the ability of clinical parameters and ultrasound examination to differentiate "false labor" from "true labor". In a prospective study conducted over a period of 6 months, a total of 178 patients at term (37-41 weeks) presenting to our obstetric unit with uterine contractions were enrolled. Patients were examined separately by a midwife and a resident and assigned to a "true labor" group or a "false labor" group. The clinical characteristics of true versus false labor patients were compared. ROC curves were developed to determine optimal cervical length (CL) and uterocervical angle (UCA) cut-offs for the prediction of true labor. The prevalence of real labor was 57.3%. Patients who were in true labor had more painful and more frequent contractions. The "true labor" group had a shorter cervical length and a larger uterocervical angle. The optimal CL cut-off was 1.4 mm with a specificity of 73% (RR 4.3, sensitivity 63%, PPV 14%, NPV 95%). The optimal UCA cut-off was 123° (RR 6.7, sensitivity 50%, specificity 83%, PPV 10%, NPV 96%). The best performance was demonstrated by combined testing, yielding an LHR+ that reached 13. In this study, we reported a new application of ultrasound to identify false labor and avoid unnecessary hospitalization, with its adverse obstetric and economic impacts. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
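As an aside on methodology, the sketch below shows one common way such ROC-based cut-offs are chosen, by maximizing Youden's J (sensitivity + specificity - 1) over candidate thresholds; the simulated uterocervical angles and group sizes are placeholders, not the study data.
    # Sketch: picking an "optimal" threshold from a ROC curve with Youden's J.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    labor = np.r_[np.ones(102), np.zeros(76)]                          # 1 = true labor (illustrative split)
    angle = np.r_[rng.normal(130, 15, 102), rng.normal(110, 15, 76)]   # hypothetical UCA values, degrees

    fpr, tpr, thresholds = roc_curve(labor, angle)
    best = np.argmax(tpr - fpr)                                        # Youden's J = sens + spec - 1
    print("cut-off:", thresholds[best], "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])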
Electrical and magnetic properties of rock and soil
Scott, J.H.
1983-01-01
Field and laboratory measurements have been made to determine the electrical conductivity, dielectric constant, and magnetic permeability of rock and soil in areas of interest in studies of electromagnetic pulse propagation. Conductivity is determined by making field measurements of apparent resistivity at very low frequencies (0-20 cps), and interpreting the true resistivity of layers at various depths by curve-matching methods. Interpreted resistivity values are converted to corresponding conductivity values which are assumed to be applicable at 10^2 cps, an assumption which is considered valid because the conductivity of rock and soil is nearly constant at frequencies below 10^2 cps. Conductivity is estimated at higher frequencies (up to 10^6 cps) by using statistical correlations of three parameters obtained from laboratory measurements of rock and soil samples: conductivity at 10^2 cps, frequency, and conductivity measured over the range 10^2 to 10^6 cps. Conductivity may also be estimated in this frequency range by using field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and conductivity measured over the range 10^2 to 10^6 cps. This method is less accurate because nonrandom variation of ion concentration in natural pore water introduces error. Dielectric constant is estimated in a similar manner from field-derived conductivity values applicable at 10^2 cps and statistical correlations of three parameters obtained from laboratory measurements of samples: conductivity measured at 10^2 cps, frequency, and dielectric constant measured over the frequency range 10^2 to 10^6 cps. Dielectric constant may also be estimated from field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and dielectric constant measured from 10^2 to 10^6 cps, but again, this method is less accurate because of variation of ion concentration of pore water. Special laboratory procedures are used to measure conductivity and dielectric constant of rock and soil samples. Electrode polarization errors are minimized by using an electrode system that is electrochemically reversible with ions in the pore water.
Lazoura, Olga; Ismail, Tevfik F; Pavitt, Christopher; Lindsay, Alistair; Sriharan, Mona; Rubens, Michael; Padley, Simon; Duncan, Alison; Wong, Tom; Nicol, Edward
2016-02-01
Assessment of the left atrial appendage (LAA) for thrombus and anatomy is important prior to atrial fibrillation (AF) ablation and LAA exclusion. The use of cardiovascular CT (CCT) to detect LAA thrombus has been limited by the high incidence of pseudothrombus on single-pass studies. We evaluated the diagnostic accuracy of a two-phase protocol incorporating a limited low-dose delayed contrast-enhanced examination of the LAA, compared with a single-pass study for LAA morphological assessment, and transesophageal echocardiography (TEE) for the exclusion of thrombus. Consecutive patients (n = 122) undergoing left atrial interventions for AF were assessed. All had a two-phase CCT protocol (first-pass scan plus a limited, 60-s delayed scan of the LAA) and TEE. Sensitivity, specificity, diagnostic accuracy, positive (PPV) and negative predictive values (NPV) were calculated for the detection of true thrombus on first-pass and delayed scans, using TEE as the gold standard. Overall, 20/122 (16.4 %) patients had filling defects on the first-pass study. All affected the full delineation of the LAA morphology; 17/20 (85 %) were confirmed as pseudo-filling defects. Three (15 %) were seen on the late-pass scan and confirmed as true thrombi on TEE; a significant improvement in diagnostic performance relative to a single-pass scan (McNemar Chi-square 17, p < 0.001). The sensitivity, specificity, diagnostic accuracy, PPV and NPV were 100, 85.7, 86.1, 15.0 and 100 % respectively for first-pass scans, and 100 % for all parameters for the delayed scans. The median (range) additional radiation dose for the delayed scan was 0.4 (0.2-0.6) mSv. A low-dose delayed scan significantly improves the identification of true LAA anatomy and thrombus in patients undergoing LA intervention.
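For clarity on how the figures quoted above follow from the 2×2 comparison against TEE, here is a small sketch that reproduces them from the reported counts (3 true thrombi, 17 pseudo-filling defects and 102 remaining patients on the first-pass scan); the helper function is illustrative only.
    # Sketch: diagnostic performance from a 2x2 table against the gold standard (TEE).
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, specificity, ppv, npv, accuracy

    # First-pass scan: 3 true positives, 17 false positives, 0 false negatives, 102 true negatives.
    print(diagnostic_metrics(tp=3, fp=17, fn=0, tn=102))
    # -> (1.0, 0.857, 0.15, 1.0, 0.861), matching the 100/85.7/15/100/86.1 % figures above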
Tunable Optical True-Time Delay Devices Would Exploit EIT
NASA Technical Reports Server (NTRS)
Kulikov, Igor; DiDomenico, Leo; Lee, Hwang
2004-01-01
Tunable optical true-time delay devices that would exploit electromagnetically induced transparency (EIT) have been proposed. Relative to prior true-time delay devices (for example, devices based on ferroelectric and ferromagnetic materials) and electronically controlled phase shifters, the proposed devices would offer much greater bandwidths. In a typical envisioned application, an optical pulse would be modulated with an ultra-wideband radio-frequency (RF) signal that would convey the information that one seeks to communicate, and it would be required to couple differently delayed replicas of the RF signal to the radiating elements of a phased-array antenna. One or more of the proposed devices would be used to impose the delays and/or generate the delayed replicas of the RF-modulated optical pulse. The beam radiated or received by the antenna would be steered by use of a microprocessor-based control system that would adjust operational parameters of the devices to tune the delays to the required values. EIT is a nonlinear quantum optical interference effect that enables the propagation of light through an initially opaque medium. A suitable medium must have, among other properties, three quantum states (see Figure 1): an excited state (state 3), an upper ground state (state 2), and a lower ground state (state 1). These three states must form a closed system that exhibits no decays to other states in the presence of either or both of two laser beams: (1) a probe beam having the wavelength corresponding to the photon energy equal to the energy difference between states 3 and 1; and (2) a coupling beam having the wavelength corresponding to the photon energy equal to the energy difference between states 3 and 2. The probe beam is the one that is pulsed and modulated with an RF signal.
Resolution Quality and Atom Positions in Sub-Angstrom Electron Microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Keefe, Michael A.; Allard, Lawrence F.; Blom, Douglas A.
2005-02-15
Ability to determine whether an image peak represents one single atom or several depends on resolution of the HR-(S)TEM. Rayleigh's resolution criterion, an accepted standard in optics, was derived as a means for judging when two image intensity peaks from two sources of light (stars) are distinguishable from a single source. Atom spacings closer than the Rayleigh limit have been resolved in HR-TEM, suggesting that it may be useful to consider other limits, such as the Sparrow resolution criterion. From the viewpoint of the materials scientist, it is important to be able to use the image to determine whether an image feature represents one or more atoms (resolution), and where the atoms (or atom columns) are positioned relative to one another (resolution quality). When atoms and the corresponding image peaks are separated by more than the Rayleigh limit of the HR-(S)TEM, it is possible to adjust imaging parameters so that relative peak positions in the image correspond to relative atom positions in the specimen. When atoms are closer than the Rayleigh limit, we must find the relationship of the peak position to the atom position by peak fitting or, if we have a suitable model, by image simulation. Our Rayleigh-Sparrow parameter QRS reveals the "resolution quality" of a microscope image. QRS values greater than 1 indicate a clearly resolved twin peak, while values between 1 and 0 mean a lower-quality resolution and an image with peaks displaced from the relative atom positions. The depth of the twin-peak minimum can be used to determine the value of QRS and the true separation of the atom peaks that sum to produce the twin peak in the image. The Rayleigh-Sparrow parameter can be used to refine relative atom positions in defect images where atoms are closer than the Rayleigh limit of the HR-(S)TEM, reducing the necessity for full image simulations from large defect models.
Li, F P; Wang, H; Hou, J; Tang, J; Lu, Q; Wang, L L; Yu, X P
2018-05-03
To investigate the utility of intravoxel incoherent motion diffusion-weighted imaging (IVIM-DWI) in predicting the early response to concurrent chemoradiotherapy (CRT) in oesophageal squamous cell carcinoma (OSCC). Thirty-three patients with OSCC who received CRT underwent IVIM-DWI at three time points (before CRT, at the end of radiotherapy 20 Gy, and immediately after CRT). After CRT, the patients were divided into the responders (complete response or partial response) and the non-responders (stable disease) based on RECIST 1.1. The IVIM-DWI parameter (apparent diffusion coefficient [ADC], true diffusion coefficient [D], pseudo-diffusion coefficient [D*], and perfusion fraction [f]) values and their percentage changes (Δvalue) at different time points were compared between the responders and the non-responders. Receiver-operating characteristic (ROC) curve analysis was used to determine the efficacy of the IVIM-DWI parameters in identifying the response to CRT. The tumour regression ratio showed negative correlations with ADCpre (r=-0.610, p=0.000), ADC20Gy (r=-0.518, p=0.002), Dpre (r=-0.584, p=0.000), and D20Gy (r=-0.454, p=0.008), and positive correlations with ΔD20Gy (r=0.361, p=0.039) and ΔDpost (r=0.626, p=0.000). Compared to the non-responders, the responders exhibited lower ADCpre, Dpre, ADC20Gy, and D20Gy, as well as higher ΔADC20Gy, ΔD20Gy, and ΔDpost (all p<0.05). Dpre had the highest sensitivity (92.9%) and area under the ROC curve (0.865) in differentiating the responders from the non-responders. Diffusion-related IVIM-DWI parameters (ADC and D) are potentially helpful in predicting the early treatment effect of CRT in OSCC. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and each of the three imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were captured by these intervals 93-96% of the time. Only 5% of models failed to capture the neighborhood variance; 6% failed to capture the school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Can I conduct short-term experimental... Plan § 63.2990 Can I conduct short-term experimental production runs that cause parameters to deviate from operating limits? With the approval of the Administrator, you may conduct short-term experimental...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Requirements for Installation, Operation, and Maintenance of Continuous Parameter Monitoring Systems 41 Table 41 to Subpart UUU of Part 63... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 41 Table 41 to Subpart UUU of Part 63—Requirements for...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Requirements for Installation, Operation, and Maintenance of Continuous Parameter Monitoring Systems 41 Table 41 to Subpart UUU of Part 63... Reforming Units, and Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 41 Table 41 to Subpart UUU of Part 63...
Herbert, Cornelia; Kübler, Andrea
2011-01-01
The present study investigated event-related brain potentials elicited by true and false negated statements to evaluate if discrimination of the truth value of negated information relies on conscious processing and requires higher-order cognitive processing in healthy subjects across different levels of stimulus complexity. The stimulus material consisted of true and false negated sentences (sentence level) and prime-target expressions (word level). Stimuli were presented acoustically and no overt behavioral response of the participants was required. Event-related brain potentials to target words preceded by true and false negated expressions were analyzed both within group and at the single subject level. Across the different processing conditions (word pairs and sentences), target words elicited a frontal negativity and a late positivity in the time window from 600-1000 msec post target word onset. Amplitudes of both brain potentials varied as a function of the truth value of the negated expressions. Results were confirmed at the single-subject level. In sum, our results support recent suggestions according to which evaluation of the truth value of a negated expression is a time- and cognitively demanding process that cannot be solved automatically, and thus requires conscious processing. Our paradigm provides insight into higher-order processing related to language comprehension and reasoning in healthy subjects. Future studies are needed to evaluate if our paradigm also proves sensitive for the detection of consciousness in non-responsive patients.
7 CFR 1001.60 - Handler's value of milk.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 9 2014-01-01 2013-01-01 true Handler's value of milk. 1001.60 Section 1001.60... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE MILK IN THE NORTHEAST MARKETING AREA Order Regulating Handling Producer Price Differential § 1001.60 Handler's value of milk. For the purpose of computing a...
Postma, E
2006-03-01
The ability to predict individual breeding values in natural populations with known pedigrees has provided a powerful tool to separate phenotypic values into their genetic and environmental components in a nonexperimental setting. This has allowed sophisticated analyses of selection, as well as powerful tests of evolutionary change and differentiation. To date, there has, however, been no evaluation of the reliability or potential limitations of the approach. In this article, I address these gaps. In particular, I emphasize the differences between true and predicted breeding values (PBVs), which as yet have largely been ignored. These differences do, however, have important implications for the interpretation of, firstly, the relationship between PBVs and fitness, and secondly, patterns in PBVs over time. I subsequently present guidelines I believe to be essential in the formulation of the questions addressed in studies using PBVs, and I discuss possibilities for future research.
26 CFR 1.430(h)(3)-1 - Mortality tables used to determine present value.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Mortality tables used to determine present value... Mortality tables used to determine present value. (a) Basis for mortality tables—(1) In general. This section sets forth rules for the mortality tables to be used in determining present value or making any...
Conclusion of LOD-score analysis for family data generated under two-locus models.
Dizier, M H; Babron, M C; Clerget-Darpoux, F
1996-06-01
The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.
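For orientation, the sketch below computes the classical two-point LOD score for phase-known meioses, LOD(θ) = log10[L(θ)/L(1/2)]; the recombinant/non-recombinant counts are invented for illustration and the snippet does not reproduce the two-locus simulations of the study.
    # Sketch: two-point LOD score as a function of the recombination fraction theta.
    import numpy as np

    def lod(theta, recombinants, non_recombinants):
        n = recombinants + non_recombinants
        return (recombinants * np.log10(theta)
                + non_recombinants * np.log10(1.0 - theta)
                - n * np.log10(0.5))

    thetas = np.linspace(0.01, 0.5, 50)
    scores = lod(thetas, recombinants=3, non_recombinants=17)   # hypothetical counts
    print("max LOD:", scores.max().round(2), "at theta =", thetas[scores.argmax()].round(2))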
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; ...
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
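For reference, the delete-one jackknife estimator discussed above can be written in a few lines; the sketch assumes an (N_subsamples × N_bins) array of two-point measurements, each obtained with one sub-region removed, and the placeholder data are random numbers.
    # Sketch: delete-one jackknife covariance of a binned statistic.
    import numpy as np

    def jackknife_covariance(delete_one_samples):
        """delete_one_samples: (N, d) array, row i = statistic measured with region i removed."""
        n = delete_one_samples.shape[0]
        diff = delete_one_samples - delete_one_samples.mean(axis=0)
        return (n - 1.0) / n * diff.T @ diff

    fake = np.random.default_rng(2).normal(size=(40, 5))   # placeholder delete-one measurements
    cov = jackknife_covariance(fake)
    print(cov.shape)                                       # (5, 5)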
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Climatic extremes improve predictions of spatial patterns of tree species
Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.
2009-01-01
Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find a clear, although comparatively small, improvement (+20% in adjusted D², +8% in the cross-validated True Skill Statistic, and +3% in the area under the receiver operating characteristic curve) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability will likely improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.
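For readers less familiar with the True Skill Statistic used above, it is simply sensitivity + specificity - 1 computed from a presence/absence confusion matrix; the counts in the sketch are hypothetical.
    # Sketch: True Skill Statistic (TSS) from a presence/absence confusion matrix.
    def true_skill_statistic(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # correctly predicted presences
        specificity = tn / (tn + fp)   # correctly predicted absences
        return sensitivity + specificity - 1.0

    print(true_skill_statistic(tp=180, fp=60, fn=40, tn=320))  # hypothetical counts -> ~0.66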
GNSS Ephemeris with Graceful Degradation and Measurement Fusion
NASA Technical Reports Server (NTRS)
Garrison, James Levi (Inventor); Walker, Michael Allen (Inventor)
2015-01-01
A method for providing an extended propagation ephemeris model for a satellite in Earth orbit, the method includes obtaining a satellite's orbital position over a first period of time, applying a least-squares estimation filter to determine coefficients defining osculating Keplerian orbital elements and harmonic perturbation parameters associated with a coordinate system defining an extended propagation ephemeris model that can be used to estimate the satellite's position during the first period, wherein the osculating Keplerian orbital elements include the semi-major axis of the satellite (a), eccentricity of the satellite (e), inclination of the satellite (i), right ascension of the ascending node of the satellite (Ω), true anomaly (θ*), and argument of periapsis (ω), applying the least-squares estimation filter to determine a dominant frequency of the true anomaly, and applying a Fourier transform to determine dominant frequencies of the harmonic perturbation parameters.
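As a small illustration of the final step (finding a dominant frequency with a Fourier transform), the sketch below applies an FFT to a uniformly sampled, hypothetical periodic residual series; the sampling interval and signal are invented, and this is not the patented estimation filter itself.
    # Sketch: locating the dominant frequency of a uniformly sampled series with an FFT.
    import numpy as np

    dt = 60.0                                         # s, assumed sample spacing
    t = np.arange(0.0, 86400.0, dt)
    residual = 0.3 * np.sin(2 * np.pi * t / 43200.0)  # placeholder periodic perturbation

    spectrum = np.abs(np.fft.rfft(residual - residual.mean()))
    freqs = np.fft.rfftfreq(residual.size, d=dt)
    dominant = freqs[np.argmax(spectrum)]
    print(dominant, 1.0 / dominant)                   # ~2.3e-5 Hz, i.e. a ~43200 s period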
Calculating Measurement Uncertainty of the “Conventional Value of the Result of Weighing in Air”
Flicker, Celia J.; Tran, Hy D.
2016-04-02
The conventional value of the result of weighing in air is frequently used in commercial calibrations of balances. The guidance in OIML D-028 for reporting uncertainty of the conventional value is too terse. When calibrating mass standards at low measurement uncertainties, it is necessary to perform a buoyancy correction before reporting the result. When calculating the conventional result after calibrating true mass, the uncertainty due to calculating the conventional result is correlated with the buoyancy correction. We show through Monte Carlo simulations that the measurement uncertainty of the conventional result is less than the measurement uncertainty when reporting true mass. The Monte Carlo simulation tool is available in the online version of this article.
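To fix ideas, the sketch below propagates invented input uncertainties through the usual conventional-mass relation (reference density 8000 kg/m³, conventional air density 1.2 kg/m³) with a small Monte Carlo; it is not the authors' simulation tool, and in a full treatment the same density draw would also enter the buoyancy correction of the true-mass calibration, which is the source of the correlation discussed above.
    # Sketch: Monte Carlo propagation into the conventional value of weighing in air.
    import numpy as np

    RHO_AIR_CONV = 1.2    # kg/m^3, conventional air density (assumed convention)
    RHO_REF = 8000.0      # kg/m^3, reference weight density (assumed convention)

    def conventional_mass(true_mass, density):
        return true_mass * (1 - RHO_AIR_CONV / density) / (1 - RHO_AIR_CONV / RHO_REF)

    rng = np.random.default_rng(3)
    n = 100_000
    true_mass = rng.normal(1.0, 2e-8, n)     # kg, invented 20 ug standard uncertainty
    density = rng.normal(7950.0, 70.0, n)    # kg/m^3, invented weight density and uncertainty

    mc = conventional_mass(true_mass, density)
    print("u(conventional value) ~", mc.std(ddof=1), "kg")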
NASA Astrophysics Data System (ADS)
Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.
2017-12-01
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. 2000 realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc) and void ratio (e). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady state conditions. We further propose a data assimilation scheme by means of ensemble Kalman filter to estimate the ensemble mean distribution of K, pore-pressure and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation, and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. The ensemble Kalman filter method is then employed to estimate ensemble mean distribution of K, pore-pressure and total settlement based on the sequential assimilation of these pore-pressure measurements. The ensemble-mean estimates from this procedure closely approximate those from the "true" solution. This procedure can be easily extended to other random variables such as compression index and void ratio.
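A minimal stochastic ensemble Kalman filter analysis step of the kind used above can be written compactly; the sketch assumes an augmented state vector per ensemble member (e.g. log-K values, pore pressures, settlement), uses perturbed observations, and omits the nonlinear consolidation forward model entirely.
    # Sketch: one EnKF analysis step with perturbed observations.
    import numpy as np

    def enkf_update(ensemble, predicted_obs, obs, obs_var, rng):
        """ensemble: (n_ens, n_state); predicted_obs: (n_ens, n_obs); obs_var: (n_obs,)."""
        n_ens = ensemble.shape[0]
        X = ensemble - ensemble.mean(axis=0)
        Y = predicted_obs - predicted_obs.mean(axis=0)
        c_xy = X.T @ Y / (n_ens - 1)                      # state-observation covariance
        c_yy = Y.T @ Y / (n_ens - 1) + np.diag(obs_var)   # observation covariance plus noise
        gain = c_xy @ np.linalg.inv(c_yy)                 # Kalman gain
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=predicted_obs.shape)
        return ensemble + (perturbed - predicted_obs) @ gain.T

    rng = np.random.default_rng(4)
    ens = rng.normal(size=(200, 50))                      # placeholder ensemble of augmented states
    h_ens = ens[:, :5]                                    # pretend the first 5 states are observed pore pressures
    updated = enkf_update(ens, h_ens, obs=np.zeros(5), obs_var=np.full(5, 0.01), rng=rng)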
A study of invisible neutrino decay at DUNE and its effects on θ 23 measurement
NASA Astrophysics Data System (ADS)
Choubey, Sandhya; Goswami, Srubabati; Pramanik, Dipyaman
2018-02-01
We study the consequences of invisible decay of neutrinos in the context of the DUNE experiment. We assume that the third mass eigenstate is unstable and decays to a light sterile neutrino and a scalar or a pseudo-scalar. We consider DUNE running in 5 years neutrino and 5 years antineutrino mode and a detector volume of 40 kt. We obtain the expected sensitivity on the rest-frame life-time τ3 normalized to the mass m3 as τ3/m3 > 4.50 × 10^-11 s/eV at 90% C.L. for a normal hierarchical mass spectrum. We also find that DUNE can discover neutrino decay for τ3/m3 > 4.27 × 10^-11 s/eV at 90% C.L. In addition, for an unstable ν3 with an illustrative value of τ3/m3 = 1.2 × 10^-11 s/eV, the no-decay case could get disfavoured at the 3σ C.L. At 90% C.L. the expected precision range for this true value is obtained as 1.71 × 10^-11 > τ3/m3 > 9.29 × 10^-12 in units of s/eV. We also study the correlation between a non-zero τ3/m3 and standard oscillation parameters and find an interesting correlation in the appearance and disappearance channels with the mixing angle θ23. This alters the octant sensitivity of DUNE, favorably (unfavorably) for true θ23 in the lower (higher) octant. The effect of a decaying neutrino does not alter the hierarchy or CP violation discovery sensitivity of DUNE in a discernible way.
Statistical modelling of growth using a mixed model with orthogonal polynomials.
Suchocki, T; Szyda, J
2011-02-01
In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) were used, and the major goal was to model genetic effects as time-dependent. For this purpose, a mixed model is fitted that describes each effect using third-order Legendre orthogonal polynomials in order to account for the correlation between consecutive measurements. In this model, SNPs are modelled as fixed effects, while the environment is modelled as a random effect. The maximum likelihood estimates of the model parameters are obtained by the expectation-maximisation (EM) algorithm, and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates the effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 of the total of 453 SNPs, covering 16 out of 18 true quantitative trait loci (QTL), were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes in the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.
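To illustrate the random-regression idea, the sketch below builds third-order Legendre covariates on a standardized time axis and evaluates a time-dependent SNP effect from four invented regression coefficients; it is not the EM/likelihood machinery of the study.
    # Sketch: third-order Legendre covariates for a time-dependent SNP effect.
    import numpy as np
    from numpy.polynomial import legendre

    days = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])        # hypothetical recording times
    t = 2 * (days - days.min()) / (days.max() - days.min()) - 1  # rescale to [-1, 1]

    # Design columns = Legendre polynomials P0..P3 evaluated at the standardized times.
    Z = np.column_stack([legendre.legval(t, np.eye(4)[k]) for k in range(4)])

    beta = np.array([0.5, 0.1, -0.05, 0.02])   # illustrative SNP regression coefficients
    snp_effect_over_time = Z @ beta            # the SNP effect traced along the trajectory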
True Density Prediction of Garlic Slices Dehydrated by Convection.
López-Ortiz, Anabel; Rodríguez-Ramírez, Juan; Méndez-Lagunas, Lilia
2016-01-01
Physicochemical parameters with constant values are employed for the mass-heat transfer modeling of the air drying process. However, structural properties are not constant under drying conditions. Empirical, semi-theoretical, and theoretical models have been proposed to describe true density (ρp). These models only consider the ideal behavior and assume a linear relationship between ρp and moisture content (X); nevertheless, some materials exhibit a nonlinear behavior of ρp as a function of X with a tendency toward being concave-down. This behavior, which can be observed in garlic and carrots, has been difficult to model mathematically. This work proposes a semi-theoretical model for predicting ρp values, taking into account the concave-down behavior that occurs at the end of the drying process. The model includes the dependence of the dry solid density (ρs) on external conditions (the air drying temperature (Ta)), on the inside temperature of the garlic slices (Ti), and on the moisture content (X) obtained from experimental data on the drying process. Calculations show that ρs is not a linear function of Ta, X, and Ti. An empirical correlation for ρs is proposed as a function of Ti and X. The adjustment equation for Ti is proposed as a function of Ta and X. The proposed model for ρp was validated using experimental data on sliced garlic and was compared with theoretical and empirical models that are available in the scientific literature. Deviation between the experimental and predicted data was determined. An explanation of the nonlinear behavior of ρs and ρp as functions of X, taking into account second-order phase changes, is then presented. © 2015 Institute of Food Technologists®
Wang, Qian; Song, Enmin; Jin, Renchao; Han, Ping; Wang, Xiaotong; Zhou, Yanying; Zeng, Jianchao
2009-06-01
The aim of this study was to develop a novel algorithm for segmenting lung nodules on three-dimensional (3D) computed tomographic images to improve the performance of computer-aided diagnosis (CAD) systems. The database used in this study consists of two data sets obtained from the Lung Imaging Database Consortium. The first data set, containing 23 nodules (22% irregular nodules, 13% nonsolid nodules, 17% nodules attached to other structures), was used for training. The second data set, containing 64 nodules (37% irregular nodules, 40% nonsolid nodules, 62% nodules attached to other structures), was used for testing. Two key techniques were developed in the segmentation algorithm: (1) a 3D extended dynamic programming model, with a newly defined internal cost function based on the information between adjacent slices, allowing parameters to be adapted to each slice, and (2) a multidirection fusion technique, which makes use of the complementary relationships among different directions to improve the final segmentation accuracy. The performance of this approach was evaluated by the overlap criterion, complemented by the true-positive fraction and the false-positive fraction criteria. The mean values of the overlap, true-positive fraction, and false-positive fraction for the first data set achieved using the segmentation scheme were 66%, 75%, and 15%, respectively, and the corresponding values for the second data set were 58%, 71%, and 22%, respectively. The experimental results indicate that this segmentation scheme can achieve better performance for nodule segmentation than two existing algorithms reported in the literature. The proposed 3D extended dynamic programming model is an effective way to segment sequential images of lung nodules. The proposed multidirection fusion technique is capable of reducing segmentation errors especially for no-nodule and near-end slices, thus resulting in better overall performance.
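For completeness, the sketch below computes overlap, true-positive fraction and false-positive fraction for a pair of binary masks; the overlap is taken here as intersection-over-union and the false positives are expressed relative to the reference volume, which are plausible but assumed readings of the criteria named above.
    # Sketch: voxel-wise evaluation of a segmentation against a reference mask.
    import numpy as np

    def segmentation_metrics(pred, truth):
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        overlap = tp / (tp + fp + fn)   # intersection over union (assumed definition)
        tpf = tp / (tp + fn)            # true-positive fraction
        fpf = fp / (tp + fn)            # false positives relative to reference volume (assumed)
        return overlap, tpf, fpf

    rng = np.random.default_rng(5)
    truth = rng.random((32, 32, 32)) > 0.7   # placeholder reference mask
    pred = rng.random((32, 32, 32)) > 0.7    # placeholder segmentation result
    print(segmentation_metrics(pred, truth))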
Explorations in statistics: hypothesis tests and P values.
Curran-Everett, Douglas
2009-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
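In the spirit of the exploration described above, a tiny example: if the null distribution of the test statistic is standard normal and the observed value is 2.1, the two-sided P value is the proportion of null values at least that extreme.
    # Sketch: P value as a tail area of the null distribution of the test statistic.
    from scipy import stats

    z_observed = 2.1                               # hypothetical standardized test statistic
    p_two_sided = 2 * stats.norm.sf(abs(z_observed))
    print(p_two_sided)                             # ~0.036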
Comparison of the dye method with the thermocouple psychrometer for measuring leaf water potentials.
Knipling, E B; Kramer, P J
1967-10-01
The dye method for measuring water potential was examined and compared with the thermocouple psychrometer method in order to evaluate its usefulness for measuring leaf water potentials of forest trees and common laboratory plants. Psychrometer measurements are assumed to represent the true leaf water potentials. Because of the contamination of test solutions by cell sap and leaf surface residues, dye method values of most species varied about 1 to 5 bars from psychrometer values over the leaf water potential range of 0 to -30 bars. The dye method is useful for measuring changes and relative values in leaf potential. Because of species differences in the relationships of dye method values to true leaf water potentials, dye method values should be interpreted with caution when comparing different species or the same species growing in widely different environments. Despite its limitations the dye method has a usefulness to many workers because it is simple, requires no elaborate equipment, and can be used in both the laboratory and field.
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way to calculate RIs, provided it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
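As a sketch of the two calculation routes compared above, the snippet below estimates a central 95% reference interval from a simulated skewed sample by a bootstrap of the 2.5th/97.5th percentiles and by a parametric interval after a Box-Cox transformation; the data and sample size are illustrative.
    # Sketch: bootstrap versus Box-Cox transformed parametric reference intervals.
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(6)
    sample = rng.lognormal(mean=1.0, sigma=0.3, size=120)   # skewed "analyte" values

    # Non-parametric bootstrap of the 2.5th and 97.5th percentiles.
    boot = rng.choice(sample, size=(2000, sample.size), replace=True)
    ri_bootstrap = np.percentile(boot, [2.5, 97.5], axis=1).mean(axis=1)

    # Parametric interval after Box-Cox transformation, back-transformed to original units.
    transformed, lam = stats.boxcox(sample)
    limits = transformed.mean() + np.array([-1.96, 1.96]) * transformed.std(ddof=1)
    ri_boxcox = inv_boxcox(limits, lam)
    print(ri_bootstrap, ri_boxcox)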
Determination of Geometric and Kinematical Parameters of Coronal Mass Ejections Using STEREO Data
NASA Astrophysics Data System (ADS)
Fainshtein, V. G.; Tsivileva, D. M.; Kashapova, L. K.
2010-03-01
We present a new, relatively simple and fast method to determine true geometric and kinematical CME parameters from simultaneous STEREO A and B observations of CMEs. These parameters are the three-dimensional direction of CME propagation, the velocity and acceleration of the CME front, and the CME angular sizes and front position as functions of time. The method is based on the assumption that the CME shape may be described by a modification of the so-called ice-cream cone models. The method has been tested for several CMEs.
NASA Astrophysics Data System (ADS)
Mishra, Karuna Kara; Bevara, Samatha; Ravindran, T. R.; Patwe, S. J.; Gupta, Mayanak K.; Mittal, Ranjan; Krishnan, R. Venkata; Achary, S. N.; Tyagi, A. K.
2018-02-01
Herein we report the structural stability and the vibrational and thermal properties of K2Ce[PO4]2, a relatively underexplored complex phosphate of tetravalent Ce4+, from in situ high-pressure Raman spectroscopic investigations up to 28 GPa using a diamond anvil cell. The studies identified the soft phonons that lead to a reversible phase transformation above 8 GPa, and a coexistence of the ambient (PI) and high-pressure (PII) phases over a wide pressure region of 6-11 GPa. From a visual representation of the computed eigenvector displacements, the Ag soft mode at 82 cm^-1 is assigned as a lattice mode of the K+ cation. Pressure-induced positional disorder is apparent from the substantial broadening of internal modes and the disappearance of low-frequency lattice and external modes in phase PII above 18 GPa. Isothermal mode Grüneisen parameters γi are calculated and compared for several phonon modes. Using these values, thermal properties such as the average Grüneisen parameter and the thermal expansion coefficient are estimated as 0.47 and 2.5 × 10^-6 K^-1, respectively. The specific heat was estimated from all optical modes obtained from DFT calculations as 314 J mol^-1 K^-1. Our earlier reported temperature dependence of phonon frequencies is used to decouple the "true anharmonic" (explicit contribution at constant volume) and "quasi-harmonic" (implicit contribution brought about by volume change) contributions from the total anharmonicity. In addition to the 81 cm^-1 Ag lattice mode, several other lattice and external modes of the PO4^3- ions are found to be strongly anharmonic.
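For context, the isothermal mode Grüneisen parameter referred to above is commonly estimated as γi = (B_T/ωi)(∂ωi/∂P)_T; the sketch below does this with a linear fit of frequency versus pressure, using invented frequencies and an assumed bulk modulus.
    # Sketch: mode Grüneisen parameter from the pressure slope of a Raman frequency.
    import numpy as np

    pressure = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # GPa
    omega = np.array([82.0, 83.1, 84.3, 85.2, 86.4])     # cm^-1, hypothetical mode frequencies
    bulk_modulus = 30.0                                   # GPa, assumed isothermal bulk modulus

    slope = np.polyfit(pressure, omega, 1)[0]             # d(omega)/dP in cm^-1 per GPa
    gamma_i = bulk_modulus * slope / omega[0]
    print(gamma_i)                                        # ~0.4 for these placeholder numbers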
Using periodic orbits to compute chaotic transport rates between resonance zones.
Sattari, Sulimon; Mitchell, Kevin A
2017-11-01
Transport properties of chaotic systems are computable from data extracted from periodic orbits. Given a sufficient number of periodic orbits, the escape rate can be computed using the spectral determinant, a function that incorporates the eigenvalues and periods of periodic orbits. The escape rate computed from periodic orbits converges to the true value as more and more periodic orbits are included. Escape from a given region of phase space can be computed by considering only periodic orbits that lie within the region. An accurate symbolic dynamics along with a corresponding partitioning of phase space is useful for systematically obtaining all periodic orbits up to a given period, to ensure that no important periodic orbits are missing in the computation. Homotopic lobe dynamics (HLD) is an automated technique for computing accurate partitions and symbolic dynamics for maps using the topological forcing of intersections of stable and unstable manifolds of a few periodic anchor orbits. In this study, we apply the HLD technique to compute symbolic dynamics and periodic orbits, which are then used to find escape rates from different regions of phase space for the Hénon map. We focus on computing escape rates in parameter ranges spanning hyperbolic plateaus, which are parameter intervals where the dynamics is hyperbolic and the symbolic dynamics does not change. After the periodic orbits are computed for a single parameter value within a hyperbolic plateau, periodic orbit continuation is used to compute periodic orbits over an interval that spans the hyperbolic plateau. The escape rates computed from a few thousand periodic orbits agree with escape rates computed from Monte Carlo simulations requiring hundreds of billions of orbits.
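The Monte Carlo check mentioned at the end can be sketched directly: iterate a cloud of initial conditions under the Hénon map, count the survivors that remain inside a chosen region at each step, and fit the exponential decay N(t) ≈ N0 exp(−γt). The parameter values (chosen in a strongly escaping, horseshoe-like regime), the region, and the sample size below are illustrative assumptions, not those of the study.

```python
import numpy as np

def henon(x, y, a=6.0, b=0.3):
    """One step of the Henon map; a, b are illustrative values in an escaping regime."""
    return 1.0 - a * x**2 + y, b * x

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.uniform(-1.0, 1.0, n)                 # cloud of initial conditions (illustrative region)
y = rng.uniform(-0.5, 0.5, n)

survivors = []
for _ in range(20):
    x, y = henon(x, y)
    inside = (np.abs(x) < 3.0) & (np.abs(y) < 3.0)   # "escape" means leaving this assumed box
    x, y = x[inside], y[inside]
    survivors.append(x.size)

# Fit N(t) ~ N0 * exp(-gamma * t) using only the steps that still have survivors
t = np.arange(1, len(survivors) + 1)
counts = np.array(survivors, dtype=float)
mask = counts > 0
gamma = -np.polyfit(t[mask], np.log(counts[mask]), 1)[0]
print(f"Monte Carlo escape-rate estimate: {gamma:.4f}")
```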
Lasnon, Charline; Quak, Elske; Briand, Mélanie; Gu, Zheng; Louis, Marie-Hélène; Aide, Nicolas
2013-01-17
The use of iodinated contrast media in small-animal positron emission tomography (PET)/computed tomography (CT) could improve anatomic referencing and tumor delineation but may introduce inaccuracies in the attenuation correction of the PET images. This study evaluated the diagnostic performance and accuracy of quantitative values in contrast-enhanced small-animal PET/CT (CEPET/CT) as compared to unenhanced small-animal PET/CT (UEPET/CT). Firstly, a NEMA NU 4-2008 phantom (filled with 18F-FDG or 18F-FDG plus contrast media) and a homemade phantom, mimicking an abdominal tumor surrounded by water or contrast media, were used to evaluate the impact of iodinated contrast media on the image quality (IQ) parameters and the accuracy of quantitative values for a pertinent-sized target. Secondly, two studies in 22 abdominal tumor-bearing mice and rats were performed. The first animal experiment studied the impact of a dual-contrast-media protocol, comprising the intravenous injection of a long-lasting contrast agent mixed with 18F-FDG and the intraperitoneal injection of contrast media, on tumor delineation and the accuracy of quantitative values. The second animal experiment compared the diagnostic performance and quantitative values of CEPET/CT versus UEPET/CT by sacrificing the animals after the tracer uptake period and imaging them before and after intraperitoneal injection of contrast media. There was minimal impact on IQ parameters (%SDunif and spillover ratios in air and water) when the NEMA NU 4-2008 phantom was filled with 18F-FDG plus contrast media. In the homemade phantom, measured activity was similar to true activity (-0.02%) when vials were surrounded by water and was overestimated by 10.30% when they were surrounded by an iodine solution. The first animal experiment showed excellent tumor delineation and a good correlation between small-animal (SA)-PET and ex vivo quantification (r2 = 0.87, P < 0.0001). The second animal experiment showed a good correlation between CEPET/CT and UEPET/CT quantitative values (r2 = 0.99, P < 0.0001). Receiver operating characteristic analysis demonstrated better diagnostic accuracy of CEPET/CT versus UEPET/CT (senior researcher, area under the curve (AUC) 0.96 versus 0.77, P = 0.004; junior researcher, AUC 0.78 versus 0.58, P = 0.004). The use of iodinated contrast media for small-animal PET imaging significantly improves tumor delineation and diagnostic performance, without significant alteration of SA-PET quantitative accuracy and NEMA NU 4-2008 IQ parameters.
Characterizing focal hepatic lesions by free-breathing intravoxel incoherent motion MRI at 3.0 T.
Watanabe, Haruo; Kanematsu, Masayuki; Goshima, Satoshi; Kajita, Kimihiro; Kawada, Hiroshi; Noda, Yoshifumi; Tatahashi, Yukichi; Kawai, Nobuyuki; Kondo, Hiroshi; Moriyama, Noriyuki
2014-12-01
Diffusion-weighted (DW) imaging is commonly used to distinguish between benign and malignant liver lesions. The aims were to prospectively evaluate the true molecular-diffusion coefficient (D), perfusion-related diffusion coefficient (D*), perfusion fraction (f), and ADC of focal hepatic lesions using a free-breathing intravoxel incoherent motion (IVIM) DW sequence, and to determine whether these parameters are useful for characterizing focal hepatic lesions. One hundred and twenty hepatic lesions (34 metastases, 32 hepatocellular carcinomas [HCC], 33 hemangiomas, and 21 liver cysts) in 74 patients were examined. Mean D, D*, f, and ADC values of hepatic lesions were compared among pathologies. ROC curve analyses were performed to assess the performance of D, D*, f, and ADC values for the characterization of liver lesions as benign or malignant. The mean D and ADC values of benign lesions were greater than those of malignant lesions (P < 0.001). The mean D and ADC values of liver cysts were greater than those of hemangiomas (P < 0.001), whereas these values were not significantly different between metastases and HCCs (P = 0.99). The area under the ROC curve for ADC values (0.98) was significantly greater (P = 0.048) than that for D values (0.96) for the differentiation of benign and malignant lesions. Sensitivity and specificity for the detection of malignant lesions were 89% and 98%, respectively, when an ADC cut-off value of 1.40 was applied. D and ADC values have more potential than D* or f values for characterizing focal hepatic lesions and for differentiating malignancy from benignity. © The Foundation Acta Radiologica 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model that correlates the received and the determined utility parameter values with the corresponding weather parameter values.
Zheng, S; Lin, R J; Chan, Y H; Ngan, C C L
2018-03-01
There is no clear consensus on the diagnosis of neurosyphilis. The Venereal Disease Research Laboratory (VDRL) test from cerebrospinal fluid (CSF) has traditionally been considered the gold standard for diagnosing neurosyphilis but is widely known to be insensitive. In this study, we compared the clinical and laboratory characteristics of true-positive VDRL-CSF cases with biological false-positive VDRL-CSF cases. We retrospectively identified cases of true and false-positive VDRL-CSF across a 3-year period received by the Immunology and Serology Laboratory, Singapore General Hospital. A biological false-positive VDRL-CSF is defined as a reactive VDRL-CSF with a non-reactive Treponema pallidum particle agglutination (TPPA)-CSF and/or negative Line Immuno Assay (LIA)-CSF IgG. A true-positive VDRL-CSF is a reactive VDRL-CSF with a concordant reactive TPPA-CSF and/or positive LIA-CSF IgG. During the study period, a total of 1254 specimens underwent VDRL-CSF examination. Amongst these, 60 specimens from 53 patients tested positive for VDRL-CSF. Of the 53 patients, 42 (79.2%) were true-positive cases and 11 (20.8%) were false-positive cases. In our setting, a positive non-treponemal serology has 97.6% sensitivity, 100% specificity, 100% positive predictive value and 91.7% negative predictive value for a true-positive VDRL-CSF based on our laboratory definition. HIV seropositivity was an independent predictor of a true-positive VDRL-CSF. Biological false-positive VDRL-CSF is common in a setting where patients are tested without first establishing a serological diagnosis of syphilis. Serological testing should be performed prior to CSF evaluation for neurosyphilis. © 2017 European Academy of Dermatology and Venereology.
Whitson, Bryan A; Groth, Shawn S; Odell, David D; Briones, Eleazar P; Maddaus, Michael A; D'Cunha, Jonathan; Andrade, Rafael S
2013-05-01
Mediastinal staging in patients with non-small cell lung cancer (NSCLC) with endobronchial ultrasound-guided fine-needle aspiration (EBUS-FNA) requires a high negative predictive value (NPV) (ie, a low false negative rate). We provide a conservative calculation of NPV that calls for caution in the interpretation of EBUS results. We retrospectively analyzed our prospectively gathered database (January 2007 to November 2011) to include NSCLC patients who underwent EBUS-FNA for mediastinal staging. We excluded patients with metastatic NSCLC and other malignancies. We assessed FNAs with rapid on-site evaluation (ROSE). The conventional calculation of NPV is NPV = true negatives/(true negatives + false negatives). However, this definition ignores nondiagnostic samples. Nondiagnostic samples should be added to the NPV denominator because decisions based on nondiagnostic samples could be flawed. We therefore conservatively calculated NPV for EBUS-FNA as NPV = true negatives/(true negatives + false negatives + nondiagnostic). We defined false negatives as negative FNAs with an NSCLC-positive surgical biopsy of the same site. Nondiagnostic FNAs were nonrepresentative of lymphoid tissue. We compared diagnostic performance with the inclusion and exclusion of nondiagnostic procedures. We studied 120 patients with NSCLC who underwent EBUS-FNA; 5 patients had false negative findings and 10 additional patients had nondiagnostic results. The NPV with and without inclusion of nondiagnostic samples was 65.9% and 85.3%, respectively. The inclusion of nondiagnostic specimens into the conservative, worst-case-scenario calculation of NPV for EBUS-FNA in NSCLC lowers the NPV from 85.3% to 65.9%. The true NPV is likely higher than 65.9%, as few nondiagnostic specimens are false negatives. Caution is imperative for the safe application of EBUS-FNA in NSCLC staging. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
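The two NPV definitions can be written out directly. The true-negative count of 29 used below is inferred from the reported percentages (29/34 ≈ 85.3% and 29/44 ≈ 65.9%) rather than stated explicitly in the abstract.

```python
def npv_standard(tn: int, fn: int) -> float:
    """Conventional negative predictive value: TN / (TN + FN)."""
    return tn / (tn + fn)

def npv_conservative(tn: int, fn: int, nondiagnostic: int) -> float:
    """Worst-case NPV that also counts nondiagnostic samples in the denominator."""
    return tn / (tn + fn + nondiagnostic)

tn, fn, nondx = 29, 5, 10   # tn inferred from the percentages; fn and nondx from the abstract
print(f"NPV excluding nondiagnostic: {npv_standard(tn, fn):.1%}")              # ~85.3%
print(f"NPV including nondiagnostic: {npv_conservative(tn, fn, nondx):.1%}")   # ~65.9%
```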
NASA Astrophysics Data System (ADS)
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate a probability, effectively by simulation. For planetary protection, it may be used to estimate the probability of impact P_I of a protected planet by a launch vehicle (upper stage). The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood (1936, "Fiducial limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the inverse of the integral chi-squared function (which yields the probability α as a function of the mean μ and an actual value n). The resulting formula for the upper and lower limits of the mean μ with two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In an MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution. (A smaller actual P_I is no problem.) One advantage of this method is that the function is available in EXCEL, although care must be taken with the definition of the CHIINV function (the inverse of the integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1-α, 2(n+1)]. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. The estimate of the number of histories required is then this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1-α" (G. Casella and C. Robert, 1988, Purdue University Technical Report #88-7 or Cornell University Technical Report BU-903-M). For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
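A minimal sketch of the sizing calculation described above, assuming the standard Garwood one-sided upper limit for a Poisson mean, namely one half of the chi-squared quantile at the LOC with 2(n+1) degrees of freedom. The allowed P_I and the guessed number of hits are illustrative values, not requirements from any mission.

```python
import math
from scipy.stats import chi2

def poisson_upper_limit(n_hits: int, loc: float) -> float:
    """One-sided Garwood upper confidence limit on a Poisson mean:
    mu_U = 0.5 * chi2.ppf(loc, 2 * (n_hits + 1))   (standard form assumed here)."""
    return 0.5 * chi2.ppf(loc, 2 * (n_hits + 1))

def required_histories(p_spec: float, n_hits_guess: int, loc: float = 0.95) -> int:
    """Number of MC histories so that, with n_hits_guess observed hits, the
    impact probability stays below p_spec at the given level of confidence."""
    return math.ceil(poisson_upper_limit(n_hits_guess, loc) / p_spec)

# Illustrative numbers: allowed P_I of 1e-4, a guess of 3 hits, 95% LOC
print(poisson_upper_limit(0, 0.95))        # ~3.0, the familiar "rule of three" for zero hits
print(required_histories(1e-4, 3, 0.95))   # about 77,537 histories
```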
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation shows two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization. One is how to adjust the parameters to estimate the discharges accurately. The other is which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. The feature of this presentation is to simultaneously minimize the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2. There are thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012 and their maximum discharges exceed 1,000 m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve of them are hydrological parameters and two are parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is one of the uniform sampling algorithms. The discharges are calculated with respect to the parameter values sampled by a simplified version of Latin Hypercube sampling. The observed discharge is surrounded by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters. It is true that the discharge of a water level station can be accurately estimated by setting the parameter values optimized for that station. However, there are cases in which the discharge calculated with the parameter values optimized for one water level station does not match the observed discharge at another water level station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto-optimal solutions by the condition that, for every station, the error normalized by the minimum error of the corresponding water level station is under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling are compared. The five implementations are NSGA2 and PAES of the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R of the statistical software R. NSGA2, PAES and MOPSOCD are optimization algorithms based on a genetic algorithm, an evolution strategy and particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others; they are promising candidates for the parameter identification of the PWRI distributed hydrological model.
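A compact sketch of the sampling-and-selection step described above: draw parameter sets by Latin Hypercube sampling, evaluate one mean-squared-error objective per water level station, keep the Pareto-optimal sets, and retain only those whose error at every station is within a factor of 3 of that station's best error. The model function and parameter bounds are placeholders, not the PWRI distributed model.

```python
import numpy as np
from scipy.stats import qmc

N_PARAMS, N_STATIONS, N_SAMPLES = 14, 3, 2000
lower, upper = np.zeros(N_PARAMS), np.ones(N_PARAMS)       # placeholder parameter bounds

def objectives(theta: np.ndarray) -> np.ndarray:
    """Placeholder for a model run: returns one MSE per station.
    In the real study this would run the distributed hydrological model and
    compare calculated with observed discharges at each station."""
    targets = np.array([0.2, 0.5, 0.8])
    return np.array([np.mean((theta - t) ** 2) for t in targets])

sampler = qmc.LatinHypercube(d=N_PARAMS, seed=0)
thetas = qmc.scale(sampler.random(N_SAMPLES), lower, upper)
errs = np.array([objectives(th) for th in thetas])          # shape (N_SAMPLES, N_STATIONS)

# Pareto filter: drop any sample dominated by another (at least as good everywhere,
# strictly better somewhere)
dominated = np.array([np.any(np.all(errs <= e, axis=1) & np.any(errs < e, axis=1))
                      for e in errs])
pareto = errs[~dominated]

# Selection rule: every station's error, normalized by the minimum error achieved
# for that station, must be under 3
normalized = pareto / errs.min(axis=0)
selected = pareto[np.all(normalized < 3.0, axis=1)]
print(f"{len(pareto)} Pareto-optimal sets, {len(selected)} satisfy the normalized-error rule")
```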
Liberal rationalism and medical decision-making.
Savulescu, Julian
1997-04-01
I contrast Robert Veatch's recent liberal vision of medical decision-making with a more rationalist liberal model. According to Veatch, physicians are biased in their determination of what is in their patient's overall interests in favour of their medical interests. Because of the extent of this bias, we should abandon the practice of physicians offering what they guess to be the best treatment option. Patients should buddy up with physicians who share the same values -- 'deep value pairing'. The goal of choice is maximal promotion of patient values. I argue that if subjectivism about value and valuing is true, this move is plausible. However, if objectivism about value is true -- that there really are states which are good for people regardless of whether they desire to be in them -- then we should accept a more rationalist liberal alternative. According to this alternative, what is required to decide which course is best is rational dialogue between physicians and patients, both about the patient's circumstances and her values, and not the seeking out of people, physicians or others, who share the same values. Rational discussion requires that physicians be reasonable and empathic. I describe one possible account of a reasonable physician.
48 CFR 1516.303-74 - Determining the value of in-kind contributions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Determining the value of in-kind contributions. 1516.303-74 Section 1516.303-74 Federal Acquisition Regulations System... depreciation, if any) at the time of donation. If the booked costs reflect unrealistic values when compared to...
29 CFR 2.14 - Proceedings in which the Department balances conflicting values.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Proceedings in which the Department balances conflicting values. 2.14 Section 2.14 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.14 Proceedings in which the Department balances conflicting values. In...
NASA Astrophysics Data System (ADS)
Simon, M.; Dolinar, S.
2005-08-01
A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation allows, on average, a number of subdivisions (observables) per symbol, 2L, beyond the conventional value of two, with other than an integer value of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L = R/sqrt(2); thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR and, as such, turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when one has no a priori information about the true value of SNR.
True detection limits in an experimental linearly heteroscedastic system. Part 1
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Using a lab-constructed laser-excited filter fluorimeter deliberately designed to exhibit linearly heteroscedastic, additive Gaussian noise, it has been shown that accurate estimates may be made of the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD) for the detection of rhodamine 6G tetrafluoroborate in ethanol. The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.1 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.294 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.372 μg/mL. These decision levels and corresponding detection limits were shown to pass the ultimate test: they resulted in observed probabilities of false positives and false negatives that were statistically equivalent to the a priori specified values.
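For orientation, the Currie decision level and detection limit can be sketched for the simpler homoscedastic Gaussian case; the paper itself treats linearly heteroscedastic noise, and the blank standard deviation and calibration slope below are made-up numbers, so the printed values are not those quoted above.

```python
from scipy.stats import norm

def currie_limits(sigma_blank: float, slope: float, alpha: float = 0.05, beta: float = 0.05):
    """Currie decision level (YC) and detection limit (YD) in the response domain,
    plus their content-domain counterparts (XC, XD), assuming additive homoscedastic
    Gaussian noise and a known, noise-free calibration slope.
    alpha / beta are the allowed false-positive / false-negative probabilities."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    yc = z_a * sigma_blank          # net response above which "detected" is declared
    yd = yc + z_b * sigma_blank     # true net response detected with probability 1 - beta
    return yc, yd, yc / slope, yd / slope

# Illustrative numbers only (not the calibration of the fluorimeter in the paper)
yc, yd, xc, xd = currie_limits(sigma_blank=20.0, slope=425.0)
print(f"YC = {yc:.1f} mV, YD = {yd:.1f} mV, XC = {xc:.4f} ug/mL, XD = {xd:.4f} ug/mL")
```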
NASA Astrophysics Data System (ADS)
Li, Xiao-Dong; Park, Changbom; Sabiu, Cristiano G.; Park, Hyunbae; Cheng, Cheng; Kim, Juhan; Hong, Sungwook E.
2017-08-01
We develop a methodology to use the redshift dependence of the galaxy 2-point correlation function (2pCF) across the line of sight, ξ(r⊥), as a probe of cosmological parameters. The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. This geometrical distortion can be observed as a redshift-dependent rescaling in the measured ξ(r⊥). We test this methodology using a sample of 1.75 billion mock galaxies at redshifts 0, 0.5, 1, 1.5, and 2, drawn from the Horizon Run 4 N-body simulation. The shape of ξ(r⊥) can exhibit a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. Other contributions, including the gravitational growth of structure, galaxy bias, and redshift-space distortions, do not produce large redshift evolution in the shape. We show that one can make use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. This method could be applicable to future large-scale structure surveys, especially photometric surveys such as DES and LSST, to derive tight cosmological constraints. This work is a continuation of our previous works as a strategy to constrain cosmological parameters using redshift-invariant physical quantities.
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties. This provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D), using an Unbiased Ensemble Square Root Filter (UnEnSRF) originating from the Ensemble Kalman Filter (EnKF) to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. In order to keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One, called the damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, called the inflation factor, relaxes the posterior ensemble perturbations toward the prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate the further analysis. This study demonstrated that data assimilation of both model parameters and states gives a smaller model prediction error but with larger uncertainty, while data assimilation of only model states provides a smaller predictive uncertainty but with a larger model prediction error. Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a successful base for applications in real-time modelling or real-time control strategies in groundwater resources management.
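The role of the two factors can be illustrated with a generic serial ensemble square-root update for a single observation; this is a sketch, not the Guantao/Hydrus implementation. The damping factor scales the increment applied to the ensemble mean, and the "inflation" factor relaxes the posterior perturbations back toward the prior ones.

```python
import numpy as np

def ensrf_update(X, H, y, r, damping=0.7, relax=0.5):
    """One serial ensemble square-root filter update for a scalar observation.

    X : (n_state, n_ens) prior ensemble of states/parameters
    H : (n_state,) observation operator (y_model = H @ x)
    y : observed value, r : observation-error variance
    damping scales the mean increment; relax blends the posterior perturbations
    back toward the prior perturbations (relaxation-to-prior inflation)."""
    n_ens = X.shape[1]
    xm = X.mean(axis=1, keepdims=True)
    Xp = X - xm                                     # prior perturbations
    hX = H @ Xp                                     # (n_ens,) perturbations in observation space
    hpht = hX @ hX / (n_ens - 1)                    # background variance of H x
    K = (Xp @ hX) / (n_ens - 1) / (hpht + r)        # Kalman gain, (n_state,)
    xm_a = xm[:, 0] + damping * K * (y - float(H @ xm[:, 0]))
    alpha = 1.0 / (1.0 + np.sqrt(r / (hpht + r)))   # deterministic square-root factor
    Xp_a = Xp - alpha * np.outer(K, hX)             # posterior perturbations
    Xp_a = (1.0 - relax) * Xp_a + relax * Xp        # relax toward prior to limit filter inbreeding
    return xm_a[:, None] + Xp_a

# Tiny illustration: 2 state variables, 20 members, observe the first variable
rng = np.random.default_rng(0)
X = rng.normal([[1.0], [0.5]], 0.3, size=(2, 20))
X_post = ensrf_update(X, H=np.array([1.0, 0.0]), y=1.4, r=0.05)
print(X_post.mean(axis=1), X_post.std(axis=1, ddof=1))
```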
[Diagnosis of neonatal metabolic acidosis by eucapnic pH determination].
Racinet, C; Richalet, G; Corne, C; Faure, P; Peresse, J-F; Leverve, X
2013-09-01
The identification of a metabolic acidosis is a key criterion for establishing a causal relationship between fetal peripartum asphyxia and neonatal encephalopathy and/or cerebral palsy. The diagnostic criteria currently used (pH and base deficit or lactatemia) are imprecise and non-specific. The study aimed to determine, in a low-risk cohort of infants born at term (n = 867), the best diagnostic tool for metabolic acidosis in umbilical cord blood from the following parameters: pH, blood gases, and lactate values at birth. The data were obtained from arterial blood of the umbilical cord by a blood gas analyser. The parameter best predicting metabolic acidosis was estimated from the partial correlations established between the most relevant parameters. The results showed a slight change in all parameters compared to adult values: acidemia (pH: 7.28 ± 0.01), hypercapnia (56.5 ± 1.59 mmHg) and hyperlactatemia (3.4 ± 0.05 mmol/L). From the partial correlation analysis, pCO(2) emerged as the main contributor to acidemia, while lactatemia was shown to be non-specific for metabolic acidosis. Seven cases (0.81%) showed a pH less than 7.00 with marked hypercapnia. The correction of this respiratory component by Eisenberg's method led to the eucapnic pH, classifying six out of seven cases as exclusively respiratory acidosis. It has been demonstrated that the ACOG-AAP criteria for defining a metabolic acidosis are incomplete, imprecise and generate errors in excess. The same is true for lactatemia, whose physiological significance has been completely revised, challenging the misconception of lactic acidosis as a specific marker of hypoxia. It appeared that eucapnic pH was the best way to obtain a reliable diagnosis of metabolic acidosis. We propose adopting a simple decision scheme for determining whether a metabolic acidosis has occurred in cases of acidemia with a pH of less than 7.00. Copyright © 2013. Published by Elsevier SAS.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true-belief and false-belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, J.; Hoversten, G.M.
2011-09-15
Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann’s equations and Archie’s law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is carried by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.
A variant of sparse partial least squares for variable selection and data exploration.
Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina
2014-01-01
When data are sparse and/or predictors multicollinear, the current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors, nor does it provide a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits an SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed that variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.
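The "all-possible" bookkeeping is straightforward to sketch. Since no SPLS fitter is specified here, an L1-penalized (lasso) regression over a grid of penalties stands in for the SPLS fits; the two summary quantities, the percentage of times each predictor is chosen and its average non-zero estimate, are computed in the same spirit as described. Data, grid, and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=n)            # one deliberately collinear predictor
beta = np.zeros(p); beta[:3] = [2.0, 1.0, 0.3]          # strong, weak, and very weak effects
y = X @ beta + rng.normal(size=n)

grid = np.logspace(-3, 0, 30)                           # grid of tuning (sparsity) parameters
coefs = np.array([Lasso(alpha=a, max_iter=10000).fit(X, y).coef_ for a in grid])

selected = coefs != 0
pct_chosen = selected.mean(axis=0) * 100                # % of grid points where each predictor enters
avg_nonzero = coefs.sum(axis=0) / np.maximum(selected.sum(axis=0), 1)

for j in range(5):
    print(f"predictor {j}: chosen {pct_chosen[j]:.0f}% of the time, "
          f"mean non-zero estimate {avg_nonzero[j]:.2f}")
```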
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike’s information criterion using h-likelihood to select the best-fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring. PMID:23827014
The social costs of dangerous products: an empirical investigation.
Shapiro, Sidney; Ruttenberg, Ruth; Leigh, Paul
2009-01-01
Defective consumer products impose significant costs on consumers and third parties when they cause fatalities and injuries. This Article develops a novel approach to measuring the true extent of such costs, which may not be accurately captured under current methods of estimating the cost of dangerous products. Current analysis rests on a narrowly defined set of costs, excluding certain types of costs. The cost-of-injury estimates utilized in this Article address this omission by quantifying and incorporating these costs to provide a more complete picture of the true impact of defective consumer products. The new estimates help to gauge the true value of the civil liability system.
Ferrero, Carmen; Massuelle, Danielle; Jeannerat, Damien; Doelker, Eric
2013-09-10
The two main purposes of this work were: (i) to critically consider the use of thermodynamic parameters of activation for elucidating the drug release mechanism from hydroxypropyl methylcellulose (HPMC) matrices, and (ii) to examine the effect of neutral (pH 6) and acidic (pH 2) media on the release mechanism. For this, caffeine was chosen as a model drug and various processes were investigated for the effect of temperature and pH: caffeine diffusion in solution and HPMC gels, and drug release from and water penetration into the HPMC tablets. Generally, the kinetics of the processes was not significantly affected by pH. As for the temperature dependence, the activation energy (Ea) values calculated from caffeine diffusivities were in the range of Fickian transport (20-40 kJ mol⁻¹). Regarding caffeine release from HPMC matrices, fitting the profiles using the Korsmeyer-Peppas model would indicate anomalous transport. However, the low apparent Ea values obtained were not compatible with a swelling-controlled mechanism and can be assigned to the dimensional change of the system during drug release. Unexpectedly, negative apparent Ea values were calculated for the water uptake process, which can be ascribed to the exothermic dissolution of water into the initially dry HPMC, the expansion of the matrix and the polymer dissolution. Taking these contributions into account, the true Ea would fall into the range valid for Fickian diffusion. Consequently, a relaxation-controlled release mechanism can be dismissed. The apparent anomalous drug release from HPMC matrices results from a coupled Fickian diffusion-erosion mechanism, both at pH 6 and 2. Copyright © 2013 Elsevier B.V. All rights reserved.
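Since the argument turns on activation energies derived from the temperature dependence of diffusion or release rates, here is a minimal Arrhenius sketch. The temperatures and diffusivities are invented for illustration and simply land in the Fickian range quoted above (20-40 kJ mol⁻¹).

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_ea(temperatures_K, rate_constants):
    """Activation energy (kJ/mol) from ln(k) = ln(A) - Ea/(R*T): fit ln k against 1/T."""
    slope, _ = np.polyfit(1.0 / np.asarray(temperatures_K), np.log(rate_constants), 1)
    return -slope * R / 1000.0

# Invented diffusivities at 25, 37 and 45 degrees C
T = [298.15, 310.15, 318.15]
D = [4.0e-10, 5.5e-10, 6.7e-10]   # m^2/s, illustrative only
print(f"Ea ≈ {arrhenius_ea(T, D):.1f} kJ/mol")   # ≈ 20 kJ/mol, lower edge of the Fickian range
```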
Nieuwland, Mante S; Kuperberg, Gina R
2008-12-01
Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., "In moderation, drinking red wine isn't bad/good..."), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., "A baby bunny's fur isn't very hard/soft..."). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures
NASA Astrophysics Data System (ADS)
Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar
2017-02-01
A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single-molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measures against a real sample (a phantom), because the true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image, since one is blind to the true position accuracy, i.e. the deviation of the position measurement from the true position. We have solved this issue by imaging nanorods fabricated with DNA-origami. The nanorods used are designed to have emitters attached at each end at a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super resolution microscopy methods, STED and STORM.
Understanding Time-driven Activity-based Costing.
Sharan, Alok D; Schroeder, Gregory D; West, Michael E; Vaccaro, Alexander R
2016-03-01
Transitioning to a value-based health care system will require providers to increasingly scrutinize their outcomes and costs. Although there has been a great deal of effort to understand outcomes, cost accounting in health care has been a greater challenge. Currently, the cost accounting methods used by hospitals and providers are based on a fee-for-service system. As resources become increasingly scarce and the health care system attempts to understand which services provide the greatest value, it will be critically important to understand the true costs of delivering a service. An understanding of the true costs of a particular service will help providers make smarter decisions on how to allocate and utilize resources, as well as determine which activities are non-value-added. Achieving value will require providers to have a greater focus on accurate outcome data as well as better methods of cost accounting.
Assessing predation risk: optimal behaviour and rules of thumb.
Welton, Nicky J; McNamara, John M; Houston, Alasdair I
2003-12-01
We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
A mathematical model for adaptive transport network in path finding by true slime mold.
Tero, Atsushi; Kobayashi, Ryo; Nakagaki, Toshiyuki
2007-02-21
We describe here a mathematical model of the adaptive dynamics of a transport network of the true slime mold Physarum polycephalum, an amoeboid organism that exhibits path-finding behavior in a maze. This organism possesses a network of tubular elements, by means of which nutrients and signals circulate through the plasmodium. When the organism is put in a maze, the network changes its shape to connect two exits by the shortest path. This process of path-finding is attributed to an underlying physiological mechanism: a tube thickens as the flux through it increases. The experimental evidence for this is, however, only qualitative. We constructed a mathematical model of the general form of the tube dynamics. Our model contains a key parameter corresponding to the extent of the feedback regulation between the thickness of a tube and the flux through it. We demonstrate the dependence of the behavior of the model on this parameter.
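A minimal numerical sketch of the adaptation rule for the simplest network, two tubes of different length joining the same two food sources: the fluxes follow from Kirchhoff's law for a forced total flux, and each tube's conductivity grows with the flux through it and decays otherwise. The choice f(|Q|) = |Q| and all constants are illustrative values within the general model class described, not the paper's fitted parameters.

```python
import numpy as np

# Two tubes connect the two food-source nodes; a total flux I0 is forced through the network.
L = np.array([1.0, 2.0])        # tube lengths: tube 0 is the shorter path
D = np.array([0.5, 0.5])        # initial conductivities (equal, so neither path is favored at t = 0)
I0, r, dt = 1.0, 1.0, 0.01      # forced flux, decay rate, Euler time step (illustrative)

for _ in range(20000):
    # Kirchhoff: Q_i = (D_i / L_i) * dp with Q_1 + Q_2 = I0  =>  dp = I0 / sum(D_i / L_i)
    dp = I0 / np.sum(D / L)
    Q = (D / L) * dp
    # Adaptation: dD/dt = f(|Q|) - r*D with the linear response f(|Q|) = |Q|
    D = D + dt * (np.abs(Q) - r * D)

print(f"final conductivities: {D}")   # the short tube survives, the long one decays toward zero
```

Running this shows the feedback described in the abstract: the tube carrying more flux thickens, which draws still more flux to it, until only the shorter connection remains.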
Required number of records for ASCE/SEI 7 ground-motion scaling procedure
Reyes, Juan C.; Kalkan, Erol
2011-01-01
The procedures and criteria in the 2006 IBC (International Council of Building Officials, 2006) and 2007 CBC (International Council of Building Officials, 2007) for the selection and scaling of ground motions for use in nonlinear response history analysis (RHA) of structures are based on ASCE/SEI 7 provisions (ASCE, 2005, 2010). According to ASCE/SEI 7, earthquake records should be selected from events of magnitudes, fault distance, and source mechanisms that comply with the maximum considered earthquake, and then scaled so that the average value of the 5-percent-damped response spectra for the set of scaled records is not less than the design response spectrum over the period range from 0.2Tn to 1.5Tn (where Tn is the fundamental vibration period of the structure). If at least seven ground motions are analyzed, the design values of engineering demand parameters (EDPs) are taken as the average of the EDPs determined from the analyses. If fewer than seven ground motions are analyzed, the design values of EDPs are taken as the maximum values of the EDPs. ASCE/SEI 7 requires a minimum of three ground motions. These limits on the number of records in the ASCE/SEI 7 procedure are based on engineering experience rather than on a comprehensive evaluation. This study statistically examines the required number of records for the ASCE/SEI 7 procedure, such that the scaled records provide accurate, efficient, and consistent estimates of "true" structural responses. Based on elastic-perfectly-plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI 7 scaling procedure is applied to 480 sets of ground motions. The number of records in these sets varies from three to ten. The records in each set were selected either (i) randomly, (ii) considering their spectral shapes, or (iii) considering their spectral shapes and design spectral-acceleration value, A(Tn). As compared to benchmark (that is, "true") responses from unscaled records using a larger catalog of ground motions, it is demonstrated that the ASCE/SEI 7 scaling procedure is overly conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the EDPs accompanied by reduced record-to-record variability of the responses. Consistency in accuracy and efficiency is achieved only if records are selected on the basis of their spectral shape and A(Tn).
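The core of the scaling step reduces to a small computation: given the 5-percent-damped response spectra of a candidate record set and the design spectrum, find a single suite factor that makes the average scaled spectrum not less than the design spectrum over 0.2Tn to 1.5Tn. The spectra, period grid, and Tn below are placeholders, not values from the study.

```python
import numpy as np

def asce7_suite_factor(periods, record_spectra, design_spectrum, Tn):
    """Smallest single factor applied to all records so that the average 5%-damped
    spectrum of the scaled suite is >= the design spectrum over 0.2*Tn to 1.5*Tn."""
    periods = np.asarray(periods)
    in_range = (periods >= 0.2 * Tn) & (periods <= 1.5 * Tn)
    avg = np.asarray(record_spectra).mean(axis=0)         # average spectrum of the unscaled suite
    return np.max(np.asarray(design_spectrum)[in_range] / avg[in_range])

# Placeholder spectra: 7 records, 50 periods between 0.05 s and 3 s, Tn = 1.0 s
rng = np.random.default_rng(0)
T = np.linspace(0.05, 3.0, 50)
design = 0.9 * np.exp(-T / 1.5) + 0.1                      # made-up smooth design spectrum (g)
records = design * rng.uniform(0.5, 1.2, size=(7, T.size)) # made-up record spectra (g)

f = asce7_suite_factor(T, records, design, Tn=1.0)
print(f"suite scale factor ≈ {f:.2f}")
```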
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates of the acoustic pressures and intensities present in vivo during those experimental exposures by computing them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower, than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
Conceptual definition of porosity function for coarse granular porous media with fixed texture
NASA Astrophysics Data System (ADS)
Shokri, Morteza
2018-06-01
The porosity of a porous medium is commonly taken as a constant for a given granular texture free from any type of imposed load. Although such a definition holds for media at hydrostatic equilibrium, it might not be hydrodynamically true for media subjected to the flow of fluids. This article casts light on an alternative vision that describes porosity as a function of fluid velocity, even though the medium's solid skeleton does not undergo any changes and remains essentially intact. Carefully planned laboratory experiments support such a hypothesis and may help reduce reported disagreements between observed and actual behaviors of nonlinear flow regimes. Findings indicate that the so-called Stephenson relationship for estimating actual flow velocity holds true only under Darcian conditions. In order to investigate the relationship, an accurate permeability should be measured. An alternative relationship has therefore been proposed to estimate the actual pore flow velocity. In addition, the novel concept of effective porosity is introduced; it should be determined not only from geotechnical parameters but also be regarded as a function of the flow regime. Such a porosity may be affected by the flow regime through variations in the effective pore volume and the effective shape factor. In a numerical justification of the findings, it is shown that unsatisfactory results obtained from nonlinear mathematical models of unsteady flow may be due to unreliable porosity estimates.
Conclusion of LOD-score analysis for family data generated under two-locus models.
Dizier, M. H.; Babron, M. C.; Clerget-Darpoux, F.
1996-01-01
The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek linkage with genetic markers by the LOD-score method using the MG parameters. We have already shown that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in the familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. PMID:8651311
Completed Beltrami-Michell formulation for analyzing mixed boundary value problems in elasticity
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Kaljevic, Igor; Hopkins, Dale A.; Saigal, Sunil
1995-01-01
In elasticity, the method of forces, wherein stress parameters are considered as the primary unknowns, is known as the Beltrami-Michell formulation (BMF). The existing BMF can only solve stress boundary value problems; it cannot handle the more prevalent displacement or mixed boundary value problems of elasticity. Therefore, this formulation, which has restricted application, could not become a true alternative to Navier's displacement method, which can solve all three types of boundary value problems. The restrictions in the BMF have been alleviated by augmenting the classical formulation with a novel set of conditions identified as the boundary compatibility conditions. This new method, which completes the classical force formulation, has been termed the completed Beltrami-Michell formulation (CBMF). The CBMF can solve general elasticity problems with stress, displacement, and mixed boundary conditions in terms of stresses as the primary unknowns. The CBMF is derived from the stationary condition of the variational functional of the integrated force method. In the CBMF, stresses for kinematically stable structures can be obtained without any reference to the displacements, either in the field or on the boundary. This paper presents the CBMF and its derivation from the variational functional of the integrated force method. Several examples are presented to demonstrate the applicability of the completed formulation for analyzing mixed boundary value problems under thermomechanical loads. Selected example problems include a cylindrical shell wherein membrane and bending responses are coupled, and a composite circular plate.
The volatility of stock market prices.
Shiller, R J
1987-01-02
If the volatility of stock market prices is to be understood in terms of the efficient markets hypothesis, then there should be evidence that true investment value changes through time sufficiently to justify the price changes. Three indicators of change in true investment value of the aggregate stock market in the United States from 1871 to 1986 are considered: changes in dividends, in real interest rates, and in a direct measure of intertemporal marginal rates of substitution. Although there are some ambiguities in interpreting the evidence, dividend changes appear to contribute very little toward justifying the observed historical volatility of stock prices. The other indicators contribute some, but still most of the volatility of stock market prices appears unexplained.
Carbon dioxide stripping in aquaculture. part 1: terminology and reporting
Colt, John; Watten, Barnaby; Pfeiffer, Tim
2012-01-01
The removal of carbon dioxide gas in aquacultural systems is much more complex than for oxygen or nitrogen gas because of liquid reactions of carbon dioxide and their kinetics. Almost all published carbon dioxide removal information for aquaculture is based on the apparent removal value after the CO2(aq) + HOH ⇔ H2CO3 reaction has reached equilibrium. The true carbon dioxide removal is larger than the apparent value, especially for high alkalinities and seawater. For low alkalinity freshwaters (<2000 μeq/kg), the difference between the true and apparent removal is small and can be ignored for many applications. Analytical and reporting standards are recommended to improve our understanding of carbon dioxide removal.
Survival analysis of the high energy channel of BATSE
NASA Astrophysics Data System (ADS)
Balázs, L. G.; Bagoly, Z.; Horváth, I.; Mészáros, A.
2004-06-01
We used Kaplan-Meier (KM) survival analysis to study the true distribution of high-energy (F4) fluences in BATSE. The measured values were divided into two classes: (A) if F4 exceeded 3σ of the noise level, we accepted the measured value as a 'true event'; (B) if F4 did not exceed it, we treated 3σ as an upper bound and identified those data as 'censored'. KM analyses were made separately for short (t90 < 2 s) and long (t90 > 2 s) bursts. Comparison of the calculated probability distribution functions of the two groups indicated about an order of magnitude difference in the > 300 keV part of the energies released.
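A self-contained sketch of the estimator used here: because non-detections enter as 3σ upper bounds (left-censored fluences), a standard device in astronomical survival analysis is to flip the sign of the variable so that the limits become right-censored, after which the ordinary Kaplan-Meier product-limit formula applies. The fluence values below are invented, not BATSE data.

```python
import numpy as np

def kaplan_meier(values, observed):
    """Kaplan-Meier product-limit estimator (no ties assumed).
    values   : event or censoring values, already transformed so censoring is right-censoring
    observed : True where the value is a real detection, False where it is censored."""
    order = np.argsort(values)
    v, d = np.asarray(values)[order], np.asarray(observed)[order]
    n = len(v)
    at_risk = n - np.arange(n)                    # number still at risk just before each value
    step = np.where(d, 1.0 - 1.0 / at_risk, 1.0)  # censored points contribute a factor of 1
    return v, np.cumprod(step)

# Invented F4 fluences (erg/cm^2): detections above 3 sigma, otherwise the 3 sigma upper bound
f4       = np.array([2e-7, 5e-8, 9e-8, 3e-7, 4e-8, 1e-7, 6e-8])
detected = np.array([True, False, True, True, False, True, False])

# Flip sign so that upper limits behave like right-censored observations
x, S = kaplan_meier(-f4, detected)
# S estimates the survival of -F4, i.e. the cumulative distribution P(F4 < f) at f = -x
for fi, si in zip(-x, S):
    print(f"P(F4 < {fi:.1e}) ≈ {si:.2f}")
```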
Research and implementation of group animation based on normal cloud model
NASA Astrophysics Data System (ADS)
Li, Min; Wei, Bin; Peng, Bao
2011-12-01
Group animation is a difficult problem that has not been fully solved in computer animation, and all current methods have their limitations. This paper puts forward a method: the motion coordinates and motion speeds of a real fish school are collected as sample data, and a reverse (backward) cloud generator is designed and run to obtain the expectation, entropy, and hyper-entropy, which are the quantitative values of the qualitative concept. These parameters are then used as the basis for a forward cloud generator, which is designed and run to produce the motion coordinates and motion speeds of a two-dimensional fish-school animation; in addition, two mental state variables of the fish school, the feeling of hunger and the feeling of fear, are designed. Experiments are used to simulate the motion of the fish-school animation as affected by the internal and external causes above; they show that group animation designed by this method is highly realistic.
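A compact sketch of the two generators named above, following the standard normal cloud model: the backward (reverse) cloud generator recovers the expectation (Ex), entropy (En) and hyper-entropy (He) from sample data, and the forward generator produces new cloud drops from those three numbers. The sample data here are synthetic stand-ins for the measured fish motion, not the paper's data.

```python
import numpy as np

def backward_cloud(samples):
    """Backward (reverse) cloud generator without certainty degrees:
    estimates (Ex, En, He) from one-dimensional sample data."""
    x = np.asarray(samples, dtype=float)
    ex = x.mean()
    en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))
    he = np.sqrt(max(x.var(ddof=1) - en**2, 0.0))
    return ex, en, he

def forward_cloud(ex, en, he, n, rng=None):
    """Forward normal cloud generator: returns n drops (x_i, membership mu_i)."""
    rng = rng or np.random.default_rng()
    en_i = np.abs(rng.normal(en, he, size=n)) + 1e-12   # per-drop entropy (kept positive)
    x = rng.normal(ex, en_i)                            # drop positions
    mu = np.exp(-(x - ex) ** 2 / (2.0 * en_i**2))       # certainty degree of each drop
    return x, mu

# Synthetic "measured" speeds standing in for the sampled fish motion data
rng = np.random.default_rng(0)
speeds = rng.normal(1.2, 0.3, size=500)
ex, en, he = backward_cloud(speeds)
new_speeds, memberships = forward_cloud(ex, en, he, n=1000, rng=rng)
print(f"Ex = {ex:.3f}, En = {en:.3f}, He = {he:.3f}")
```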
LFsGRB: Binary neutron star merger rate via the luminosity function of short gamma-ray bursts
NASA Astrophysics Data System (ADS)
Paul, Debdutta
2018-04-01
LFsGRB models the luminosity function (LF) of short gamma-ray bursts (sGRBs) using the available catalog data of all sGRBs detected until October 2017, estimating the luminosities via pseudo-redshifts obtained from the Yonetoku correlation and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well by both exponential-cutoff power-law and broken power-law models. Using the derived parameters of these models along with conservative values of the jet opening angles inferred from afterglow observations, the true rate of short GRBs is derived. Assuming that a short GRB is produced by each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present, and future configurations of the GW detector networks.
NASA Astrophysics Data System (ADS)
Metzger, C.; Jansson, P.-E.; Lohila, A.; Aurela, M.; Eickenscheidt, T.; Belelli-Marchesini, L.; Dinsmore, K. J.; Drewer, J.; van Huissteden, J.; Drösler, M.
2014-06-01
The carbon dioxide (CO2) exchange of five different peatland systems across Europe with a wide gradient in land-use intensity, water table depth, soil fertility and climate was simulated with the process-oriented CoupModel. The aim of the study was to find out to what extent CO2 fluxes measured at different sites can be explained by common processes and parameters implemented in the model. The CoupModel was calibrated to fit measured CO2 fluxes, soil temperature, snow depth and leaf area index (LAI), and the resulting differences in model parameters were analysed. Finding site-independent model parameters would mean that differences in the measured fluxes could be explained solely by model input data: water table, meteorological data, management and soil inventory data. The model, utilizing a site-independent configuration for most of the parameters, captured seasonal variability in the major fluxes well. Parameters that differed between sites included the rate of soil organic decomposition, photosynthetic efficiency, and regulation of the mobile carbon (C) pool from senescence to shooting in the next year. The largest difference between sites was the rate coefficient for heterotrophic respiration. Setting it to a common value would lead to underestimation of mean total respiration by a factor of 2.8 up to an overestimation by a factor of 4. Despite testing a wide range of different responses to soil water and temperature, heterotrophic respiration rates were consistently lowest on formerly drained sites and highest on the managed sites. Substrate decomposability, pH and vegetation characteristics are possible explanations for the differences in decomposition rates. Applying common parameter values for the timing of plant shooting and senescence, and a minimum temperature for photosynthesis, had only a minor effect on model performance, even though the gradient in site latitude ranged from 48° N (southern Germany) to 68° N (northern Finland). This was also true for common parameters defining the moisture and temperature responses for decomposition. CoupModel is able to describe measured fluxes at different sites or under different conditions, provided that the rate of soil organic decomposition, photosynthetic efficiency, and the regulation of the mobile carbon (C) pool are estimated from available information on the specific soil conditions, vegetation and management of the ecosystems.
7 CFR 1007.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1007.60 Section 1007.60... Handling Uniform Prices § 1007.60 Handler's value of milk. For the purpose of computing a handler's... balancing fund pursuant to § 1007.82. [64 FR 47966, Sept. 1, 1999, as amended at 65 FR 82835, Dec. 28, 2000...
Nonstandard neutrino interactions at DUNE, T2HK and T2HKK
Liao, Jiajun; Marfatia, Danny; Whisnant, Kerry
2017-01-17
Here, we study the matter effect caused by nonstandard neutrino interactions (NSI) in the next generation long-baseline neutrino experiments, DUNE, T2HK and T2HKK. If multiple NSI parameters are nonzero, the potential of these experiments to detect CP violation, determine the mass hierarchy and constrain NSI is severely impaired by degeneracies between the NSI parameters and by the generalized mass hierarchy degeneracy. In particular, a cancellation between leading order terms in the appearance channels when ϵ_eτ = cot θ_23 ϵ_eμ strongly affects the sensitivities to these two NSI parameters at T2HK and T2HKK. We also study the dependence of the sensitivities on the true CP phase and the true mass hierarchy, and find that overall DUNE has the best sensitivity to the magnitude of the NSI parameters, while T2HKK has the best sensitivity to CP violation whether or not there are NSI. Furthermore, for T2HKK a smaller off-axis angle for the Korean detector is better overall. We find that due to the structure of the leading order terms in the appearance channel probabilities, the NSI sensitivities in a given experiment are similar for both mass hierarchies, modulo the phase change δ→δ + 180°.
A method to measure the presampling MTF in digital radiography using Wiener deconvolution
NASA Astrophysics Data System (ADS)
Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui
2013-03-01
We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute-integrability condition of the Fourier transform, the original ESFs were mirrored to construct a symmetric pattern of ESFs. Based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could then be acquired from the symmetric pattern of degraded supersampled ESFs given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF from slanted edge images was established using the Wiener deconvolution technique.
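A minimal one-dimensional sketch of the deconvolution step described above, assuming a degraded supersampled ESF and its ideal counterpart on the same grid; the mirroring, the Wiener regularization constant K, and the function names are illustrative choices rather than the authors' implementation.

import numpy as np

def presampling_mtf(esf_degraded, esf_ideal, K=1e-2):
    """Estimate the MTF from a degraded supersampled ESF via Wiener deconvolution.
    esf_degraded, esf_ideal : 1-D arrays sampled on the same supersampled grid.
    K : Wiener regularization parameter (noise-to-signal power ratio)."""
    # Mirror both ESFs so the patterns are symmetric (FFT-friendly).
    y = np.concatenate([esf_degraded, esf_degraded[::-1]])
    h = np.concatenate([esf_ideal, esf_ideal[::-1]])
    Y, H = np.fft.fft(y), np.fft.fft(h)
    # Wiener deconvolution: LSF spectrum = Y * conj(H) / (|H|^2 + K),
    # since the degraded ESF is the ideal ESF convolved with the LSF plus noise.
    lsf = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + K)))
    mtf = np.abs(np.fft.fft(lsf))
    return mtf / mtf[0]                  # normalize to unity at zero frequency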
Thermal conductivity of silicon using reverse non-equilibrium molecular dynamics
NASA Astrophysics Data System (ADS)
El-Genk, Mohamed S.; Talaat, Khaled; Cowen, Benjamin J.
2018-05-01
Simulations are performed using the reverse non-equilibrium molecular dynamics (rNEMD) method and the Stillinger-Weber (SW) potential to determine the input parameters for achieving ±1% convergence of the calculated thermal conductivity of silicon. These parameters are then used to investigate the effects of the interatomic potentials of SW, Tersoff II, Environment Dependent Interatomic Potential (EDIP), Second Nearest Neighbor, Modified Embedded-Atom Method (MEAM), and Highly Optimized Empirical Potential MEAM on determining the bulk thermal conductivity as a function of temperature (400-1000 K). At temperatures > 400 K, data collection and swap periods of 15 ns and 150 fs, system size ≥6 × 6 UC2 and system lengths ≥192 UC are adequate for ±1% convergence with all potentials, regardless of the time step size (0.1-0.5 fs). This is also true at 400 K, except for the SW potential, which requires a data collection period ≥30 ns. The calculated bulk thermal conductivities using the rNEMD method and the EDIP potential are close to, but lower than experimental values. The 10% difference at 400 K increases gradually to 20% at 1000 K.
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N
2017-01-01
Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures, but real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections based on pre-peak data were less accurate. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
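A minimal sketch of fitting the two-parameter IDEA model, assuming its usual form I(t) = [R0/(1+d)^t]^t with t in units of the serial interval; the synthetic weekly counts and starting values are placeholders, not the surveillance data analyzed in the study.

import numpy as np
from scipy.optimize import curve_fit

def idea(t, R0, d):
    """Incidence Decay with Exponential Adjustment (IDEA) model:
    expected incident cases at elapsed time t (in serial intervals)."""
    return (R0 / (1.0 + d) ** t) ** t

# Illustrative weekly counts (serial interval taken as ~ one week).
t = np.arange(1, 16, dtype=float)
counts = idea(t, R0=1.4, d=0.02) * np.random.default_rng(2).normal(1.0, 0.05, t.size)

(R0_hat, d_hat), _ = curve_fit(idea, t, counts, p0=(1.5, 0.01),
                               bounds=([1.0, 0.0], [5.0, 1.0]))

# Project the trajectory forward to estimate the peak timing and final size.
t_proj = np.arange(1, 40, dtype=float)
trajectory = idea(t_proj, R0_hat, d_hat)
peak_week = t_proj[np.argmax(trajectory)]
final_size = trajectory.sum()
print(R0_hat, d_hat, peak_week, final_size)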
Correction of the post -- necking true stress -- strain data using instrumented nanoindentation
NASA Astrophysics Data System (ADS)
Romero Fonseca, Ivan Dario
The study of large plastic deformations has been the focus of numerous studies, particularly in the fields of metal forming processes and fracture mechanics. A good understanding of the plastic flow properties of metallic alloys and of the true stresses and true strains induced during plastic deformation is crucial to optimize the aforementioned processes and to predict ductile failure in fracture mechanics analyses. Knowledge of stresses and strains is extracted from the true stress-strain curve of the material obtained from the uniaxial tensile test. In addition, stress triaxiality is manifested by the neck that develops during the last stage of a tensile test performed on a ductile material. This necking phenomenon is responsible for the deviation from a uniaxial stress state to a triaxial one, thus providing an inaccurate description of the material's behavior after the onset of necking. The research of this dissertation is aimed at the development of a correction method for the nonuniform plastic deformation (post-necking) portion of the true stress-strain curve. The proposed correction is based on the well-known relationship between hardness and flow (yield) stress, except that instrumented nanoindentation hardness is utilized rather than conventional macro- or micro-hardness. Three metals with different combinations of strain hardening behavior and crystal structure were subjected to quasi-static tensile tests: power-law strain hardening low carbon G10180 steel (BCC) and electrolytic tough pitch copper C11000 (FCC), and linear strain hardening austenitic stainless steel S30400 (FCC). Nanoindentation hardness values, measured on the broken tensile specimens, were converted into flow stress values by means of the constraint factor C from Tabor's equation, the representative plastic strain εr, and the measured post-test true plastic strains. Micro Vickers hardness testing was carried out on the samples as well. The constraint factors were 5.5, 4.5 and 4.5 and the representative plastic strains were 0.028, 0.062 and 0.061 for G10180, C11000 and S30400, respectively. The corrected curves relating post-necking flow stress to true plastic strain turned out to be well represented by a power-law function. The experimental results showed that a single value for C and for εr is not appropriate to describe materials with different plastic behaviors; therefore, Tabor's equation, along with the representative plastic strain concept, has been misused in the past. The studied materials exhibited different nanohardness and plastic strain distributions due to their inherently distinct elasto-plastic responses. The proposed post-necking correction separates out the effect of triaxiality on the uniaxial true stress-strain curve, provided that the nanohardness-flow stress relationship is based on uniaxial values of stress. Some type of size effect, due to the microvoids at the tip of the neck, influenced the nanohardness measurements. The instrumented nanoindentation technique proved to be a very suitable method to probe elasto-plastic properties of materials such as nanohardness, elastic modulus, and quasi-static strain rate sensitivity, among others. Care should be taken when converting nanohardness to Vickers hardness and vice versa because of the different area definitions used. The nanohardness-to-Vickers ratio oscillated between 1.01 and 1.17.
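A minimal sketch of the hardness-to-flow-stress conversion described above, assuming Tabor's relation σ = H/C evaluated at a plastic strain equal to the measured post-test true plastic strain plus the representative strain εr, followed by a power-law fit; the hardness values, strains, C, and εr below are placeholders, not the dissertation's measurements.

import numpy as np

# Illustrative inputs: nanohardness H (GPa) measured at points along the broken
# specimen and the local post-test true plastic strain at the same points.
H_GPa = np.array([2.4, 2.7, 3.1, 3.6, 4.0])
eps_post = np.array([0.15, 0.30, 0.50, 0.75, 1.00])

C = 4.5          # Tabor constraint factor (material dependent; 4.5-5.5 in the study)
eps_r = 0.06     # representative plastic strain of the indenter (assumed)

flow_stress = H_GPa * 1000.0 / C          # MPa, via Tabor's relation sigma = H / C
eps_total = eps_post + eps_r              # strain at which that flow stress applies

# Fit a power law sigma = K * eps^n to the corrected post-necking data.
n, logK = np.polyfit(np.log(eps_total), np.log(flow_stress), 1)
K = np.exp(logK)
print(f"sigma = {K:.0f} * eps^{n:.3f}  (MPa)")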
Investigating Response from Turbulent Boundary Layer Excitations on a Real Launch Vehicle using SEA
NASA Technical Reports Server (NTRS)
Harrison, Phillip; LaVerde,Bruce; Teague, David
2009-01-01
Statistical Energy Analysis (SEA) response has been fairly well anchored to test observations for Diffuse Acoustic Field (DAF) loading by others. Meanwhile, not many examples can be found in the literature anchoring SEA vehicle panel response results to Turbulent Boundary Layer (TBL) fluctuating pressure excitations. This deficiency is especially true for supersonic trajectories such as those required by this nation's launch vehicles. Space Shuttle response and excitation data recorded from vehicle flight measurements during the development flights were used in a trial to assess the capability of the SEA tool to predict similar responses. Various known/measured inputs were used. These were supplemented with a range of assumed values in order to cover unknown parameters of the flight. This comparison is presented as "Part A" of the study. A secondary, but perhaps more important, objective is to provide more clarity concerning the accuracy and conservatism that can be expected from response estimates of TBL-excited vehicle models in SEA (Part B). What range of parameters must be included in such an analysis in order to land on the conservative side in response predictions? What is the sensitivity of the results to changes in these input parameters? The TBL fluid-structure loading model used for this study is provided by the SEA module of the commercial code VA One.
Study of Far-Field Directivity Pattern for Linear Arrays
NASA Astrophysics Data System (ADS)
Ana-Maria, Chiselev; Luminita, Moraru; Laura, Onose
2011-10-01
A model to calculate the far-field directivity pattern is developed in this paper. Based on this model, the three-dimensional beam pattern is introduced and analyzed in order to investigate the geometric parameters of linear arrays and their influence on the directivity pattern. Simulations in the azimuthal plane are performed to highlight the influence of transducer parameters, including the number of elements and the inter-element spacing. These parameters are shown to be important factors that influence the directivity pattern and the appearance of side lobes for linear arrays.
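A minimal sketch of the classical far-field array factor of a uniform linear array, which captures how the element count and inter-element spacing shape the main lobe and side lobes; element directivity and apodization are ignored, and the numerical values are illustrative.

import numpy as np

def array_factor(theta, n_elements, spacing, wavelength, steer_deg=0.0):
    """Normalized far-field array factor of a uniform linear array.
    theta : observation angles in radians, measured from broadside."""
    k = 2.0 * np.pi / wavelength
    psi = k * spacing * (np.sin(theta) - np.sin(np.radians(steer_deg)))
    num = np.sin(n_elements * psi / 2.0)
    den = n_elements * np.sin(psi / 2.0)
    # Handle the 0/0 limit at the main-lobe peak, where the factor equals 1.
    af = np.divide(num, den, out=np.ones_like(psi), where=np.abs(den) > 1e-12)
    return np.abs(af)

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
af = array_factor(theta, n_elements=16, spacing=0.5e-3, wavelength=1.0e-3)
# More elements narrow the main lobe; spacing beyond lambda/2 introduces grating lobes.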
New true-triaxial rock strength criteria considering intrinsic material characteristics
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Cheng; Quan, Xiaowei; Wang, Yanning; Yu, Liyuan; Jiang, Binsong
2018-02-01
A reasonable strength criterion should reflect the hydrostatic pressure effect, the minimum principal stress effect, and the intermediate principal stress effect. The first two effects can be described by the meridian curves, and the last one mainly depends on the Lode angle dependence function. Among the three conventional strength criteria, i.e. the Mohr-Coulomb (MC), Hoek-Brown (HB), and Exponent (EP) criteria, the difference between the generalized compression and extension strengths of the EP criterion first increases and then decreases, and tends to zero when the hydrostatic pressure is sufficiently large. This is in accordance with intrinsic rock strength characteristics. Moreover, the critical hydrostatic pressure I_c corresponding to the maximum difference between the generalized compression and extension strengths can easily be adjusted through the minimum principal stress influence parameter K. The exponent function is therefore a more reasonable meridian curve, which reflects the hydrostatic pressure effect well and is employed to describe the generalized compression and extension strengths. Meanwhile, three Lode angle dependence functions, L_MN, L_WW, and L_YMH, which unconditionally satisfy the convexity and differentiability requirements, are employed to represent the intermediate principal stress effect. Since the actual strength surface should be located between the generalized compression and extension surfaces, new true-triaxial criteria are proposed by combining the two states of the EP criterion through a Lode angle dependence function at the same Lode angle. The proposed new true-triaxial criteria have the same strength parameters as the EP criterion. Finally, 14 groups of triaxial test data are employed to validate the proposed criteria. The results show that the three new true-triaxial exponent criteria, especially the Exponent Willam-Warnke (EPWW) criterion, give much lower misfits, which illustrates that the EP criterion and L_WW have the more reasonable meridian and deviatoric function forms, respectively. The proposed new true-triaxial strength criteria can provide a theoretical foundation for stability analysis and the optimization of support design in rock engineering.
A Flight Control System for Small Unmanned Aerial Vehicle
NASA Astrophysics Data System (ADS)
Tunik, A. A.; Nadsadnaya, O. I.
2018-03-01
The program adaptation of the controller for the flight control system (FCS) of an unmanned aerial vehicle (UAV) is considered. Linearized flight dynamic models depend mainly on the true airspeed of the UAV, which is measured by the onboard air data system. This enables its use for program adaptation of the FCS over the full range of altitudes and velocities that define the flight operating range. An FCS with program adaptation, based on static feedback (SF), is selected. The SF parameters for every sub-range of the true airspeed are determined using the linear matrix inequality approach for discrete systems to synthesize a suboptimal robust H∞ controller. The use of Lagrange interpolation between true airspeed sub-ranges provides continuous adaptation. The efficiency of the proposed approach is demonstrated on an example of the heading stabilization system.
Knipling, Edward B.; Kramer, Paul J.
1967-01-01
The dye method for measuring water potential was examined and compared with the thermocouple psychrometer method in order to evaluate its usefulness for measuring leaf water potentials of forest trees and common laboratory plants. Psychrometer measurements are assumed to represent the true leaf water potentials. Because of the contamination of test solutions by cell sap and leaf surface residues, dye method values of most species varied about 1 to 5 bars from psychrometer values over the leaf water potential range of 0 to −30 bars. The dye method is useful for measuring changes and relative values in leaf potential. Because of species differences in the relationships of dye method values to true leaf water potentials, dye method values should be interpreted with caution when comparing different species or the same species growing in widely different environments. Despite its limitations the dye method has a usefulness to many workers because it is simple, requires no elaborate equipment, and can be used in both the laboratory and field. PMID:16656657
NASA Astrophysics Data System (ADS)
Moustris, Konstantinos; Tsiros, Ioannis X.; Tseliou, Areti; Nastos, Panagiotis
2018-04-01
The present study deals with the development and application of artificial neural network models (ANNs) to estimate the values of a complex human thermal comfort-discomfort index associated with urban heat and cool island conditions inside various urban clusters using as only inputs air temperature data from a standard meteorological station. The index used in the study is the Physiologically Equivalent Temperature (PET) index which requires as inputs, among others, air temperature, relative humidity, wind speed, and radiation (short- and long-wave components). For the estimation of PET hourly values, ANN models were developed, appropriately trained, and tested. Model results are compared to values calculated by the PET index based on field monitoring data for various urban clusters (street, square, park, courtyard, and gallery) in the city of Athens (Greece) during an extreme hot weather summer period. For the evaluation of the predictive ability of the developed ANN models, several statistical evaluation indices were applied: the mean bias error, the root mean square error, the index of agreement, the coefficient of determination, the true predictive rate, the false alarm rate, and the Success Index. According to the results, it seems that ANNs present a remarkable ability to estimate hourly PET values within various urban clusters using only hourly values of air temperature. This is very important in cases where the human thermal comfort-discomfort conditions have to be analyzed and the only available parameter is air temperature.
2014-04-11
[Fragmentary record: only figure-caption text survives, describing the temperature dependence of thermal conductivity, specific heat, and the true and room-temperature-based mean linear thermal expansion coefficients of the workpiece materials to be joined.]
Aquifer Characterization from Surface Geo-electrical Method, western coast of Maharashtra, India
NASA Astrophysics Data System (ADS)
DAS, A.; Maiti, D. S.
2017-12-01
Knowledge of aquifer parameters is necessary for managing groundwater resources. These parameters are usually evaluated through pumping tests carried out in bore wells. However, pumping tests are expensive and time consuming to carry out at many sites, and sometimes no borehole is available at a required site. Therefore, an alternative method is put forward in which the aquifer parameters are evaluated from a surface geophysical method. Vertical electrical soundings (VES) with the Schlumberger configuration were carried out at 85 stations over Sindhudurg district, located in the Konkan region of Maharashtra state, India, between north latitudes 15°37' and 16°40' and east longitudes 73°19' and 74°13'. The area consists of hard rock and has acute groundwater problems. A maximum current electrode spacing of 200 m was used for every sounding. The geo-electrical sounding data (true resistivity and thickness) were interpreted through a resistivity inversion approach, and the aquifer variables (Dar-Zarrouk (D-Z) parameters, mean resistivity, hydraulic conductivity, transmissivity, and coefficient of anisotropy) were calculated from the inverted parameters using empirical formulae. Cross-correlation analysis between these parameters was then used to characterize the aquifer over the study area. Finally, contour plots of the aquifer parameters were generated, revealing their detailed distribution throughout the study area. High values of longitudinal conductance, hydraulic conductivity, and transmissivity are demarcated over the Kelus, Vengurle, Mochemar, and Shiroda villages, possibly due to intrusion of saline water from the Arabian Sea. From the contour trends, the aquifers are characterized so that the groundwater resources of western Maharashtra can be assessed and managed properly. The method, which includes DC resistivity inversion, could further be applied to hydrological characterization in complex coastal parts of India.
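A minimal sketch of computing the Dar-Zarrouk (D-Z) parameters and derived aquifer quantities from one inverted layered model; the layer resistivities and thicknesses are placeholders, and the empirical link to hydraulic conductivity and transmissivity (a constant Kσ product, Niwas-Singhal style) is an assumption that would need site calibration.

import numpy as np

# Inverted VES model for one station (illustrative): resistivities (ohm-m)
# and thicknesses (m) of the saturated layers forming the aquifer.
rho = np.array([45.0, 120.0, 80.0])
h = np.array([8.0, 15.0, 22.0])

S = np.sum(h / rho)               # longitudinal conductance (Siemens)
T = np.sum(h * rho)               # transverse resistance (ohm-m^2)
H = h.sum()
rho_mean = np.sqrt(T / S)         # mean (root-mean-square) resistivity
anisotropy = np.sqrt(T * S) / H   # coefficient of anisotropy

# Empirical link to hydraulic properties (assumed site-calibrated constant Kσ):
K_sigma = 1.2e-3                  # assumed product of hydraulic conductivity and
                                  # aquifer electrical conductivity
K_hyd = K_sigma * rho_mean        # hydraulic conductivity, K = (Kσ)·ρ
transmissivity = K_sigma * T      # Niwas-Singhal style relation Tr = (Kσ)·T

print(S, T, rho_mean, anisotropy, K_hyd, transmissivity)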
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
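A minimal sketch (not the authors' implementation) of why a hierarchical AR(1) data-error model avoids covariance-matrix inverses and determinants: the residuals are whitened by the AR recursion and a diagonal Gaussian likelihood is evaluated, with the AR coefficient and standard deviation treated as unknowns alongside the earth-model parameters.

import numpy as np

def ar1_log_likelihood(residuals, a1, sigma):
    """Log-likelihood of residuals r_t = d_obs - d_pred under a hierarchical
    AR(1) error model e_t = a1*e_{t-1} + w_t, with w_t ~ N(0, sigma^2).
    Conditional on the first residual (a common simplification); no covariance
    matrix inverse or determinant is required."""
    r = np.asarray(residuals, dtype=float)
    innovations = r[1:] - a1 * r[:-1]          # whitened residuals
    n = innovations.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum(innovations ** 2) / sigma ** 2)

# In a trans-dimensional sampler, (a1, sigma) would be sampled along with the
# layered vs-profile parameters, and the AR order can be chosen by checking
# posterior residual samples for remaining serial correlation.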
False-Positive Rate of AKI Using Consensus Creatinine–Based Criteria
Lin, Jennie; Fernandez, Hilda; Shashaty, Michael G.S.; Negoianu, Dan; Testani, Jeffrey M.; Berns, Jeffrey S.; Parikh, Chirag R.
2015-01-01
Background and objectives Use of small changes in serum creatinine to diagnose AKI allows for earlier detection but may increase diagnostic false–positive rates because of inherent laboratory and biologic variabilities of creatinine. Design, setting, participants, & measurements We examined serum creatinine measurement characteristics in a prospective observational clinical reference cohort of 2267 adult patients with AKI by Kidney Disease Improving Global Outcomes creatinine criteria and used these data to create a simulation cohort to model AKI false–positive rates. We simulated up to seven successive blood draws on an equal population of hypothetical patients with unchanging true serum creatinine values. Error terms generated from laboratory and biologic variabilities were added to each simulated patient’s true serum creatinine value to obtain the simulated measured serum creatinine for each blood draw. We determined the proportion of patients who would be erroneously diagnosed with AKI by Kidney Disease Improving Global Outcomes creatinine criteria. Results Within the clinical cohort, 75.0% of patients received four serum creatinine draws within at least one 48-hour period during hospitalization. After four simulated creatinine measurements that accounted for laboratory variability calculated from assay characteristics and 4.4% of biologic variability determined from the clinical cohort and publicly available data, the overall false–positive rate for AKI diagnosis was 8.0% (interquartile range =7.9%–8.1%), whereas patients with true serum creatinine ≥1.5 mg/dl (representing 21% of the clinical cohort) had a false–positive AKI diagnosis rate of 30.5% (interquartile range =30.1%–30.9%) versus 2.0% (interquartile range =1.9%–2.1%) in patients with true serum creatinine values <1.5 mg/dl (P<0.001). Conclusions Use of small serum creatinine changes to diagnose AKI is limited by high false–positive rates caused by inherent variability of serum creatinine at higher baseline values, potentially misclassifying patients with CKD in AKI studies. PMID:26336912
False-Positive Rate of AKI Using Consensus Creatinine-Based Criteria.
Lin, Jennie; Fernandez, Hilda; Shashaty, Michael G S; Negoianu, Dan; Testani, Jeffrey M; Berns, Jeffrey S; Parikh, Chirag R; Wilson, F Perry
2015-10-07
Use of small changes in serum creatinine to diagnose AKI allows for earlier detection but may increase diagnostic false-positive rates because of inherent laboratory and biologic variabilities of creatinine. We examined serum creatinine measurement characteristics in a prospective observational clinical reference cohort of 2267 adult patients with AKI by Kidney Disease Improving Global Outcomes creatinine criteria and used these data to create a simulation cohort to model AKI false-positive rates. We simulated up to seven successive blood draws on an equal population of hypothetical patients with unchanging true serum creatinine values. Error terms generated from laboratory and biologic variabilities were added to each simulated patient's true serum creatinine value to obtain the simulated measured serum creatinine for each blood draw. We determined the proportion of patients who would be erroneously diagnosed with AKI by Kidney Disease Improving Global Outcomes creatinine criteria. Within the clinical cohort, 75.0% of patients received four serum creatinine draws within at least one 48-hour period during hospitalization. After four simulated creatinine measurements that accounted for laboratory variability calculated from assay characteristics and 4.4% of biologic variability determined from the clinical cohort and publicly available data, the overall false-positive rate for AKI diagnosis was 8.0% (interquartile range =7.9%-8.1%), whereas patients with true serum creatinine ≥1.5 mg/dl (representing 21% of the clinical cohort) had a false-positive AKI diagnosis rate of 30.5% (interquartile range =30.1%-30.9%) versus 2.0% (interquartile range =1.9%-2.1%) in patients with true serum creatinine values <1.5 mg/dl (P<0.001). Use of small serum creatinine changes to diagnose AKI is limited by high false-positive rates caused by inherent variability of serum creatinine at higher baseline values, potentially misclassifying patients with CKD in AKI studies. Copyright © 2015 by the American Society of Nephrology.
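A minimal simulation in the spirit of the study: true creatinine is held fixed, lab and biologic variability are added to repeated draws, and a false-positive AKI is flagged when the small-change criterion (a rise of at least 0.3 mg/dl within the 48-h window) is met; the variability magnitudes, baseline levels, and the assumption that all four draws fall in one window are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_patients, n_draws = 100_000, 4         # draws assumed to fall within one 48-h window
true_cr = rng.choice([0.8, 1.0, 1.3, 1.8, 2.5], size=n_patients)  # unchanging truth

cv_lab, cv_bio = 0.03, 0.044             # assumed lab CV and 4.4% biologic CV
cv_total = np.sqrt(cv_lab ** 2 + cv_bio ** 2)
measured = true_cr[:, None] * (1.0 + cv_total * rng.standard_normal((n_patients, n_draws)))

# Small-change criterion: any rise >= 0.3 mg/dl relative to an earlier draw.
running_min = np.minimum.accumulate(measured, axis=1)
false_positive = np.any(measured - running_min >= 0.3, axis=1)

for level in [0.8, 1.0, 1.3, 1.8, 2.5]:
    rate = false_positive[true_cr == level].mean()
    print(f"true Cr {level:.1f} mg/dl: false-positive AKI rate {rate:.1%}")
# Higher baseline creatinine -> larger absolute noise -> more spurious 0.3 mg/dl rises.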
Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler
2016-01-01
This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
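A minimal sketch of Monte Carlo uncertainty propagation for a derived quantity such as free-stream Mach number computed from total and static pressure via the isentropic relation; the elemental uncertainty magnitudes and their split into random and systematic parts are illustrative, not the facility's values.

import numpy as np

rng = np.random.default_rng(42)
gamma = 1.4
p0_nom, ps_nom = 101_325.0, 48_000.0     # nominal total/static pressures (Pa), illustrative

def mach(p0, ps):
    """Isentropic free-stream Mach number from total and static pressure."""
    return np.sqrt(2.0 / (gamma - 1.0) * ((p0 / ps) ** ((gamma - 1.0) / gamma) - 1.0))

n = 200_000
# Systematic (bias) contributions: one offset per Monte Carlo realization.
b_p0 = rng.normal(0.0, 150.0, n)
b_ps = rng.normal(0.0, 120.0, n)
# Random contributions: scatter of individual observations.
r_p0 = rng.normal(0.0, 80.0, n)
r_ps = rng.normal(0.0, 60.0, n)

M = mach(p0_nom + b_p0 + r_p0, ps_nom + b_ps + r_ps)
print(f"nominal M = {mach(p0_nom, ps_nom):.4f}, propagated std = {M.std():.5f}")
# Random-only and systematic-only runs can be propagated separately and then
# combined to report the random, systematic, and total uncertainty.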
7 CFR 1001.60 - Handler's value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Handler's value of milk. 1001.60 Section 1001.60 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE MILK IN THE NORTHEAST MARKETING AREA Order Regulating...
Measurement of stress-strain behaviour of human hair fibres using optical techniques.
Lee, J; Kwon, H J
2013-06-01
Many studies have presented the stress-strain relationship of human hair, but most of them have been based on an engineering stress-strain curve, which is not a true representation of stress-strain behaviour. In this study, a more accurate 'true' stress-strain curve of human hair was determined by applying optical techniques to images of the hair deformed under tension. This was achieved by applying digital image cross-correlation (DIC) to 10× magnified images of hair fibres taken under increasing tension to estimate the strain increments. True strain was calculated by summation of the strain increments according to the theoretical definition of 'true' strain. The variation in diameter with increasing longitudinal elongation was also measured from the 40× magnified images to estimate the Poisson's ratio and the true stress. By combining the true strain and the true stress, a true stress-strain curve could be determined, which demonstrated much higher stress values than the conventional engineering stress-strain curve at the same degree of deformation. Four regions were identified in the true stress-strain relationship, and empirical constitutive equations were proposed for each region. Theoretical analysis of the necking condition using the constitutive equations provided insight into the failure mechanism of human hair. This analysis indicated that local thinning caused by necking does not occur in the hair fibres; rather, relatively uniform deformation takes place until final failure (fracture) eventually occurs. © 2012 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
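A minimal sketch of how true stress and true strain follow from the load and the optically measured diameter and gauge-length increments; the load, diameter and length values below are placeholders, not the DIC measurements of the study.

import numpy as np

# Illustrative tensile data for a single fibre: load (N), diameter (m) and
# gauge length (m) measured optically at each load step.
load = np.array([0.00, 0.45, 0.60, 0.70, 0.78])
diameter = np.array([80.0e-6, 79.6e-6, 78.9e-6, 78.1e-6, 77.1e-6])
length = np.array([10.00e-3, 10.12e-3, 10.31e-3, 10.55e-3, 10.86e-3])

# True stress uses the current (not initial) cross-section from the measured diameter.
area = np.pi * diameter ** 2 / 4.0
true_stress = load / area                               # Pa

# True strain as the sum of incremental strains, equivalent to ln(l/l0).
true_strain = np.insert(np.cumsum(np.log(length[1:] / length[:-1])), 0, 0.0)

# Transverse strain from the diameter gives a rough Poisson's ratio estimate
# from the early part of the curve (illustrative only).
eps_trans = np.log(diameter / diameter[0])
nu_est = -np.polyfit(true_strain[:3], eps_trans[:3], 1)[0]
print(true_stress, true_strain, nu_est)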
NASA Astrophysics Data System (ADS)
Bean, Jacob L.; McArthur, Barbara E.; Benedict, G. Fritz; Harrison, Thomas E.; Bizyaev, Dmitry; Nelan, Edmund; Smith, Verne V.
2007-08-01
We have determined a dynamical mass for the companion to HD 33636 that indicates it is a low-mass star instead of an exoplanet. Our result is based on an analysis of Hubble Space Telescope (HST) astrometry and ground-based radial velocity data. We have obtained high-cadence radial velocity measurements spanning 1.3 yr of HD 33636 with the Hobby-Eberly Telescope at McDonald Observatory. We combined these data with previously published velocities to create a data set that spans 9 yr. We used this data set to search for, and place mass limits on, the existence of additional companions in the HD 33636 system. Our high-precision astrometric observations of the system with the HST Fine Guidance Sensor 1r span 1.2 yr. We simultaneously modeled the radial velocity and astrometry data to determine the parallax, proper motion, and perturbation orbit parameters of HD 33636. Our derived parallax, πabs=35.6+/-0.2 mas, agrees within the uncertainties with the Hipparcos value. We find a perturbation period P=2117.3+/-0.8 days, semimajor axis aA=14.2+/-0.2 mas, and system inclination i=4.1deg+/-0.1deg. Assuming the mass of the primary star to be MA=1.02+/-0.03 Msolar, we obtain a companion mass MB=142+/-11 MJup=0.14+/-0.01 Msolar. The much larger true mass of the companion relative to its minimum mass estimated from the spectroscopic orbit parameters (Msini=9.3 MJup) is due to the nearly face-on orbit orientation. This result demonstrates the value of follow-up astrometric observations to determine the true masses of exoplanet candidates detected with the radial velocity method. Based on data obtained with the NASA/ESA Hubble Space Telescope (HST) and the Hobby-Eberly Telescope (HET). The HST observations were obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The HET is a joint project of the University of Texas at Austin, Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universität Müenchen, and Georg-August-Universität Göttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly.
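The quoted companion mass can be sanity-checked against the spectroscopic minimum mass and the fitted inclination; ignoring the parameter covariances of the full simultaneous fit (which is why this simple scaling differs slightly from the quoted 142 MJup):

\[
M_B \;\approx\; \frac{M_B \sin i}{\sin i}
  \;=\; \frac{9.3\,M_{\mathrm{Jup}}}{\sin 4.1^{\circ}}
  \;\approx\; \frac{9.3\,M_{\mathrm{Jup}}}{0.0715}
  \;\approx\; 1.3\times 10^{2}\,M_{\mathrm{Jup}},
\]

in line with the 142 ± 11 MJup obtained from the full simultaneous astrometric and radial-velocity fit.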
26 CFR 20.2032A-4 - Method of valuing farm real property.
Code of Federal Regulations, 2014 CFR
2014-04-01
... property types. Only rentals from tracts of comparable farm property which are rented solely for an amount... affects efficient management and use of property and value per se; and (10) Availability of, and type of... 26 Internal Revenue 14 2014-04-01 2013-04-01 true Method of valuing farm real property. 20.2032A-4...
The Heuristic Value of p in Inductive Statistical Inference
Krueger, Joachim I.; Heck, Patrick R.
2017-01-01
Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as ‘the’ p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say. PMID:28649206
The Heuristic Value of p in Inductive Statistical Inference.
Krueger, Joachim I; Heck, Patrick R
2017-01-01
Many statistical methods yield the probability of the observed data - or data more extreme - under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duane, Greg; Tsonis, Anastasios; Kocarev, Ljupco
This collaborative research has several components, but the main idea is that when imperfect copies of a given nonlinear dynamical system are coupled, they may synchronize for some set of coupling parameters. This idea is to be tested for several IPCC-like models, each with its own formulation and representing an "imperfect" copy of the true climate system. By computing the coupling parameters that lead the models to a synchronized state, a consensus on climate change simulations may be achieved.
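A minimal sketch of the core idea using two "imperfect copies" of the Lorenz-63 system with different parameter values that are nudged toward each other; the coupling strength and parameter offsets are illustrative, and the climate application of course involves far higher-dimensional models.

import numpy as np

def lorenz_rhs(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, dt, *params):
    """One fourth-order Runge-Kutta step of the uncoupled model."""
    k1 = lorenz_rhs(state, *params)
    k2 = lorenz_rhs(state + 0.5 * dt * k1, *params)
    k3 = lorenz_rhs(state + 0.5 * dt * k2, *params)
    k4 = lorenz_rhs(state + dt * k3, *params)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps, c = 0.01, 20_000, 2.0           # c: mutual nudging strength (assumed)
a = np.array([1.0, 1.0, 1.0])                # "model" A, slightly wrong parameters
b = np.array([1.1, 0.9, 1.2])                # "model" B, different wrong parameters
err = []
for _ in range(n_steps):
    a_new = step(a, dt, 10.0, 28.5, 8.0 / 3.0) + dt * c * (b - a)
    b_new = step(b, dt, 9.5, 27.5, 8.0 / 3.0) + dt * c * (a - b)
    a, b = a_new, b_new
    err.append(np.linalg.norm(a - b))
print(f"mean |A-B| over last 1000 steps: {np.mean(err[-1000:]):.3f}")
# For sufficiently strong coupling the two imperfect copies stay synchronized
# (|A-B| remains small), and their common trajectory acts as a consensus solution.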
VarBin, a novel method for classifying true and false positive variants in NGS data
2013-01-01
Background Variant discovery for rare genetic diseases using Illumina genome or exome sequencing involves screening of up to millions of variants to find only the one or few causative variant(s). Sequencing or alignment errors create "false positive" variants, which are often retained in the variant screening process. Methods to remove false positive variants often retain many false positive variants. This report presents VarBin, a method to prioritize variants based on a false positive variant likelihood prediction. Methods VarBin uses the Genome Analysis Toolkit variant calling software to calculate the variant-to-wild type genotype likelihood ratio at each variant change and position divided by read depth. The resulting Phred-scaled, likelihood-ratio by depth (PLRD) was used to segregate variants into 4 Bins with Bin 1 variants most likely true and Bin 4 most likely false positive. PLRD values were calculated for a proband of interest and 41 additional Illumina HiSeq, exome and whole genome samples (proband's family or unrelated samples). At variant sites without apparent sequencing or alignment error, wild type/non-variant calls cluster near -3 PLRD and variant calls typically cluster above 10 PLRD. Sites with systematic variant calling problems (evident by variant quality scores and biases as well as displayed on the iGV viewer) tend to have higher and more variable wild type/non-variant PLRD values. Depending on the separation of a proband's variant PLRD value from the cluster of wild type/non-variant PLRD values for background samples at the same variant change and position, the VarBin method's classification is assigned to each proband variant (Bin 1 to Bin 4). Results To assess VarBin performance, Sanger sequencing was performed on 98 variants in the proband and background samples. True variants were confirmed in 97% of Bin 1 variants, 30% of Bin 2, and 0% of Bin 3/Bin 4. Conclusions These data indicate that VarBin correctly classifies the majority of true variants as Bin 1 and Bin 3/4 contained only false positive variants. The "uncertain" Bin 2 contained both true and false positive variants. Future work will further differentiate the variants in Bin 2. PMID:24266885
NASA Technical Reports Server (NTRS)
Dooling, Robert J.
2012-01-01
NASA Engineering's Orion Script Generator (OSG) is a program designed to run on Exploration Flight Test One software. The script generator creates a SuperScript file that, when run, accepts the filename of a listing of Compact Unique Identifiers (CUIs). These CUIs correspond to different variables on the Orion spacecraft, such as the temperature of a component X, the active or inactive status of another component Y, and so on. OSG uses a linked database to retrieve the value for each CUI, such as "100 05," "True," and so on. Finally, OSG writes SuperScript code to display each of these variables before outputting the ssi file that allows recipients to view a graphical representation of Orion Flight Test One's status through these variables. This project's main challenge was creating flexible software that accepts and transfers many types of data, from Boolean (true or false) values to "Unsigned Long Long" values (any number from 0 to 18,446,744,073,709,551,615). We also needed to allow bit manipulation for each variable, requiring us to program functions that could convert any of the multiple types of data into binary code. Throughout the project, we explored different methods to optimize the speed of working with the CUI database and long binary numbers. For example, the program handled extended binary numbers much more efficiently when we stored them as collections of Boolean values (true or false representing 1 or 0) instead of as collections of character strings or numbers. We also strove to make OSG as user-friendly and accommodating of different needs as possible: its default behavior is to display a current CUI's maximum value and minimum value with three to five intermediate values in between, all in descending order. Fortunately, users can also add other input on the same lines as each CUI name to request different high values, low values, display options (ascending, sine, and so on), and interval sizes for generating intermediate values. Developing input validation took up quite a bit of time, but OSG's flexibility in the end was worth it.
New approach to calculate the true-coincidence effect of HpGe detector
NASA Astrophysics Data System (ADS)
Alnour, I. A.; Wagiran, H.; Ibrahim, N.; Hamzah, S.; Siong, W. B.; Elias, M. S.
2016-01-01
The corrections for true-coincidence effects in HpGe detectors are important, especially at small source-to-detector distances. This work established an approach to determine the true-coincidence effects experimentally for the HpGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C that are in operation at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The correction for true-coincidence effects was performed close to the detector, at distances of 2 and 5 cm, using 57Co, 60Co, 133Ba and 137Cs as standard point sources. The correction factors ranged between 0.93-1.10 at 2 cm and 0.97-1.00 at 5 cm for the Canberra HpGe detector, whereas for the Ortec HpGe detector they ranged between 0.92-1.13 and 0.95-1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, polynomial parameter functions were fitted using a MATLAB program in order to obtain an accurate fit to the experimental data points.
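A minimal sketch of fitting a polynomial parameter function to measured correction factors versus energy (the study used MATLAB; a numpy equivalent is shown, and the energies and factors below are placeholders, not the paper's values).

import numpy as np

# Illustrative true-coincidence correction factors at 2 cm versus gamma energy (keV).
energy = np.array([122.0, 356.0, 662.0, 1173.0, 1332.0])
corr = np.array([1.08, 1.10, 1.00, 0.95, 0.93])

coeffs = np.polyfit(np.log(energy), corr, deg=2)     # 2nd-order polynomial in ln(E)
fit = np.poly1d(coeffs)

def correction(e_kev):
    """Interpolated true-coincidence correction factor at energy e_kev (keV)."""
    return fit(np.log(e_kev))

print(correction(511.0))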
Anderman, Evan R.; Hill, Mary Catherine
2001-01-01
Observations of the advective component of contaminant transport in steady-state flow fields can provide important information for the calibration of ground-water flow models. This report documents the Advective-Transport Observation (ADV2) Package, version 2, which allows advective-transport observations to be used in the three-dimensional ground-water flow parameter-estimation model MODFLOW-2000. The ADV2 Package is compatible with some of the features in the Layer-Property Flow and Hydrogeologic-Unit Flow Packages, but is not compatible with the Block-Centered Flow or Generalized Finite-Difference Packages. The particle-tracking routine used in the ADV2 Package duplicates the semi-analytical method of MODPATH, as shown in a sample problem. Particles can be tracked in a forward or backward direction, and effects such as retardation can be simulated through manipulation of the effective-porosity value used to calculate velocity. Particles can be discharged at cells that are considered to be weak sinks, in which the sink applied does not capture all the water flowing into the cell, using one of two criteria: (1) if there is any outflow to a boundary condition such as a well or surface-water feature, or (2) if the outflow exceeds a user specified fraction of the cell budget. Although effective porosity could be included as a parameter in the regression, this capability is not included in this package. The weighted sum-of-squares objective function, which is minimized in the Parameter-Estimation Process, was augmented to include the square of the weighted x-, y-, and z-components of the differences between the simulated and observed advective-front locations at defined times, thereby including the direction of travel as well as the overall travel distance in the calibration process. The sensitivities of the particle movement to the parameters needed to minimize the objective function are calculated for any particle location using the exact sensitivity-equation approach; the equations are derived by taking the partial derivatives of the semi-analytical particle-tracking equation with respect to the parameters. The ADV2 Package is verified by showing that parameter estimation using advective-transport observations produces the true parameter values in a small but complicated test case when exact observations are used. To demonstrate how the ADV2 Package can be used in practice, a field application is presented. In this application, the ADV2 Package is used first in the Sensitivity-Analysis mode of MODFLOW-2000 to calculate measures of the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Cape Cod, Massachusetts. The ADV2 Package is then used in the Parameter-Estimation mode of MODFLOW-2000 to determine best-fit parameter values. It is concluded that, for this problem, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and the use of formal parameter-estimation methods and related techniques produced significant insight into the physical system.
Plescia, Carolina; De Sio, Lorenzo
2018-01-01
Ecological inference refers to the study of individuals using aggregate data and it is used in an impressive number of studies; it is well known, however, that the study of individuals using group data suffers from an ecological fallacy problem (Robinson in Am Sociol Rev 15:351-357, 1950). This paper evaluates the accuracy of two recent methods, the Rosen et al. (Stat Neerl 55:134-156, 2001) and the Greiner and Quinn (J R Stat Soc Ser A (Statistics in Society) 172:67-81, 2009) and the long-standing Goodman's (Am Sociol Rev 18:663-664, 1953; Am J Sociol 64:610-625, 1959) method designed to estimate all cells of R × C tables simultaneously by employing exclusively aggregate data. To conduct these tests we leverage on extensive electoral data for which the true quantities of interest are known. In particular, we focus on examining the extent to which the confidence intervals provided by the three methods contain the true values. The paper also provides important guidelines regarding the appropriate contexts for employing these models.
p-Curve and p-Hacking in Observational Research.
Bruns, Stephan B; Ioannidis, John P A
2016-01-01
The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call recent studies into question that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.
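A minimal sketch of the kind of simulation described: the direct effect of x on y is exactly zero, yet an omitted confounder produces a right-skewed p-curve among the "significant" results; the sample sizes, confounder strength, and number of simulated studies are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_studies = 200, 20_000
sig_p = []
for _ in range(n_studies):
    u = rng.standard_normal(n)                  # omitted confounder
    x = 0.5 * u + rng.standard_normal(n)        # x correlated with u
    y = 0.3 * u + rng.standard_normal(n)        # y depends only on u, not on x
    r, p = stats.pearsonr(x, y)                 # naive test of the x-y association
    if p < 0.05:
        sig_p.append(p)

hist, _ = np.histogram(sig_p, bins=np.arange(0.0, 0.051, 0.01))
print(hist / len(sig_p))
# More significant p-values pile up near 0 than near 0.05 (right skew) even though
# the true x -> y effect is zero, mimicking the p-curve expected from true effects.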
Improving the accuracy of macromolecular structure refinement at 7 Å resolution.
Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F
2012-06-06
In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.
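A minimal sketch of the R-factor underlying R_work and R_free, computed on observed and calculated structure-factor amplitudes with a least-squares scale factor and a held-out "free" set; the amplitude arrays and free-set fraction are placeholders.

import numpy as np

def r_factor(f_obs, f_calc):
    """Crystallographic R = sum| |Fobs| - k|Fcalc| | / sum|Fobs|,
    with k the least-squares scale between the two amplitude sets."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    k = np.sum(f_obs * f_calc) / np.sum(f_calc ** 2)
    return np.sum(np.abs(f_obs - k * f_calc)) / np.sum(f_obs)

rng = np.random.default_rng(3)
f_obs = rng.gamma(2.0, 100.0, size=5000)               # placeholder amplitudes
f_calc = f_obs * rng.normal(1.0, 0.25, size=5000)      # imperfect model

free = rng.random(5000) < 0.05                         # ~5% of reflections set aside
print("R_work =", r_factor(f_obs[~free], f_calc[~free]))
print("R_free =", r_factor(f_obs[free], f_calc[free]))
# R_free is computed from reflections never used in refinement, so it tracks model
# accuracy rather than overfitting, which is why it stays informative at low resolution.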
NASA Astrophysics Data System (ADS)
Romanov, I. S.; Prudaev, I. A.; Brudnyi, V. N.
2018-05-01
The results of an investigation of Mg diffusion in blue LED structures with InGaN/GaN quantum wells are presented for various growth temperatures of the p-GaN layer. The values of the diffusion coefficient estimated for true growth temperatures of 860, 910, and 980°C were 7.5·10⁻¹⁷, 2.8·10⁻¹⁶, and 1.2·10⁻¹⁵ cm²/s, respectively. The temperature values given in the work were measured on the surface of the growing layer in situ using a pyrometer. The calculated activation energy for the temperature dependence of the diffusion coefficient was 2.8 eV.
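A minimal sketch (not from the paper) of how the quoted activation energy follows from the three reported diffusion coefficients via an Arrhenius fit, D = D0·exp(-Ea/kT):

```python
import numpy as np

k_B = 8.617e-5                                   # Boltzmann constant, eV/K
T = np.array([860.0, 910.0, 980.0]) + 273.15     # growth temperatures, K
D = np.array([7.5e-17, 2.8e-16, 1.2e-15])        # diffusion coefficients, cm^2/s

# Linear fit of ln(D) against 1/(k_B*T); the slope is -Ea
slope, intercept = np.polyfit(1.0 / (k_B * T), np.log(D), 1)
print(f"Ea ≈ {-slope:.2f} eV, D0 ≈ {np.exp(intercept):.2e} cm^2/s")
# Yields roughly 2.8 eV, consistent with the value quoted above.
```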
Clinically relevant hypoglycemia prediction metrics for event mitigation.
Harvey, Rebecca A; Dassau, Eyal; Zisser, Howard C; Bevier, Wendy; Seborg, Dale E; Jovanovič, Lois; Doyle, Francis J
2012-08-01
The purpose of this study was to develop a method to compare hypoglycemia prediction algorithms and choose parameter settings for different applications, such as triggering insulin pump suspension or alerting for rescue carbohydrate treatment. Hypoglycemia prediction algorithms with different parameter settings were implemented on an ambulatory dataset containing 490 days from 30 subjects with type 1 diabetes mellitus using the Dexcom™ (San Diego, CA) SEVEN™ continuous glucose monitoring system. The performance was evaluated using a proposed set of metrics representing the true-positive ratio, false-positive rate, and distribution of warning times. A prospective, in silico study was performed to show the effect of using different parameter settings to prevent or rescue from hypoglycemia. The retrospective study results suggest the parameter settings for different methods of hypoglycemia mitigation. When rescue carbohydrates are used, a high true-positive ratio, a minimal false-positive rate, and alarms with short warning time are desired. These objectives were met with a 30-min prediction horizon and two successive flags required to alarm: 78% of events were detected with 3.0 false alarms/day and 66% probability of alarms occurring within 30 min of the event. This parameter setting selection was confirmed in silico: treating with rescue carbohydrates reduced the duration of hypoglycemia from 14.9% to 0.5%. However, for a different method, such as pump suspension, this parameter setting only reduced hypoglycemia to 8.7%, as can be expected by the low probability of alarming more than 30 min ahead. The proposed metrics allow direct comparison of hypoglycemia prediction algorithms and selection of parameter settings for different types of hypoglycemia mitigation, as shown in the prospective in silico study in which hypoglycemia was alerted or treated with rescue carbohydrates.
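A minimal sketch (hypothetical data and a simplified matching rule, not the study's code) of the metrics described above: the fraction of events preceded by an alarm, the rate of unmatched alarms per day, and the warning times.

```python
import numpy as np

def alarm_metrics(alarm_times, event_times, window_min=60.0, days=1.0):
    """alarm_times, event_times: minutes from the start of monitoring."""
    alarm_times = np.asarray(alarm_times, float)
    event_times = np.asarray(event_times, float)
    used = np.zeros(len(alarm_times), bool)
    warning, detected = [], 0
    for ev in event_times:
        # alarms raised before the event and within the look-back window
        ok = (~used) & (alarm_times <= ev) & (alarm_times >= ev - window_min)
        if ok.any():
            i = np.where(ok)[0][-1]          # closest preceding alarm
            used[i] = True
            detected += 1
            warning.append(ev - alarm_times[i])
    tp_ratio = detected / len(event_times) if len(event_times) else np.nan
    fp_per_day = (~used).sum() / days        # alarms not matched to any event
    return tp_ratio, fp_per_day, warning

tp, fp, wt = alarm_metrics([100, 400, 950], [130, 960], window_min=60, days=0.5)
print(tp, fp, wt)   # 1.0 detected ratio, 2.0 false alarms/day, warnings [30, 10]
```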
Hunt, Megan M; Meng, Guoliang; Rancourt, Derrick E; Gates, Ian D; Kallos, Michael S
2014-01-01
Traditional optimization of culture parameters for the large-scale culture of human embryonic stem cells (ESCs) as aggregates is carried out in a stepwise manner whereby the effect of varying each culture parameter is investigated individually. However, as evidenced by the wide range of published protocols and culture performance indicators (growth rates, pluripotency marker expression, etc.), there is a lack of systematic investigation into the true effect of varying culture parameters, especially with respect to potential interactions between culture variables. Here we describe the design and execution of a two-parameter, three-level (3²) factorial experiment resulting in nine conditions that were run in duplicate 125-mL stirred suspension bioreactors. The two parameters investigated here were inoculation density and agitation rate, which are easily controlled but currently poorly characterized. Cell readouts analyzed included fold expansion, maximum density, and exponential growth rate. Our results reveal that the choice of best case culture parameters was dependent on which cell property was chosen as the primary output variable. Subsequent statistical analyses via two-way analysis of variance indicated significant interaction effects between inoculation density and agitation rate specifically in the case of exponential growth rates. Results indicate that stepwise optimization has the potential to miss the true optimum. In addition, choosing an optimum condition for a culture output of interest from the factorial design yielded similar results when repeated with the same cell line, indicating reproducibility. We finally validated that human ESCs remain pluripotent in suspension culture as aggregates under our optimal conditions and maintain their differentiation capabilities as well as a stable karyotype and strong expression levels of specific human ESC markers over several passages in suspension bioreactors.
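A minimal sketch (hypothetical data and column names, not the study's analysis) of a two-way ANOVA for a 3² factorial run in duplicate, testing the density × agitation interaction:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Each of the 9 conditions run in duplicate bioreactors (18 rows, balanced).
df = pd.DataFrame({
    "density":   ["low", "mid", "high"] * 6,
    "agitation": ["80rpm"] * 6 + ["100rpm"] * 6 + ["120rpm"] * 6,
    "growth_rate": [0.031, 0.035, 0.033, 0.030, 0.036, 0.034,
                    0.034, 0.040, 0.036, 0.033, 0.041, 0.037,
                    0.032, 0.038, 0.041, 0.031, 0.039, 0.042],
})

model = ols("growth_rate ~ C(density) * C(agitation)", data=df).fit()
# The C(density):C(agitation) row tests the interaction effect.
print(sm.stats.anova_lm(model, typ=2))
```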
Alternative Strategies for Pricing Home Work Time.
ERIC Educational Resources Information Center
Zick, Cathleen D.; Bryant, W. Keith
1983-01-01
Discusses techniques for measuring the value of home work time. Estimates obtained using the reservation wage technique are contrasted with market alternative estimates derived with the same data set. Findings suggest that the market alternative cost method understates the true value of a woman's home time to the household. (JOW)
7 CFR 1955.113 - Price (housing).
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 1955.113 Price (housing). Real property will be offered or listed for its present market value, as adjusted by any administrative price reductions provided for in this section. Market value will be based...
Predicting lumber volume and value of young-growth true firs: user's guide.
Susan Ernst; W.Y. Pong
1982-01-01
Equations are presented for predicting the volume and value of young-growth red, white, and grand firs. Examples of how to use them are also given. These equations were developed on trees less than 140 years old from areas in southern Oregon, northern California, and Idaho.
32 CFR 644.353 - Determination of values for reporting.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 644.353 Determination of values for reporting. National Defense; Department of Defense; Department of the Army; Real Property; Real Estate Handbook; Disposal; Reports of Excess Real Property and Related Personal Property to...
32 CFR 644.353 - Determination of values for reporting.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 644.353 Determination of values for reporting. National Defense; Department of Defense; Department of the Army; Real Property; Real Estate Handbook; Disposal; Reports of Excess Real Property and Related Personal Property to...
32 CFR 644.353 - Determination of values for reporting.
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 644.353 Determination of values for reporting. National Defense; Department of Defense; Department of the Army; Real Property; Real Estate Handbook; Disposal; Reports of Excess Real Property and Related Personal Property to...
Tennis Rackets and the Parallel Axis Theorem
ERIC Educational Resources Information Center
Christie, Derek
2014-01-01
This simple experiment uses an unusual graph straightening exercise to confirm the parallel axis theorem for an irregular object. Along the way, it estimates experimental values for g and the moment of inertia of a tennis racket. We use Excel to find a 95% confidence interval for the true values.
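A minimal sketch (assumed procedure and illustrative numbers; the article's exact setup is not given here): treating the racket as a physical pendulum pivoted at distance d from its centre of mass, the parallel axis theorem gives T = 2π√((I_cm + m·d²)/(m·g·d)), so plotting T²d against d² straightens the graph; the slope yields g and the intercept yields I_cm, each with a 95% confidence interval from the regression.

```python
import numpy as np
from scipy import stats

m = 0.30                                        # racket mass, kg (assumed)
d = np.array([0.15, 0.25, 0.35, 0.45, 0.55])    # pivot distances, m
T = np.array([1.39, 1.35, 1.41, 1.50, 1.61])    # measured periods, s (illustrative)

x, y = d**2, T**2 * d                           # straightened variables
res = stats.linregress(x, y)
g_est = 4 * np.pi**2 / res.slope
I_cm = res.intercept * m * g_est / (4 * np.pi**2)

# 95% confidence interval for the slope translates into an interval for g
t_crit = stats.t.ppf(0.975, len(d) - 2)
slope_lo = res.slope - t_crit * res.stderr
slope_hi = res.slope + t_crit * res.stderr
print(f"g ≈ {g_est:.2f} m/s^2 "
      f"(95% CI {4*np.pi**2/slope_hi:.2f}-{4*np.pi**2/slope_lo:.2f})")
print(f"I_cm ≈ {I_cm*1e3:.1f} x 10^-3 kg m^2")
```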
19 CFR 141.85 - Pro forma invoice.
Code of Federal Regulations, 2011 CFR
2011-04-01
Extract of the pro forma invoice declaration form: the prices or, in the case of consigned goods, the values given below are true and correct; checkboxes indicate the basis, e.g., price paid or agreed to be paid as per order, advices from the exporter, U.S. value, advices of the Port Director, or other.
Guidelines for Interpreting and Reporting Subscores
ERIC Educational Resources Information Center
Feinberg, Richard A.; Jurich, Daniel P.
2017-01-01
Recent research has proposed a criterion to evaluate the reportability of subscores. This criterion is a value-added ratio ("VAR"), where values greater than 1 suggest that the true subscore is better approximated by the observed subscore than by the total score. This research extends the existing literature by quantifying statistical…
NASA Astrophysics Data System (ADS)
Boss, Andreas; Martirosian, Petros; Artunc, Ferruh; Risler, Teut; Claussen, Claus D.; Schlemmer, Heinz-Peter; Schick, Fritz
2007-03-01
Purpose: As the MR contrast medium gadobutrol is completely eliminated via glomerular filtration, the glomerular filtration rate (GFR) can be quantified after bolus injection of gadobutrol and complete mixing in the extracellular fluid volume (ECFV) by measuring the signal decrease within the liver parenchyma. Two different navigator-gated single-shot saturation-recovery sequences were tested for their suitability for GFR quantification: a TurboFLASH and a TrueFISP readout technique. Materials and Methods: Ten healthy volunteers (mean age 26.1 ± 3.6 years) were equally divided into two subgroups. After bolus injection of 0.05 mmol/kg gadobutrol, coronal single-slice images of the liver were recorded every 4-5 seconds during free breathing using either the TurboFLASH or the TrueFISP technique. Time-intensity curves were determined from manually drawn regions of interest over the liver parenchyma. Both sequences were subsequently evaluated with regard to signal-to-noise ratio (SNR) and the behaviour of the signal-intensity curves. The calculated GFR values were compared to an iopromide clearance gold standard. Results: The TrueFISP sequence exhibited a 3.4-fold higher SNR than the TurboFLASH sequence and markedly lower variability of the recorded time-intensity curves. The calculated mean GFR values were 107.0 ± 16.1 ml/min/1.73 m² (iopromide: 92.1 ± 14.5 ml/min/1.73 m²) for the TrueFISP technique and 125.6 ± 24.1 ml/min/1.73 m² (iopromide: 97.7 ± 6.3 ml/min/1.73 m²) for the TurboFLASH approach. The mean paired difference from the iopromide standard was lower for TrueFISP (15.0 ml/min/1.73 m²) than for the TurboFLASH method (27.9 ml/min/1.73 m²). Conclusion: The global GFR can be quantified via measurement of gadobutrol clearance from the ECFV. A saturation-recovery TrueFISP sequence allows more reliable GFR quantification than a saturation-recovery TurboFLASH technique.
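A minimal sketch (simplified one-compartment model with hypothetical numbers; not the paper's analysis): after complete mixing, the tracer concentration in the ECFV decays mono-exponentially with rate k = GFR/ECFV, so a GFR estimate follows from the fitted decay constant of the liver time-intensity curve, assuming the signal change is proportional to concentration.

```python
import numpy as np

t = np.arange(0, 90, 5.0)                       # minutes after complete mixing
rng = np.random.default_rng(2)
signal = 80.0 * np.exp(-0.007 * t) + rng.normal(0, 0.5, t.size)  # synthetic ROI curve

k = -np.polyfit(t, np.log(signal), 1)[0]        # decay constant, 1/min
ecfv_ml = 15000.0                               # assumed ECFV, ml
gfr = k * ecfv_ml                               # ml/min (before BSA normalisation)
print(f"k ≈ {k:.4f} 1/min, GFR ≈ {gfr:.0f} ml/min")
```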
NASA Astrophysics Data System (ADS)
Richardson, I. G.; von Rosenvinge, T. T.; Cane, H. V.
2013-12-01
The existence of a correlation between the intensity of solar energetic proton (SEP) events and the speed of the associated coronal mass ejection near the Sun is well known, and is often interpreted as evidence for particle acceleration at CME-driven shocks. However, this correlation is far from perfect and might be improved by taking other parameters into consideration (e.g., CME width). In studies of cycle 23 SEP events, values of CME speed, width and other parameters were typically taken from the CDAWWeb LASCO CME catalog. This is compiled 'by hand' from examination of LASCO images by experienced observers. Other automated LASCO CME catalogs have now been developed, e.g., CACTUS (Royal Observatory of Belgium) and SEEDS (George Mason University), but the basic CME parameters do not always agree with those from the CDAWweb catalog since they are not determined in the same way. For example the 'CME speed' might be measured at a specific position angle against the plane of the sky in one catalog, or be the average of speeds taken along the CME front in another. Speeds may also be based on linear or higher order fits to the coronagraph images. There will also be projection effects in these plane of the sky speeds. Similarly, CME widths can vary between catalogs and are dependent on how they are defined. For example, the CDAW catalog lists any CME that surrounds the occulting disk as a 'halo' (360 deg. width) CME even though the CME may be highly-asymmetric and originate from a solar event far from central meridian. Another catalog may give a smaller width for the same CME. The problem of obtaining the 'true' CME width is especially acute for assessing the relationship between CME width and SEP properties when using the CDAW catalog since a significant fraction, if not the majority, of the CMEs associated with major SEP events are reported to be halo CMEs. In principle, observations of CMEs from the STEREO A and B spacecraft, launched in late 2006, might be used to overcome some of these problems. In particular, a spacecraft in quadrature with the solar source of an SEP event should observe the 'true' width and speed of the associated CME. However, STEREO CME parameters are derived using the CACTUS method, and cannot be directly compared with the LASCO CDAW catalog values that have been so widely used for many years. In this study, we will examine the relationship between the properties of CMEs in various catalogs and the intensities of a large sample of particle events that include ˜25 MeV protons in cycles 23 and 24. In particular, we will compare the proton intensity-speed relationships obtained using the CDAW, CACTUS and SEEDS LASCO catalogs, and also using the CACTUS values from whichever spacecraft (STEREO A, B or SOHO) is best in quadrature with the solar event. We will also examine whether there is any correlation between the width of the CMEs in the automated catalogs and proton intensity, and whether a combination of CME speed and width might improve the correlation with proton intensity.
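A minimal sketch (entirely hypothetical values) of the comparison described above: correlating the logarithm of peak ~25 MeV proton intensity with the CME speeds reported for the same events by different catalogs.

```python
import numpy as np
from scipy import stats

# Hypothetical event sample: log10 of peak proton intensity and the plane-of-sky
# CME speeds (km/s) that three catalogs might report for the same events.
log_intensity = np.array([-1.2, 0.3, 1.1, -0.5, 2.0, 0.8, 1.6, -0.9])
speeds = {
    "CDAW":   np.array([650,  980, 1450,  720, 2100, 1200, 1750,  590]),
    "CACTus": np.array([600, 1050, 1300,  800, 1800, 1100, 1600,  650]),
    "SEEDS":  np.array([700,  900, 1500,  680, 1900, 1250, 1700,  620]),
}
for name, v in speeds.items():
    r, p = stats.pearsonr(v, log_intensity)
    print(f"{name}: r = {r:.2f} (p = {p:.3f})")
```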
Nieuwland, Mante S.; Kuperberg, Gina R.
2011-01-01
Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like ‘not’. However, research studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle the effects of truth-value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn’t bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny’s fur isn’t very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true. PMID:19121125
A general framework for updating belief distributions.
Bissiri, P G; Holmes, C C; Walker, S G
2016-11-01
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
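One way to write the core update described above (a sketch in the notation commonly used for such loss-based updates; the learning rate w is part of this presentation and is sometimes absorbed into the loss):

```latex
\[
  \pi(\theta \mid x) \;\propto\; \exp\{-w\,\ell(\theta, x)\}\,\pi(\theta),
\]
% With $\ell(\theta,x) = -\log f(x \mid \theta)$ and $w = 1$, this reduces to the
% usual Bayesian posterior $\pi(\theta \mid x) \propto f(x \mid \theta)\,\pi(\theta)$.
```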
Modeling the seasonal cycle of CO2 on Mars: A fit to the Viking lander pressure curves
NASA Technical Reports Server (NTRS)
Wood, S. E.; Paige, D. A.
1992-01-01
We have constructed a more accurate Mars thermal model, similar to the one used by Leighton and Murray in 1966, which solves radiative, conductive, and latent heat balance at the surface as well as the one-dimensional heat conduction equation for 40 layers to a depth of 15 meters every 1/36 of a Martian day. The planet is divided into 42 latitude bands with a resolution of two degrees near the poles and five degrees at lower latitudes, with elevations relative to the 6.1 mbar reference areoid. This estimate of the Martian zonally averaged topography was derived primarily from radio occultations. We show that a realistic one-dimensional thermal model is able to reproduce the VL1 pressure curve reasonably well without having to invoke complicated atmospheric effects such as dust storms and polar hoods. Although these factors may cause our deduced values for each model parameter to differ from its true value, we believe that this simple model can be used as a platform to study many aspects of the Martian CO2 cycle over seasonal, interannual, and long-term climate timescales.
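A minimal sketch (generic explicit scheme with assumed constant properties, not the authors' model) of the kind of subsurface heat-conduction update such a model performs: 40 layers down to 15 m, a time step of 1/36 of a sol, and an idealised prescribed surface temperature in place of the full radiative/latent-heat surface balance.

```python
import numpy as np

nz, dz = 40, 15.0 / 40             # 40 layers down to 15 m
kappa = 2.5e-7                     # thermal diffusivity, m^2/s (assumed regolith value)
dt = 88775.0 / 36                  # 1/36 of a Martian sol, s
assert kappa * dt / dz**2 < 0.5    # explicit stability criterion

T = np.full(nz, 200.0)             # initial temperature profile, K
for step in range(36 * 100):       # 100 sols
    T_surf = 200.0 + 40.0 * np.sin(2 * np.pi * step / 36)   # idealised forcing
    lap = np.empty(nz)
    lap[1:-1] = T[2:] - 2 * T[1:-1] + T[:-2]
    lap[0] = T[1] - 2 * T[0] + T_surf      # upper boundary driven by surface
    lap[-1] = T[-2] - T[-1]                # insulated lower boundary
    T += kappa * dt / dz**2 * lap

print(f"Layer-1 temperature after 100 sols: {T[0]:.1f} K")
```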
[Estimation of uncertainty of measurement in clinical biochemistry].
Enea, Maria; Hristodorescu, Cristina; Schiriac, Corina; Morariu, Dana; Mutiu, Tr; Dumitriu, Irina; Gurzu, B
2009-01-01
The uncertainty of measurement (UM), or measurement uncertainty, is defined as the parameter associated with the result of a measurement. Repeated measurements usually reveal slightly different results for the same analyte, sometimes a little higher, sometimes a little lower, because the result of a measurement depends not only on the analyte itself but also on a number of error factors that cast doubt on the estimate. Measurement uncertainty is the quantitative, mathematical expression of this doubt. UM is a range of values that is likely to enclose the true value of the measurand. Calculation of UM for all types of laboratories is regulated by the ISO Guide to the Expression of Uncertainty in Measurement (abbreviated GUM) and SR ENV 13005:2003 (both recognized by European Accreditation). Even though the GUM rules for estimating UM are very strict, reporting the result together with its UM increases the confidence of customers (patients or physicians). In this study the authors present ways of assessing UM in laboratories in our country, using data obtained during method validation and internal and external quality control.
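A minimal sketch (illustrative numbers only) of a GUM-style combination of uncertainty components drawn from method validation and quality-control data, with the expanded uncertainty reported at a coverage factor of k = 2 (approximately 95% confidence):

```python
import math

u_repeatability = 1.8   # within-lab imprecision from internal QC, e.g. mg/dL
u_calibration   = 1.1   # calibrator/traceability contribution
u_bias          = 0.9   # bias contribution estimated from external QC/EQA

u_combined = math.sqrt(u_repeatability**2 + u_calibration**2 + u_bias**2)
U_expanded = 2 * u_combined          # coverage factor k = 2
print(f"u_c = {u_combined:.2f}, U (k=2) = {U_expanded:.2f}")
# A result of 100 mg/dL would then be reported as 100 ± U mg/dL.
```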
Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li
2004-02-01
An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratios of the reflected longitudinal wave to the incident longitudinal wave and of the refracted shear wave to the total transmitted wave into bone, in order to calculate the absorbed power field and then reconstruct the temperature distribution in muscle and bone regions based on a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrades the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on a singular-value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
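A minimal sketch (generic method on a synthetic problem, not the authors' hyperthermia code) of zeroth-order Tikhonov regularization for a linear inverse problem A x = b, illustrating how the choice of the regularization parameter affects accuracy when the forward model is ill-conditioned and the data are noisy:

```python
import numpy as np

rng = np.random.default_rng(3)
# Ill-conditioned forward model (rapidly decaying singular values), mimicking
# the smoothing character of heat conduction from sources to sensor readings.
U, _ = np.linalg.qr(rng.normal(size=(20, 10)))
V, _ = np.linalg.qr(rng.normal(size=(10, 10)))
A = U @ np.diag(10.0 ** -np.arange(10)) @ V.T

x_true = rng.normal(size=10)                     # unknown model parameters
b = A @ x_true + rng.normal(0, 1e-4, size=20)    # noisy "temperature" data

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov solution: argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

for lam in (1e-8, 1e-4, 1e-1):
    err = np.linalg.norm(tikhonov(A, b, lam) - x_true) / np.linalg.norm(x_true)
    print(f"lambda = {lam:.0e}   relative error = {err:.3f}")
```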
Establishing a sample-to cut-off ratio for lab-diagnosis of hepatitis C virus in Indian context.
Tiwari, Aseem K; Pandey, Prashant K; Negi, Avinash; Bagga, Ruchika; Shanker, Ajay; Baveja, Usha; Vimarsh, Raina; Bhargava, Richa; Dara, Ravi C; Rawat, Ganesh
2015-01-01
Lab-diagnosis of hepatitis C virus (HCV) is based on detecting specific antibodies by enzyme immuno-assay (EIA) or chemiluminescence immuno-assay (CIA). The Centers for Disease Control and Prevention reported that signal-to-cut-off (s/co) ratios in anti-HCV antibody tests like EIA/CIA can be used to predict the probable result of a supplemental test; above a certain s/co value the result is most likely a true HCV-positive, and below that value it is most likely a false-positive. A prospective study was undertaken in patients in a tertiary care setting to establish this "certain" s/co value. The study was carried out in consecutive patients requiring HCV testing for screening/diagnosis and medical management. These samples were tested for anti-HCV on CIA (VITROS® Anti-HCV assay, Ortho-Clinical Diagnostics, New Jersey) to calculate the s/co value. The supplemental nucleic acid test used was polymerase chain reaction (PCR) (Abbott). PCR test results were used to define true negatives, false negatives, true positives, and false positives. The performance of different putative s/co ratios versus PCR was measured using sensitivity, specificity, positive predictive value, and negative predictive value, and the most appropriate s/co was chosen on the basis of the highest specificity at a sensitivity of at least 95%. An s/co ratio of ≥6 was over 95% sensitive and almost 92% specific in the 438 consecutive patient samples tested. An s/co ratio of six can therefore be used for lab-diagnosis of HCV infection; patients with an s/co higher than six can be diagnosed as having HCV infection without any need for supplemental assays.
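A minimal sketch (hypothetical data, not the study's dataset) of the evaluation procedure: candidate s/co cut-offs are scored against the PCR reference, and the cut-off with the highest specificity subject to sensitivity of at least 95% is retained.

```python
import numpy as np

# Hypothetical s/co ratios and their PCR reference results (True = RNA positive).
sco = np.array([0.3, 1.1, 2.4, 3.8, 5.2, 6.1, 7.5, 9.0, 12.4, 18.0])
pcr_positive = np.array([0,   0,   0,   0,   0,   1,   1,   1,   1,    1], bool)

def evaluate(cutoff):
    pred = sco >= cutoff
    tp = np.sum(pred & pcr_positive); fn = np.sum(~pred & pcr_positive)
    fp = np.sum(pred & ~pcr_positive); tn = np.sum(~pred & ~pcr_positive)
    return tp / (tp + fn), tn / (tn + fp)        # sensitivity, specificity

for cutoff in (2, 4, 6, 8):
    sens, spec = evaluate(cutoff)
    print(f"s/co >= {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```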
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the “Database as Service” model, where confidentiality and privacy are important issues for the client. In fact, existing encryption approaches are vulnerable to a statistical attack because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued — Order Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to different multiple values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values to allow comparison operations to be directly applied on encrypted data. Using calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to a database server as much as possible, thereby making a better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems as it is designed to work with existing indexing structures. It is robust against statistical attack and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
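A toy sketch of the general idea only (not the MV-OPES construction, and with no cryptographic strength): each plaintext integer owns a secret, key-derived ciphertext interval, and every encryption draws a fresh random value from that interval, so equal plaintexts produce different ciphertexts while ciphertext order still matches plaintext order.

```python
import bisect
import random

SECRET_KEY = 12345
MAX_PLAINTEXT = 1000

# Secret, key-derived interval boundaries: plaintext v owns [bounds[v], bounds[v+1]).
key_rng = random.Random(SECRET_KEY)
bounds = [0]
for _ in range(MAX_PLAINTEXT + 1):
    bounds.append(bounds[-1] + key_rng.randint(50, 150))

def encrypt(v):
    # Fresh random ciphertext inside v's interval: equal plaintexts encrypt to
    # different values, but ciphertext order matches plaintext order.
    return random.randint(bounds[v], bounds[v + 1] - 1)

def decrypt(c):
    return bisect.bisect_right(bounds, c) - 1

a, b = encrypt(42), encrypt(42)
print(a != b, decrypt(a) == decrypt(b) == 42)   # typically: True True
print(encrypt(10) < encrypt(11))                # always True: order preserved
```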
The True- and Eccentric-Anomaly Parameterizations of the Perturbed Kepler Motion
NASA Astrophysics Data System (ADS)
Gergely, László Á.; Perjés, Zoltán I.; Vasúth, Mátyás
2000-01-01
The true- and eccentric-anomaly parameterizations of the Kepler motion are generalized to quasi-periodic orbits by considering perturbations of the radial part of the kinetic energy in the form of a series of negative powers of the orbital radius. A toolbox of methods for averaging observables as functions of the energy E and angular momentum L is developed. A broad range of systems governed by the generic Brumberg force, as well as recent applications in the theory of gravitational radiation, involve integrals of these functions over a period of motion. These integrals are evaluated by using the residue theorem. In the course of this work, two important questions emerge: (1) When do the true- and eccentric-anomaly parameters exist? (2) Under what circumstances, and why, are the poles at the origin? The purpose of this paper is to find the answers to these questions.
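For reference, the unperturbed Kepler forms that the true- and eccentric-anomaly parameters generalize (standard notation; a is the semi-major axis, e the eccentricity, χ the true anomaly, ξ the eccentric anomaly):

```latex
\[
  r(\chi) = \frac{a\,(1 - e^{2})}{1 + e\cos\chi},
  \qquad
  r(\xi) = a\,(1 - e\cos\xi).
\]
```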