Sample records for error response curve

  1. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large compared with non-response and sampling errors. The negative binomial distribution gave a good fit to the seasonal kill distribution of the actual hunting data, and the semi-logarithmic curve gave a good fit to the distribution of total season hunting activity. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed a tendency for memory bias errors to occur at reported frequencies divisible by five and for prestige bias errors to occur at frequencies that are multiples of the legal daily bag limit. The response distributions were adjusted graphically by fitting a smooth curve to those frequency classes not included in the predictably biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. In large samples, the technique described is highly efficient at reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill. The graphical method is less efficient at removing response bias errors in responses on seasonal hunting activity, where an average of 60 percent of the bias was removed.

  2. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
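    The bias described above is easy to reproduce. Below is a minimal, illustrative sketch (not the authors' estimator) of Nadaraya-Watson kernel regression of an item response curve; the latent ability, error level, and item parameters are invented for the example, and the output simply shows how using an error-contaminated observed score as the regressor flattens the estimated curve.

    ```python
    import numpy as np

    def kernel_irc(scores, item_correct, grid, bandwidth=0.5):
        """Nadaraya-Watson estimate of P(item correct | score) on a grid of score values."""
        scores = np.asarray(scores, dtype=float)
        item_correct = np.asarray(item_correct, dtype=float)
        irc = np.empty(len(grid))
        for i, s0 in enumerate(grid):
            w = np.exp(-0.5 * ((scores - s0) / bandwidth) ** 2)  # Gaussian kernel weights
            irc[i] = np.sum(w * item_correct) / np.sum(w)
        return irc

    # Illustrative data: latent ability, an error-contaminated observed score, one item's responses
    rng = np.random.default_rng(0)
    n = 2000
    theta = rng.normal(0, 1, n)                       # latent ability
    observed = theta + rng.normal(0, 0.6, n)          # observed score with measurement error
    p_item = 1 / (1 + np.exp(-1.5 * (theta - 0.3)))   # assumed true item response curve (2-PL-like)
    y = rng.binomial(1, p_item)

    grid = np.linspace(-2.5, 2.5, 11)
    print(np.round(kernel_irc(observed, y, grid), 2))  # flatter: biased by error in the regressor
    print(np.round(kernel_irc(theta, y, grid), 2))     # closer to the true curve
    ```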

  3. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curves (IRCs) is often used in item analysis in testing programs. The accuracy of this estimation is a concern both theoretically and operationally, because the estimates are biased when observed scores are used as the regressor: the observed scores are contaminated by measurement error. In this study, we investigate…

  4. Measurement error in environmental epidemiology and the shape of exposure-response curves.

    PubMed

    Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E

    2011-09-01

    Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
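    A small simulation can make the attenuation effect concrete. The sketch below is purely illustrative (exposure range, threshold, and error variance are assumed, not taken from the paper): it compares binned mean responses plotted against true versus error-contaminated exposure for a threshold-bearing relationship.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000
    true_exposure = rng.uniform(0, 10, n)

    # Hypothetical threshold-bearing truth: no effect below 5, rising linearly above
    response = np.clip(true_exposure - 5.0, 0.0, None) + rng.normal(0, 0.5, n)

    # Classical measurement error: measured exposure = true exposure + independent noise
    measured_exposure = true_exposure + rng.normal(0, 2.0, n)

    bins = np.linspace(0, 10, 11)
    for name, x in [("true exposure    ", true_exposure), ("measured exposure", measured_exposure)]:
        idx = np.digitize(x, bins)
        means = [response[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)]
        print(name, np.round(means, 2))
    # The curve against measured exposure is flatter and more nearly linear,
    # obscuring the threshold present in the true relationship.
    ```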

  5. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to freeze-dried, 0.2%-accurate gravimetric uranium nitrate standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters, and the fitting procedure weights the fit with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
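    The VA02A Fortran subroutine referenced above is not generally available, but the same idea, treating the standard masses as fit parameters and weighting residuals by both the system (counting) errors and the mass errors, can be sketched with scipy.optimize.least_squares. All numbers and the linear response form below are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Nominal masses of the gravimetric standards (mg) and their 0.2% relative uncertainty
    m_nominal = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
    sigma_m = 0.002 * m_nominal

    # Measured detector responses (counts) and their system (counting) uncertainties; invented numbers
    counts = np.array([1020.0, 2570.0, 5080.0, 7640.0, 10150.0])
    sigma_c = np.sqrt(counts)

    def residuals(p):
        a, b = p[:2]           # calibration curve parameters (a linear response is assumed here)
        m_fit = p[2:]          # masses of the standards, treated as parameters to be fit
        r_curve = (counts - (a + b * m_fit)) / sigma_c   # residuals weighted by system errors
        r_mass = (m_fit - m_nominal) / sigma_m           # residuals weighted by mass errors
        return np.concatenate([r_curve, r_mass])

    p0 = np.concatenate([[0.0, 10000.0], m_nominal])
    fit = least_squares(residuals, p0)
    a, b = fit.x[:2]
    print(f"calibration: counts = {a:.1f} + {b:.1f} * mass")

    # Parameter uncertainties approximated from the curvature (J^T J) of chi-square
    J = fit.jac
    cov = np.linalg.inv(J.T @ J)
    print("std errors (a, b):", np.round(np.sqrt(np.diag(cov))[:2], 2))
    ```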

  6. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to freeze-dried, 0.2%-accurate gravimetric uranium nitrate standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters, and the fitting procedure weights the fit with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.

  7. Preparation-induced errors in EPR dosimetry of enamel: pre- and post-crushing sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haskell, E.H.; Hayes, R.B.; Kenner, G.H.

    1996-01-01

    Errors in dose estimation as a function of grain size have previously been shown for tooth enamel subjected to beta irradiation after crushing. We tested the effect of gamma radiation applied to specimens before and after crushing. Extending the previous work, we found that post-crushing irradiation altered the slope of the dose-response curve of the hydroxyapatite signal and produced a grain-size-dependent offset. No changes in the slope of the dose-response curve were seen in enamel caps irradiated before crushing.

  8. Longitudinal Growth Curves of Brain Function Underlying Inhibitory Control through Adolescence

    PubMed Central

    Foran, William; Velanova, Katerina; Luna, Beatriz

    2013-01-01

    Neuroimaging studies suggest that developmental improvements in inhibitory control are primarily supported by changes in prefrontal executive function. However, studies are contradictory with respect to how activation in prefrontal regions changes with age, and they have yet to analyze longitudinal data using growth curve modeling, which allows characterization of dynamic processes of developmental change, individual differences in growth trajectories, and variables that predict any interindividual variability in trajectories. In this study, we present growth curves modeled from longitudinal fMRI data collected over 302 visits (across ages 9 to 26 years) from 123 human participants. Brain regions within circuits known to support motor response control, executive control, and error processing (i.e., aspects of inhibitory control) were investigated. Findings revealed distinct developmental trajectories for regions within each circuit and indicated that a hierarchical pattern of maturation of brain activation supports the gradual emergence of adult-like inhibitory control. Mean growth curves of activation in motor response control regions revealed no changes with age, although interindividual variability decreased with development, indicating equifinality with maturity. Activation in certain executive control regions decreased with age until adolescence, and variability was stable across development. Error-processing activation in the dorsal anterior cingulate cortex showed continued increases into adulthood and no significant interindividual variability across development, and was uniquely associated with task performance. These findings provide evidence that continued maturation of error-processing abilities supports the protracted development of inhibitory control over adolescence, while motor response control regions provide early-maturing foundational capacities and suggest that some executive control regions may buttress immature networks as error processing continues to mature. PMID:24227721

  9. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…

  10. Reevaluation of the Amsterdam Inventory for Auditory Disability and Handicap Using Item Response Theory.

    PubMed

    Boeschen Hospers, J Mirjam; Smits, Niels; Smits, Cas; Stam, Mariska; Terwee, Caroline B; Kramer, Sophia E

    2016-04-01

    We reevaluated the psychometric properties of the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1995) using item response theory. Item response theory describes item functioning along an ability continuum. Cross-sectional data from 2,352 adults with and without hearing impairment, ages 18-70 years, were analyzed. They completed the AIADH in the web-based prospective cohort study "Netherlands Longitudinal Study on Hearing." A graded response model was fitted to the AIADH data. Category response curves, item information curves, and the standard error as a function of self-reported hearing ability were plotted. The graded response model showed a good fit. Item information curves were most reliable for adults who reported having hearing disability and less reliable for adults with normal hearing. The standard error plot showed that self-reported hearing ability is most reliably measured for adults reporting mild up to moderate hearing disability. This is one of the few item response theory studies on audiological self-reports. All AIADH items could be hierarchically placed on the self-reported hearing ability continuum, meaning they measure the same construct. This provides a promising basis for developing a clinically useful computerized adaptive test, where item selection adapts to the hearing ability of individuals, resulting in efficient assessment of hearing disability.

  11. Intra-arterial pressure measurement in neonates: dynamic response requirements.

    PubMed

    van Genderingen, H R; Gevers, M; Hack, W W

    1995-02-01

    A computer simulation of a catheter manometer system was used to quantify measurement errors in neonatal blood pressure parameters. Accurate intra-arterial pressure recordings of 21 critically ill newborns were fed into this simulated system. The dynamic characteristics, natural frequency and damping coefficient, were varied from 2.5 to 60 Hz and from 0.1 to 1.4, respectively. As a result, errors in systolic, diastolic and pulse arterial pressure were obtained as a function of natural frequency and damping coefficient. Iso-error curves for 2%, 5% and 10% were constructed. Using these curves, the maximum inaccuracy of any neonatal catheter manometer system can be determined and used in the clinical setting.
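    The simulation idea can be sketched with a second-order catheter-manometer model. The code below is illustrative only: the synthetic pressure waveform and the (natural frequency, damping) pairs are assumptions, not the neonatal recordings or the published iso-error values.

    ```python
    import numpy as np
    from scipy.signal import TransferFunction, lsim

    fs = 1000.0
    t = np.arange(0, 5, 1 / fs)

    # Illustrative neonatal-like pressure waveform: mean 45 mmHg plus two harmonics at 150 bpm
    hr = 2.5  # heart rate in Hz
    p_true = 45 + 10 * np.sin(2 * np.pi * hr * t) + 4 * np.sin(2 * np.pi * 2 * hr * t + 0.8)

    def measured(p, fn_hz, zeta):
        """Pass a pressure signal through a second-order catheter-manometer model."""
        wn = 2 * np.pi * fn_hz
        sys = TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
        _, y, _ = lsim(sys, U=p, T=t)
        return y

    for fn, zeta in [(5, 0.2), (15, 0.2), (30, 0.7)]:
        y = measured(p_true, fn, zeta)
        keep = t > 1.0  # skip the initial transient
        err_sys = y[keep].max() - p_true[keep].max()
        err_dia = y[keep].min() - p_true[keep].min()
        print(f"fn={fn:2d} Hz, zeta={zeta}: systolic error {err_sys:+.1f} mmHg, "
              f"diastolic error {err_dia:+.1f} mmHg")
    ```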

  12. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    NASA Astrophysics Data System (ADS)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In bi-response longitudinal data, correlation occurs among the measurements of the observed subjects and between the responses; this induces autocorrelation of the errors, which can be handled using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline uses knot points and smoothing parameters simultaneously to control the smoothness of the curve. Based on our simulation study, the weighted penalized spline regression model estimated with the covariance matrix gives a smaller error than the model without the covariance matrix.

  13. Differences in the accommodation stimulus response curves of adult myopes and emmetropes: a summary and update.

    PubMed

    Schmid, Katrina L; Strang, Niall C

    2015-11-01

    To provide a summary of the classic paper "Differences in the accommodation stimulus response curves of adult myopes and emmetropes", published in Ophthalmic and Physiological Optics in 1998, and to provide an update on the topic of accommodation errors in myopia. The accommodation responses of 33 participants (10 emmetropes, 11 early onset myopes and 12 late onset myopes) aged 18-31 years were measured with the Canon Autoref R-1 free-space autorefractor using three methods to vary the accommodation demand: decreasing distance (4 m to 0.25 m), negative lenses (0 to -4 D at 4 m) and positive lenses (+4 to 0 D at 0.25 m). We observed that the greatest accommodation errors occurred for the negative lens method, whereas minimal errors were observed using positive lenses. Adult progressing myopes had greater lags of accommodation than stable myopes at higher demands induced by negative lenses. Progressing myopes had shallower response gradients than the emmetropes and stable myopes; however, the reduction in gradient was much smaller than that observed in children using similar methods. This paper has often been cited as evidence that accommodation responses at near may be primarily reduced in adults with progressing myopia and not in stable myopes, and/or that challenging accommodation stimuli (negative lenses with monocular viewing) are required to generate larger accommodation errors. As an analogy, animals reared with hyperopic errors develop axial elongation and myopia. Retinal defocus signals are presumably passed to the retinal pigment epithelium and choroid and then ultimately the sclera to modify eye length. A number of lens treatments that act to slow myopia progression may partially work through reducing accommodation errors. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  14. Errors introduced by dose scaling for relative dosimetry

    PubMed Central

    Watanabe, Yoichi; Hayashi, Naoki

    2012-01-01

    Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658

  15. Note: Eddy current displacement sensors independent of target conductivity.

    PubMed

    Wang, Hongbo; Li, Wei; Feng, Zhihua

    2015-01-01

    Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be overlapped by shifting the curves along the x direction by √2δ/2. Both finite element analysis and experiments match the theoretical analysis well, which indicates that the measurement error of high-precision ECSs caused by target conductivity can be completely eliminated, and that ECSs can measure different materials precisely without calibration.

  16. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory-variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') with stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model (1) with a strong relationship and equally important explanatory variables; (2) with weaker relationships and equally important explanatory variables; and (3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.

  17. Estimating the impact of grouping misclassification on risk ...

    EPA Pesticide Factsheets

    Environmental health risk assessments of chemical mixtures that rely on component approaches often begin by grouping the chemicals of concern according to toxicological similarity. Approaches that assume dose addition typically are used for groups of similarly-acting chemicals and those that assume response addition are used for groups of independently acting chemicals. Grouping criteria for similarity can include a common adverse outcome pathway (AOP) and similarly shaped dose-response curves, with the latter used in the relative potency factor (RPF) method for estimating mixture response. Independence of toxic action is generally assumed if there is evidence that the chemicals act by different mechanisms. Several questions arise about the potential for misclassification error in the mixture risk prediction. If a common AOP has been established, how much error could there be if the same dose-response curve shape is assumed for all chemicals, when the shapes truly differ and, conversely, what is the error potential if different shapes are assumed when they are not? In particular, how do those concerns impact the choice of index chemical and uncertainty of the RPF-estimated mixture response? What is the quantitative impact if dose additivity is assumed when complete or partial independence actually holds and vice versa? These concepts and implications will be presented with numerical examples in the context of uncertainty of the RPF-estimated mixture response…

  18. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis

    PubMed Central

    Tabelow, Karsten; König, Reinhard; Polzehl, Jörg

    2016-01-01

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. This tacitly assumes that learning performance is constant within the moving window, which is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes, and, thus, allows us to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
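    The conventional moving-window estimate, and the systematic error it incurs when performance changes within the window, can be illustrated with a few lines of code. The sketch below is not the authors' proposed estimator; the abrupt-change learning curve and window sizes are assumptions chosen for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 400

    # Hypothetical true learning curve: abrupt improvement at trial 150
    p_true = np.where(np.arange(n_trials) < 150, 0.3, 0.85)
    correct = rng.binomial(1, p_true)

    def moving_window_rate(x, window):
        """Conventional estimate: proportion correct within a centered moving window."""
        half = window // 2
        return np.array([x[max(0, i - half): i + half + 1].mean() for i in range(len(x))])

    for window in (21, 81):
        est = moving_window_rate(correct, window)
        smear = np.sum((est > 0.4) & (est < 0.75))  # trials over which the jump is smeared
        print(f"window={window:3d}: transition smeared over ~{smear} trials")
    # Larger windows give smoother curves but blur the abrupt change, which is the
    # systematic error the abstract refers to.
    ```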

  19. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.

    PubMed

    Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  20. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for intersubject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.

  1. Effects of antiepileptic drugs on learning as assessed by a repeated acquisition of response sequences task in rats.

    PubMed

    Shannon, Harlan E; Love, Patrick L

    2007-02-01

    Patients with epilepsy can have impaired cognitive abilities. Antiepileptic drugs (AEDs) may contribute to the cognitive deficits observed in patients with epilepsy, and have been shown to induce cognitive impairments in healthy individuals. However, there are few systematic data on the effects of AEDs on specific cognitive domains. We have previously demonstrated that a number of AEDs can impair working memory and attention. The purpose of the present study was to evaluate the effects of AEDs on learning as measured by a repeated acquisition of response sequences task in nonepileptic rats. The GABA-related AEDs phenobarbital and chlordiazepoxide significantly disrupted performance by shifting the learning curve to the right and increasing errors, whereas tiagabine and valproate did not. The sodium channel blockers carbamazepine and phenytoin suppressed responding at higher doses, whereas lamotrigine shifted the learning curve to the right and increased errors, and topiramate was without significant effect. Levetiracetam also shifted the learning curve to the right and increased errors. The disruptions produced by triazolam, chlordiazepoxide, lamotrigine, and levetiracetam were qualitatively similar to the effects of the muscarinic cholinergic receptor antagonist scopolamine. The present results indicate that AEDs can impair learning, but there are differences among AEDs in the magnitude of the disruption in nonepileptic rats, with drugs that enhance GABA receptor function and some that block sodium channels producing the most consistent impairment of learning.

  2. Minimally important change, measurement error, and responsiveness for the Self-Reported Foot and Ankle Score

    PubMed Central

    Cöster, Maria C; Nilsdotter, Anna; Brudin, Lars; Bremander, Ann

    2017-01-01

    Background and purpose: Patient-reported outcome measures (PROMs) are increasingly used to evaluate results in orthopedic surgery. To enhance good responsiveness with a PROM, the minimally important change (MIC) should be established. MIC reflects the smallest measured change in score that is perceived as being relevant by the patients. We assessed MIC for the Self-reported Foot and Ankle Score (SEFAS) used in Swedish national registries. Patients and methods: Patients with forefoot disorders (n = 83) or hindfoot/ankle disorders (n = 80) completed the SEFAS before surgery and 6 months after surgery. At 6 months also, a patient global assessment (PGA) scale—as external criterion—was completed. Measurement error was expressed as the standard error of a single determination. MIC was calculated by (1) median change scores in improved patients on the PGA scale, and (2) the best cutoff point (BCP) and area under the curve (AUC) using analysis of receiver operating characteristic curves (ROCs). Results: The change in mean summary score was the same, 9 (SD 9), in patients with forefoot disorders and in patients with hindfoot/ankle disorders. MIC for SEFAS in the total sample was 5 score points (IQR: 2–8) and the measurement error was 2.4. BCP was 5 and AUC was 0.8 (95% CI: 0.7–0.9). Interpretation: As previously shown, SEFAS has good responsiveness. The score change in SEFAS 6 months after surgery should exceed 5 score points in both forefoot patients and hindfoot/ankle patients to be considered as being clinically relevant. PMID:28464751
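    The ROC-based part of this analysis (AUC, best cutoff point, and median-change MIC) can be sketched as follows with scikit-learn. The change-score distributions are synthetic stand-ins, not the SEFAS data.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(3)

    # Synthetic change scores, split by the external PGA anchor (improved vs. not improved)
    improved = rng.normal(9, 6, 100)
    not_improved = rng.normal(1, 6, 60)
    change = np.concatenate([improved, not_improved])
    label = np.concatenate([np.ones(improved.size), np.zeros(not_improved.size)])

    auc = roc_auc_score(label, change)
    fpr, tpr, thresholds = roc_curve(label, change)
    best_cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden index gives the best cutoff point (BCP)
    mic_median = np.median(improved)                # MIC as median change in improved patients

    print(f"AUC = {auc:.2f}, BCP = {best_cutoff:.1f}, median-change MIC = {mic_median:.1f}")
    # A score change is interpretable only if the MIC also exceeds the measurement error
    # (standard error of a single determination), as discussed in the abstract.
    ```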

  3. An electrophysiological signal that precisely tracks the emergence of error awareness

    PubMed Central

    Murphy, Peter R.; Robertson, Ian H.; Allen, Darren; Hester, Robert; O'Connell, Redmond G.

    2012-01-01

    Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focused on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400 ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. PMID:22470332

  4. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb

    2018-06-01

    Background: Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods: We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results: A total sample size of ~500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion: Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
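    As a rough illustration of the curve-fitting step, the sketch below fits a second-degree fractional polynomial to duration-response data from a hypothetical multi-arm trial by searching the conventional power set; the cure rates and durations are invented, and the full 'Durations design' involves inferential machinery not reproduced here.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    # Hypothetical multi-arm trial: cure proportions observed at six equidistant durations (weeks)
    durations = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 20.0])
    cure_rate = np.array([0.62, 0.74, 0.83, 0.88, 0.90, 0.91])

    POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional fractional-polynomial power set

    def fp_term(x, p):
        """Fractional-polynomial basis term; power 0 is defined as log(x)."""
        return np.log(x) if p == 0 else x ** p

    best = None
    for p1, p2 in combinations_with_replacement(POWERS, 2):
        # Repeated powers use x^p and x^p * log(x), per the fractional-polynomial convention
        t2 = fp_term(durations, p2) * (np.log(durations) if p1 == p2 else 1.0)
        X = np.column_stack([np.ones_like(durations), fp_term(durations, p1), t2])
        beta, *_ = np.linalg.lstsq(X, cure_rate, rcond=None)
        rss = float(np.sum((cure_rate - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, (p1, p2), beta)

    rss, powers, beta = best
    print("best powers:", powers, "RSS:", round(rss, 5))
    # The fitted duration-response curve, not a single non-inferiority margin, is the trial outcome.
    ```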

  5. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  6. Methods for simulating nutritional requirement and response studies with all organisms to increase research efficiency.

    PubMed

    Vedenov, Dmitry; Alhotan, Rashed A; Wang, Runlian; Pesti, Gene M

    2017-02-01

    Nutritional requirements and responses of all organisms are estimated using various models representing the response to different dietary levels of the nutrient in question. To help nutritionists design experiments for estimating responses and requirements, we developed a simulation workbook using Microsoft Excel. The objective of the present study was to demonstrate the influence of different numbers of nutrient levels, ranges of nutrient levels and replications per nutrient level on the estimates of requirements based on common nutritional response models. The user provides estimates of the shape of the response curve, the requirement and other parameters, and the observation-to-observation variation. The Excel workbook then produces 1-1000 randomly simulated responses based on the given response curve and estimates the standard errors of the requirement (and other parameters) from different models as an indication of the expected power of the experiment. Interpretations are based on the assumption that the smaller the standard error of the requirement, the more powerful the experiment. The user can see the potential effects of using one or more subjects, different nutrient levels, etc., on the expected outcome of future experiments. From a theoretical perspective, each organism should have some enzyme-catalysed reaction whose rate is limited by the availability of some limiting nutrient. The response to the limiting nutrient should therefore be similar to enzyme kinetics. In conclusion, the workbook eliminates some of the guesswork involved in designing experiments and determining the minimum number of subjects needed to achieve desired outcomes.
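    A rough Python analogue of the workbook's logic is sketched below: simulate a response curve many times for a given design and report the spread of the estimated requirement. The linear-plateau (broken-line) model, noise level, and dietary levels are assumptions for illustration, not the workbook's defaults.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def broken_line(x, plateau, requirement, slope):
        """Linear-plateau response: rises until the requirement, then stays flat."""
        return np.where(x < requirement, plateau - slope * (requirement - x), plateau)

    def requirement_se(levels, reps, sd, n_sim=500, true=(100.0, 0.9, 80.0)):
        """Standard error of the estimated requirement for a given design (levels x reps)."""
        rng = np.random.default_rng(4)
        x = np.repeat(levels, reps)
        estimates = []
        for _ in range(n_sim):
            y = broken_line(x, *true) + rng.normal(0, sd, x.size)
            try:
                popt, _ = curve_fit(broken_line, x, y, p0=[95.0, 0.8, 70.0], maxfev=5000)
                estimates.append(popt[1])
            except RuntimeError:
                continue  # skip the occasional non-converged fit
        return np.std(estimates)

    levels = np.linspace(0.5, 1.3, 5)  # dietary levels of the nutrient, illustrative units
    for reps in (4, 8, 16):
        se = requirement_se(levels, reps, sd=3.0)
        print(f"{reps:2d} replicates per level -> SE of requirement = {se:.3f}")
    # The smaller the standard error of the requirement, the more powerful the design.
    ```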

  7. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    ERIC Educational Resources Information Center

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  8. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
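    A simplified (non-Bayesian) Monte Carlo version of the stage-error propagation can be sketched as follows; the power-law rating curve, its parameters, and the error magnitudes are assumptions for illustration, and rating-curve parametric and structural uncertainty are ignored here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Assumed power-law rating curve Q = a * (h - b)^c  (h in m, Q in m3/s), held fixed here
    a, b, c = 25.0, 0.20, 1.8

    # One day of stage readings (m): a small flood wave, sampled every 15 minutes
    t = np.arange(0, 24, 0.25)
    h_true = 0.6 + 0.5 * np.exp(-((t - 10) / 4) ** 2)

    n_mc = 2000
    sigma_nonsys = 0.005  # non-systematic stage error (resolution, precision, waves), per reading
    sigma_sys = 0.010     # systematic stage error (gauge calibration), constant over the record

    q_samples = np.empty((n_mc, t.size))
    for i in range(n_mc):
        h = (h_true
             + rng.normal(0, sigma_nonsys, t.size)  # independent error for each reading
             + rng.normal(0, sigma_sys))            # one offset shared by the whole record
        q_samples[i] = a * np.clip(h - b, 0, None) ** c

    q_lo, q_hi = np.percentile(q_samples, [2.5, 97.5], axis=0)
    print(f"peak flow 95% interval: {q_lo.max():.2f} to {q_hi.max():.2f} m3/s")
    print(f"sd of daily mean flow: {q_samples.mean(axis=1).std():.3f} m3/s "
          "(dominated by the systematic term)")
    ```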

  9. Isotonic Regression Based-Method in Quantitative High-Throughput Screenings for Genotoxicity

    PubMed Central

    Fujii, Yosuke; Narita, Takeo; Tice, Raymond Richard; Takeda, Shunichi

    2015-01-01

    Quantitative high-throughput screenings (qHTSs) for genotoxicity are conducted as part of comprehensive toxicology screening projects. The most widely used method is to compare the dose-response data of a wild-type and DNA repair gene knockout mutants, using model-fitting to the Hill equation (HE). However, this method performs poorly when the observed viability does not fit the equation well, as frequently happens in qHTS. More capable methods must be developed for qHTS where large data variations are unavoidable. In this study, we applied an isotonic regression (IR) method and compared its performance with HE under multiple data conditions. When dose-response data were suitable to draw HE curves with upper and lower asymptotes and experimental random errors were small, HE was better than IR, but when random errors were big, there was no difference between HE and IR. However, when the drawn curves did not have two asymptotes, IR showed better performance (p < 0.05, exact paired Wilcoxon test) with higher specificity (65% in HE vs. 96% in IR). In summary, IR performed similarly to HE when dose-response data were optimal, whereas IR clearly performed better in suboptimal conditions. These findings indicate that IR would be useful in qHTS for comparing dose-response data. PMID:26673567
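    The contrast between the two estimators can be sketched with scipy and scikit-learn; the viability data below are synthetic, and a real qHTS analysis would compare wild-type against repair-deficient curves rather than fit a single curve.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from sklearn.isotonic import IsotonicRegression

    def hill(dose, top, bottom, ec50, n):
        """Hill equation for a decreasing viability curve."""
        return bottom + (top - bottom) / (1.0 + (dose / ec50) ** n)

    rng = np.random.default_rng(6)
    log_dose = np.linspace(-2, 2, 15)  # log10 concentration
    dose = 10.0 ** log_dose
    viability = hill(dose, 100, 20, 1.0, 1.5) + rng.normal(0, 8, dose.size)

    # Hill-equation fit (can behave poorly when the data show no clear asymptotes)
    popt, _ = curve_fit(hill, dose, viability, p0=[100, 10, 1.0, 1.0],
                        bounds=([0, 0, 1e-3, 0.3], [200, 100, 100, 5]))

    # Isotonic regression: only assumes viability is non-increasing with dose
    iso = IsotonicRegression(increasing=False)
    viability_iso = iso.fit_transform(log_dose, viability)

    print("Hill fit (top, bottom, EC50, n):", np.round(popt, 2))
    print("isotonic fit at each dose:", np.round(viability_iso, 1))
    ```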

  10. Intra and inter-session reliability of rapid Transcranial Magnetic Stimulation stimulus-response curves of tibialis anterior muscle in healthy older adults.

    PubMed

    Peri, Elisabetta; Ambrosini, Emilia; Colombo, Vera Maria; van de Ruit, Mark; Grey, Michael J; Monticone, Marco; Ferriero, Giorgio; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Ferrante, Simona

    2017-01-01

    The clinical use of Transcranial Magnetic Stimulation (TMS) as a technique to assess corticospinal excitability is limited by the time for data acquisition and the measurement variability. This study aimed at evaluating the reliability of Stimulus-Response (SR) curves acquired with a recently proposed rapid protocol on the tibialis anterior muscle of healthy older adults. Twenty-four neurologically intact adults (age: 55-75 years) were recruited for this test-retest study. During each session, six SR curves, 3 at rest and 3 during isometric muscle contractions at 5% of maximum voluntary contraction (MVC), were acquired. Motor Evoked Potentials (MEPs) were normalized to the maximum peripherally evoked response; the coil position and orientation were monitored with an optical tracking system. Intra- and inter-session reliability of motor threshold (MT), area under the curve (AURC), MEPmax, stimulation intensity at which the MEP is mid-way between MEPmax and MEPmin (I50), slope in I50, MEP latency, and silent period (SP) were assessed in terms of Standard Error of Measurement (SEM), relative SEM, Minimum Detectable Change (MDC), and Intraclass Correlation Coefficient (ICC). The relative SEM was ≤10% for MT, I50, latency and SP both at rest and at 5%MVC, while it ranged between 11% and 37% for AURC, MEPmax, and slope. MDC values were overall quite large; e.g., MT required a change of 12%MSO at rest and 10%MSO at 5%MVC to be considered a real change. Inter-session ICCs were >0.6 for all measures except slope at rest, and MEPmax and latency at 5%MVC. Measures derived from SR curves acquired in <4 minutes are affected by measurement errors similar to those found with long-lasting protocols, suggesting that the rapid method is at least as reliable as the traditional methods. As it was specifically designed to include older adults, this study provides normative data for future studies involving older neurological patients (e.g. stroke survivors).
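    For readers unfamiliar with the reliability statistics used here, the following sketch computes SEM, MDC95, and a two-way random-effects ICC(2,1) from synthetic test-retest motor-threshold data; the formulas are the standard ones, not the authors' code, and the numbers are invented.

    ```python
    import numpy as np

    def icc_2_1(data):
        """ICC(2,1): two-way random effects, absolute agreement, single measurement."""
        n, k = data.shape                     # subjects x sessions
        grand = data.mean()
        ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
        ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)  # between sessions
        sse = np.sum((data - data.mean(axis=1, keepdims=True)
                           - data.mean(axis=0, keepdims=True) + grand) ** 2)
        ms_err = sse / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    rng = np.random.default_rng(7)
    n_subjects = 24
    true_mt = rng.normal(45, 8, n_subjects)  # hypothetical motor thresholds (%MSO)
    sessions = np.column_stack([true_mt + rng.normal(0, 4, n_subjects) for _ in range(2)])

    icc = icc_2_1(sessions)
    sem = sessions.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2) * sem                # minimum detectable change (95% confidence)
    print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.1f} %MSO, MDC95 = {mdc95:.1f} %MSO")
    ```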

  11. Gradient, contact-free volume transfers minimize compound loss in dose-response experiments.

    PubMed

    Harris, David; Olechno, Joe; Datwani, Sammy; Ellson, Richard

    2010-01-01

    More accurate dose-response curves can be constructed by eliminating aqueous serial dilution of compounds. Traditional serial dilutions that use aqueous diluents can result in errors in dose-response values of up to 4 orders of magnitude for a significant percentage of a compound library. When DMSO is used as the diluent, the errors are reduced but not eliminated. The authors use acoustic drop ejection (ADE) to transfer different volumes of model library compounds, directly creating a concentration gradient series in the receiver assay plate. Sample losses and contamination associated with compound handling are therefore avoided or minimized, particularly in the case of less water-soluble compounds. ADE is particularly well suited for assay miniaturization, but gradient volume dispensing is not limited to miniaturized applications.

  12. A Stepwise Test Characteristic Curve Method to Detect Item Parameter Drift

    ERIC Educational Resources Information Center

    Guo, Rui; Zheng, Yi; Chang, Hua-Hua

    2015-01-01

    An important assumption of item response theory is item parameter invariance. Sometimes, however, item parameters are not invariant across different test administrations due to factors other than sampling error; this phenomenon is termed item parameter drift. Several methods have been developed to detect drifted items. However, most of the…

  13. Millennials Invading: Building Training for Today's Admissions Counselors

    ERIC Educational Resources Information Center

    Barnds, W. Kent

    2009-01-01

    As chief admissions officer at two small colleges, the author has been responsible, in part, for ensuring that entry-level admissions counselors are trained properly. He learned through trial and error, and has adapted his methods to be increasingly sensitive to the learning curve of new employees. His thoughts about training new admissions…

  14. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…

  15. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The key step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
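    A much-simplified sketch of the idea (Gaussian rather than the true line profiles, and a single residual-feedback pass) is given below; peak positions, widths, and the compensation-scheme details are assumptions for illustration, not the paper's procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, c1, w1, a2, c2, w2):
        """Two overlapping emission lines, approximated here by Gaussian profiles."""
        return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
                + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

    # Synthetic overlapping lines near 325 nm with noise and a small unmodelled baseline
    rng = np.random.default_rng(8)
    wl = np.linspace(321, 327, 400)
    spectrum = (two_gaussians(wl, 1.0, 324.7, 0.15, 0.55, 325.0, 0.20)
                + 0.03 * (wl - 321) / 6 + rng.normal(0, 0.01, wl.size))

    p0 = [0.8, 324.6, 0.2, 0.4, 325.1, 0.2]
    popt, _ = curve_fit(two_gaussians, wl, spectrum, p0=p0, maxfev=20000)

    # Compensation pass: feed the fitting residual back into the data and refit, pulling
    # the model toward structure missed by the first pass
    residual = spectrum - two_gaussians(wl, *popt)
    popt2, _ = curve_fit(two_gaussians, wl, spectrum + residual, p0=popt, maxfev=20000)

    for label, p in [("first fit         ", popt), ("after compensation", popt2)]:
        print(label, "amplitude*width per peak:", np.round([p[0] * p[2], p[3] * p[5]], 4))
    ```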

  16. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  17. The multiple hop test: a discriminative or evaluative instrument for chronic ankle instability?

    PubMed

    Eechaute, Christophe; Bautmans, Ivan; De Hertogh, Willem; Vaes, Peter

    2012-05-01

    To determine whether the multiple hop test should be used as an evaluative or a discriminative instrument for chronic ankle instability (CAI). Blinded case-control study in a university research laboratory. Twenty-nine healthy subjects (21 men, 8 women, mean age 21.8 years) and 29 patients with CAI (17 men, 12 women, mean age 24.9 years) were selected. Subjects performed a multiple hop test and hopped on 10 different tape markers while trying to avoid any postural correction. Minimal detectable changes (MDCs) of the number of balance errors, the time value, and the visual analog scale (VAS) score (perceived difficulty) were calculated as evaluative measures. For the discriminative properties, a receiver operating characteristic curve was determined and the area under the curve (AUC), sensitivity, specificity, diagnostic accuracy (DA), and likelihood ratios (LRs) were calculated for whether 1, 2, or 3 outcomes were positive. Based on their MDCs, outcomes should change by more than 7 errors (41%), 6 seconds (15%), and 27 mm (55%, VAS score), respectively, before the change can be considered real. Areas under the curve were, respectively, 79% (errors), 77% (time value), and 65% (VAS score). The most optimal cutoff points were, respectively, 13.5 errors, 35 seconds, and 32.5 mm. When 2 of 3 outcomes were positive, the sensitivity was 86%, the specificity was 79%, the DA was 83%, the positive LR was 4.2, and the negative LR was 0.17. The multiple hop test seems to be more of a discriminative instrument for CAI, and its responsiveness needs to be demonstrated.

  18. Dynamic analysis of spiral bevel and hypoid gears with high-order transmission errors

    NASA Astrophysics Data System (ADS)

    Yang, J. J.; Shi, Z. H.; Zhang, H.; Li, T. X.; Nie, S. W.; Wei, B. Y.

    2018-03-01

    A new gear surface modification methodology based on curvature synthesis is proposed in this study to improve transmission performance. The generated high-order transmission error (TE) for spiral bevel and hypoid gears is shown to reduce the vibration of the geared-rotor system. The method comprises the following steps: First, fully conjugate gear surfaces with the pinion flank modified according to the predesigned relative transmission movement are established based on curvature correction. Second, a 14-DOF geared-rotor system model considering backlash nonlinearity is used to evaluate the effect of different orders of TE on the dynamic performance of a hypoid gear transmission system. As a case study, numerical simulation is performed to illustrate the dynamic response of a hypoid gear pair with parabolic, fourth-order and sixth-order transmission errors. The results show that the parabolic TE curve has a higher peak-to-peak amplitude than the other two types of TE; thus, the excited dynamic response also shows larger amplitude at the response peaks. Dynamic responses excited by fourth- and sixth-order TE also demonstrate distinct response components due to their different TE periods, which are expected to generate different sound quality or other acoustic characteristics.

  19. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical in the prediction of the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of the breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but the time dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model that is trained using the Levenberg-Marquardt algorithm utilizing synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters. An ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a 3-Dimensional case, first with perfect data and then with erroneous data with an error level up to 10 percent. Since the solution is highly sensitive to the errors in the input data, instead of using the raw data, we smoothen the upper half of the erroneous breakthrough curve by approximating it with a fourth order polynomial which is used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and aquifer parameters reasonably well. Results for the case with erroneous data having different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that with increase in the error level, the correlation coefficient of the training, testing and validation regressions tends to decrease, although the value stays within acceptable limits even for reasonably large error levels.

  20. Accounting for sampling variability, injury under-reporting, and sensor error in concussion injury risk curves.

    PubMed

    Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B

    2015-09-18

    There has been a recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we found strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. Copyright © 2015 Elsevier Ltd. All rights reserved.
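
    As a rough illustration of the under-reporting effect described above (not a reproduction of the published Bayesian analysis), the following sketch fits a logistic risk curve to invented head-impact data and refits it after relabeling a fraction of high-acceleration non-injuries as unreported concussions; the adjusted curve assigns higher risk at a given rotational acceleration.

```python
# Hedged illustration with synthetic impacts; all values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Synthetic impacts: rotational acceleration (krad/s^2) and observed injury labels.
accel = rng.uniform(0.5, 8.0, 2000)
p_true = 1.0 / (1.0 + np.exp(-(accel - 6.0) / 0.8))
injury = (rng.uniform(size=accel.size) < p_true).astype(int)

def risk_at(model, a):
    """Predicted concussion probability at rotational acceleration a (krad/s^2)."""
    return float(model.predict_proba([[a]])[0, 1])

base = LogisticRegression().fit(accel[:, None], injury)

# Suppose 10% of high-acceleration "non-injury" impacts were unreported concussions.
relabeled = injury.copy()
candidates = np.where((injury == 0) & (accel > 4.0))[0]
flip = rng.choice(candidates, size=int(0.1 * candidates.size), replace=False)
relabeled[flip] = 1
adjusted = LogisticRegression().fit(accel[:, None], relabeled)

print("risk at 5 krad/s^2:", round(risk_at(base, 5.0), 3),
      "->", round(risk_at(adjusted, 5.0), 3))
```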

  1. Intra and inter-session reliability of rapid Transcranial Magnetic Stimulation stimulus-response curves of tibialis anterior muscle in healthy older adults

    PubMed Central

    Colombo, Vera Maria; van de Ruit, Mark; Grey, Michael J.; Monticone, Marco; Ferriero, Giorgio; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Ferrante, Simona

    2017-01-01

    Objective: The clinical use of Transcranial Magnetic Stimulation (TMS) as a technique to assess corticospinal excitability is limited by the time for data acquisition and the measurement variability. This study aimed at evaluating the reliability of Stimulus-Response (SR) curves acquired with a recently proposed rapid protocol on the tibialis anterior muscle of healthy older adults. Methods: Twenty-four neurologically intact adults (age: 55–75 years) were recruited for this test-retest study. During each session, six SR curves, three at rest and three during isometric muscle contractions at 5% of maximum voluntary contraction (MVC), were acquired. Motor Evoked Potentials (MEPs) were normalized to the maximum peripherally evoked response; the coil position and orientation were monitored with an optical tracking system. Intra- and inter-session reliability of motor threshold (MT), area under the curve (AURC), MEPmax, stimulation intensity at which the MEP is mid-way between MEPmax and MEPmin (I50), slope at I50, MEP latency, and silent period (SP) were assessed in terms of Standard Error of Measurement (SEM), relative SEM, Minimum Detectable Change (MDC), and Intraclass Correlation Coefficient (ICC). Results: The relative SEM was ≤10% for MT, I50, latency, and SP both at rest and at 5%MVC, while it ranged between 11% and 37% for AURC, MEPmax, and slope. MDC values were overall quite large; e.g., MT required a change of 12%MSO at rest and 10%MSO at 5%MVC to be considered a real change. Inter-session ICCs were >0.6 for all measures except slope at rest, and MEPmax and latency at 5%MVC. Conclusions: Measures derived from SR curves acquired in <4 minutes are affected by measurement errors similar to those found with long-lasting protocols, suggesting that the rapid method is at least as reliable as the traditional methods. Because it was specifically designed to include older adults, this study provides normative data for future studies involving older neurological patients (e.g. stroke survivors). PMID:28910370
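
    The reliability statistics named above follow standard definitions; the sketch below (invented test-retest values, not study data) shows how SEM, relative SEM, and MDC95 can be derived from a simple two-session ICC.

```python
# Minimal sketch with assumed data: SEM, relative SEM, and MDC95 for a test-retest
# measure such as motor threshold (MT). The numbers are illustrative only.
import numpy as np

session1 = np.array([42.0, 55.0, 48.0, 60.0, 51.0])   # %MSO, session 1 (hypothetical)
session2 = np.array([44.0, 53.0, 50.0, 58.0, 49.0])   # %MSO, session 2 (hypothetical)

# Simple two-session ICC(3,1)-style estimate from a two-way decomposition.
data = np.stack([session1, session2], axis=1)
n, k = data.shape
subj_means = data.mean(axis=1)
sess_means = data.mean(axis=0)
grand = data.mean()
ms_subj = k * np.sum((subj_means - grand) ** 2) / (n - 1)
ms_err = np.sum((data - subj_means[:, None] - sess_means[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
icc = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

sd_pooled = data.std(ddof=1)                 # pooled SD across all observations
sem = sd_pooled * np.sqrt(1.0 - icc)         # standard error of measurement
rel_sem = 100.0 * sem / grand                # relative SEM (%)
mdc95 = 1.96 * np.sqrt(2.0) * sem            # minimum detectable change (95%)

print(f"ICC={icc:.2f}  SEM={sem:.2f}  relSEM={rel_sem:.1f}%  MDC95={mdc95:.2f}")
```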

  2. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that sub-meter positioning accuracy was achieved.

  3. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that sub-meter positioning accuracy was achieved. PMID:26307997

  4. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models.

    PubMed

    Hoffmann, Sabine; Laurier, Dominique; Rage, Estelle; Guihenneuc, Chantal; Ancelet, Sophie

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies.
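
    The distinction between the two error structures can be illustrated with a small sketch (invented exposures, log-normal multiplicative errors as one common assumption): unshared errors vary independently across worker-years, whereas errors shared within a worker apply the same multiplicative factor to all of that worker's annual exposures.

```python
# Hedged sketch contrasting unshared and within-worker shared multiplicative errors.
# Exposure values and the error model are invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
n_workers, n_years = 5, 10
true_exposure = rng.lognormal(mean=0.0, sigma=0.5, size=(n_workers, n_years))

gsd = 1.3  # geometric standard deviation of the multiplicative measurement error

# Unshared: an independent error factor for every worker-year.
unshared = true_exposure * rng.lognormal(0.0, np.log(gsd), size=(n_workers, n_years))

# Shared within workers: one error factor per worker, applied to all of that worker's years.
worker_factor = rng.lognormal(0.0, np.log(gsd), size=(n_workers, 1))
shared_within = true_exposure * worker_factor

# Per-worker spread of the error factor: nonzero for unshared, exactly zero when shared.
print("unshared per-worker SD of error:", np.round(np.std(unshared / true_exposure, axis=1), 2))
print("shared   per-worker SD of error:", np.round(np.std(shared_within / true_exposure, axis=1), 2))
```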

  5. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    PubMed Central

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862

  6. Including sheath effects in the interpretation of planar retarding potential analyzer's low-energy ion data

    NASA Astrophysics Data System (ADS)

    Fisher, L. E.; Lynch, K. A.; Fernandes, P. A.; Bekkeng, T. A.; Moen, J.; Zettergren, M.; Miceli, R. J.; Powell, S.; Lessard, M. R.; Horak, P.

    2016-04-01

    The interpretation of planar retarding potential analyzers (RPA) during ionospheric sounding rocket missions requires modeling the thick 3D plasma sheath. This paper overviews the theory of RPAs with an emphasis placed on the impact of the sheath on current-voltage (I-V) curves. It then describes the Petite Ion Probe (PIP) which has been designed to function in this difficult regime. The data analysis procedure for this instrument is discussed in detail. Data analysis begins by modeling the sheath with the Spacecraft Plasma Interaction System (SPIS), a particle-in-cell code. Test particles are traced through the sheath and detector to determine the detector's response. A training set is constructed from these simulated curves for a support vector regression analysis which relates the properties of the I-V curve to the properties of the plasma. The first in situ use of the PIPs occurred during the MICA sounding rocket mission which launched from Poker Flat, Alaska in February of 2012. These data are presented as a case study, providing valuable cross-instrument comparisons. A heritage top-hat thermal ion electrostatic analyzer, called the HT, and a multi-needle Langmuir probe have been used to validate both the PIPs and the data analysis method. Compared to the HT, the PIP ion temperature measurements agree with a root-mean-square error of 0.023 eV. These two instruments agree on the parallel-to-B plasma flow velocity with a root-mean-square error of 130 m/s. The PIP with its field of view aligned perpendicular-to-B provided a density measurement with an 11% error compared to the multi-needle Langmuir Probe. Higher error in the other PIP's density measurement is likely due to simplifications in the SPIS model geometry.
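
    The regression step can be sketched as follows; the curves here are random toy placeholders rather than SPIS/PIP simulations, so the code only illustrates mapping I-V curve shape to a plasma parameter with support vector regression.

```python
# Hedged sketch of the analysis idea: train a support vector regression on simulated
# I-V curves so it maps curve shape to a plasma parameter such as ion temperature.
# Everything here (curve model, parameter ranges) is illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Placeholder "training set": each row is a simulated I-V curve sampled at 40
# retarding voltages; the target is the ion temperature used to generate it.
n_curves, n_voltages = 200, 40
voltages = np.linspace(0.0, 5.0, n_voltages)
ion_temp = rng.uniform(0.05, 0.3, n_curves)                      # eV (hypothetical range)
iv_curves = np.exp(-voltages[None, :] / ion_temp[:, None])       # toy curve model
iv_curves += 0.01 * rng.standard_normal(iv_curves.shape)         # measurement noise

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(iv_curves, ion_temp)

# Apply the trained regression to a new (here, synthetic) measured curve.
test_curve = np.exp(-voltages / 0.12)[None, :]
print("recovered ion temperature ~", float(model.predict(test_curve)[0]), "eV")
```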

  7. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...

  8. A complete methodology towards accuracy and lot-to-lot robustness in on-product overlay metrology using flexible wavelength selection

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Kaustuve; den Boef, Arie; Noot, Marc; Adam, Omer; Grzela, Grzegorz; Fuchs, Andreas; Jak, Martin; Liao, Sax; Chang, Ken; Couraudon, Vincent; Su, Eason; Tzeng, Wilson; Wang, Cathy; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Wang, Y. C.; Cheng, Kevin; Ke, Chih-Ming; Terng, L. G.

    2017-03-01

    The optical coupling between gratings in diffraction-based overlay produces a swing-curve-like response [1,6] of the target's signal contrast and overlay sensitivity across measurement wavelengths and polarizations. This means there are distinct measurement recipes (wavelength and polarization combinations) for a given target at which signal contrast and overlay sensitivity lie on the optimal parts of the swing curve and can provide accurate and robust measurements. Some of these optimal recipes can be ideal settings for production. The user must avoid non-optimal recipe choices (those located on the undesirable parts of the swing curve), which can produce overlay measurement errors that, depending on the amount of asymmetry and the stack, can reach several nanometers. To accurately identify these optimal operating regions of the swing curve during experimental setup, full flexibility in wavelength and polarization choices is needed. In this technical publication, a diffraction-based overlay (DBO) measurement tool with many choices of wavelengths and polarizations is used on advanced production stacks to study swing curves. Results show that the swing behavior can vary significantly depending on the stack and the presence of asymmetry, and that a solid procedure is needed to identify a recipe during setup that is robust against variations in stack and grating asymmetry. An approach is discussed for using this knowledge of the swing curve to identify a recipe that is not only accurate at setup but also robust over the wafer and from wafer to wafer. KPIs are reported at run time to ensure the quality and accuracy of the reading, essentially acting as an error bar on the overlay measurement.

  9. Development of intuitive theories of motion - Curvilinear motion in the absence of external forces

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.; Mccloskey, M.; Proffitt, D. R.

    1986-01-01

    College students and children between the ages of 4 and 12 were asked to draw the path a ball would take upon exiting a curved tube. As in previous studies, many subjects erroneously predicted curvilinear paths. However, a clear U-shaped curve was evident in the data: Preschoolers and kindergartners performed as well as college students, whereas school-aged children were more likely to make erroneous predictions. A second study suggested that the youngest children's correct responses could not be attributed to response biases or drawing abilities. This developmental trend is interpreted to mean that the school-aged children are developing intuitive theories of motion that include erroneous principles. The results are related to the 'growth errors' found in other cognitive domains and to the historical development of formal theories of motion.

  10. Data-Driven Method to Estimate Nonlinear Chemical Equivalence.

    PubMed

    Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach.
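
    The core idea, a concentration-concentration mapping built from full concentration-response curves rather than a constant factor, can be sketched for the sigmoid case with two Hill curves (invented parameters, not the article's equations). The printed ratios vary with concentration, which is the nonlinearity the article argues a single equivalency factor cannot capture.

```python
# Hedged sketch: map concentrations of chemical A to equieffective concentrations of
# chemical B using two fitted sigmoid (Hill) curves. Parameter values are invented.
import numpy as np

def hill_response(c, ec50, n):
    """Fractional response of a simple sigmoid (Hill) curve."""
    return c**n / (ec50**n + c**n)

def hill_inverse(r, ec50, n):
    """Concentration giving fractional response r on the same curve."""
    return ec50 * (r / (1.0 - r)) ** (1.0 / n)

# Invented curve parameters for chemicals A and B.
ec50_a, n_a = 2.0, 1.2
ec50_b, n_b = 15.0, 0.8

conc_a = np.logspace(-2, 1, 7)                      # concentrations of A
resp = hill_response(conc_a, ec50_a, n_a)           # responses produced by A
conc_b_equiv = hill_inverse(resp, ec50_b, n_b)      # equieffective concentrations of B

# The ratio is not constant, i.e. the "equivalency factor" is concentration dependent.
print(np.round(conc_b_equiv / conc_a, 2))
```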

  11. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  12. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity.

    PubMed

    Liu, Feng; Walters, Stephen J; Julious, Steven A

    2017-10-02

    It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can be selected for subsequent late-phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose-response curve was observed. In light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial of GSK123456 was a Bayesian Emax model, which assumes that the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a Normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax and NDLM models; both are evaluated by simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose-response curves: linear, Emax, U-shaped, and flat. It is shown that the NDLM method is flexible and can handle a wide variety of dose responses, including monotonic and non-monotonic relationships. Compared with the NDLM model, the Emax model excelled with a higher probability of selecting ED90 and a smaller average sample size when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding that a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to a pharmaceutical company. The bias, the difference between the effect estimated by the Emax or NDLM model and the simulated value, is comparable if the true dose response follows a placebo-like, Emax-like, or log-linear curve under fixed-dose allocation, no adaptive allocation, half-adaptive, and adaptive scenarios. The bias, however, is significantly increased for the Emax model if the true dose response follows a U-shaped curve. In most cases the Bayesian Emax model works effectively and efficiently, with low bias and a good probability of success in the case of a monotonic dose response. However, if there is a belief that the dose response could be non-monotonic, then the NDLM is the superior model for assessing the dose response.
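
    For reference, the monotonic sigmoid Emax model mentioned above has a simple closed form; the sketch below uses invented parameter values. A curve of this form can only rise (or fall) monotonically with dose, which is why a U-shaped response motivates the more flexible NDLM.

```python
# Minimal sketch of the sigmoid Emax dose-response model; parameters are illustrative only.
import numpy as np

def emax_model(dose, e0, emax, ed50, hill=1.0):
    """Monotonic sigmoid Emax curve: effect rises from e0 toward e0 + emax."""
    return e0 + emax * dose**hill / (ed50**hill + dose**hill)

doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])
effect = emax_model(doses, e0=0.2, emax=1.5, ed50=8.0, hill=1.2)
print(np.round(effect, 3))
```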

  13. IMRT QA: Selecting gamma criteria based on error detection sensitivity.

    PubMed

    Steers, Jennifer M; Fraass, Benedick A

    2016-04-01

    The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.

  14. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    NASA Astrophysics Data System (ADS)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

    Temporal analysis of radiation from astrophysical sources such as Active Galactic Nuclei, X-ray binaries, and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally these estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase, and time lag between two shorter light curves when they cannot be divided into segments. The estimates presented here therefore allow analysis of light curves with a relatively small number of points, as well as access to the longest time-scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the Active Galactic Nucleus Akn 564. The example shows that the estimates presented here allow analysis of light curves with a relatively small number of points (∼1000).
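
    A basic lag estimate from the peak of the cross-correlation function can be sketched as follows (synthetic evenly sampled light curves; the error expressions derived in the paper are not reproduced here).

```python
# Hedged sketch: estimate the time lag between two evenly sampled light curves from
# the peak of their normalized cross-correlation function. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)

dt, n = 1.0, 1024                              # sampling interval and number of points
t = np.arange(n) * dt
signal = np.sin(2 * np.pi * t / 200.0)         # common driving variability
lag_true = 5                                   # hard band lags soft band by 5 samples

soft = signal + 0.2 * rng.standard_normal(n)
hard = np.roll(signal, lag_true) + 0.2 * rng.standard_normal(n)

# Normalized cross-correlation as a function of lag.
soft_z = (soft - soft.mean()) / soft.std()
hard_z = (hard - hard.mean()) / hard.std()
ccf = np.correlate(hard_z, soft_z, mode="full") / n
lags = (np.arange(ccf.size) - (n - 1)) * dt

print("recovered lag:", lags[np.argmax(ccf)])  # should be close to +5 time units
```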

  15. The precision of a special purpose analog computer in clinical cardiac output determination.

    PubMed Central

    Sullivan, F J; Mroz, E A; Miller, R E

    1975-01-01

    Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, the percentage standard deviation of measurement using this computer is about 8.7%. This includes the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye, and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394

  16. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power Spectral Density (PSD) is well established in optics design and manufacturing as a characterization of mid-to-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed here to study the correction ability of the MRF process for errors at different spatial frequencies. The surface figures and PSD curves of a work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at different spatial frequencies is expressed as a normalized numerical value.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.

    Purpose: In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.

  18. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner.

    PubMed

    McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S

    2011-04-01

    In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was +/- 7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.

  19. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves two steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate or edge effects and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so that true positives can be identified through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normality assumption do not agree with actual error rates, because the tails of the noise distribution deviate from the normal distribution. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
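
    The closing point, that an empirically estimated null behaves better than a normal assumption when the noise is heavy-tailed, can be sketched as follows (invented scores; this is a generic empirical-null tail calculation, not the authors' summary score).

```python
# Hedged sketch: with heavy-tailed noise, a normal-theory estimate of the number of
# false hits is too small, while an empirical null tail estimate (here the mirrored
# lower tail, since actives only inflate the upper tail) tracks the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

inactive = stats.t.rvs(df=5, size=9800, random_state=rng)   # heavy-tailed null scores
actives = rng.normal(loc=6.0, scale=1.0, size=200)          # true positives
scores = np.concatenate([inactive, actives])

threshold = 4.0
called = scores > threshold

# Normal-theory estimate of false hits (robust location/scale, normal tail).
mu0 = np.median(scores)
sigma0 = stats.median_abs_deviation(scores, scale="normal")
false_normal = scores.size * stats.norm.sf(threshold, loc=mu0, scale=sigma0)

# Empirical null tail estimate: count the mirror-image lower tail.
false_empirical = np.sum(scores < 2.0 * mu0 - threshold)

true_false = np.sum(inactive > threshold)
print(f"hits: {called.sum()}, truly false: {true_false}, "
      f"normal estimate: {false_normal:.1f}, empirical estimate: {false_empirical}")
```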

  20. The learning curve of laparoscopic cholecystectomy in general surgery resident training: old age of the patient may be a risk factor?

    PubMed

    Ferrarese, Alessia; Gentile, Valentina; Bindi, Marco; Rivelli, Matteo; Cumbo, Jacopo; Solej, Mario; Enrico, Stefano; Martino, Valter

    2016-01-01

    A well-designed learning curve is essential for the acquisition of laparoscopic skills; but are there risk factors that can derail the surgical method? From a review of the current literature on the learning curve in laparoscopic surgery, we identified learning curve components in video laparoscopic cholecystectomy; we suggest a learning curve model that can be applied to assess the progress of general surgical residents as they learn and master the stages of video laparoscopic cholecystectomy regardless of the type of patient. Electronic databases were interrogated to better define the terms "surgeon", "specialized surgeon", and "specialist surgeon"; we surveyed the literature on surgical residency programs outside Italy to identify learning curve components, influential factors, the importance of tutoring, and the role of reference centers in residency education in surgery. From the definition of acceptable error, self-efficacy, and error classification, we devised a learning curve model that may be applied to training surgical residents in video laparoscopic cholecystectomy. Based on the criteria culled from the literature, the three surgeon categories (general, specialized, and specialist) are distinguished by years of experience, case volume, and error rate; the patients were distinguished by age and clinical characteristics. The training model was constructed as a series of key learning steps in video laparoscopic cholecystectomy. Potential errors were identified and the difficulty of each step was graded using operation-specific characteristics. On completion of each procedure, error checklist scores on procedure-specific performance are tallied to track the learning curve and obtain performance indices that chart the trainee's progress. The concept of the learning curve in general surgery is disputed. The use of learning steps may enable the resident surgical trainee to acquire video laparoscopic cholecystectomy skills proportional to the instructor's ability, the trainee's own skills, and the safety of the surgical environment. No patient characteristics were found that could derail the method.

  1. Methodology for rheological testing of engineered biomaterials at low audio frequencies

    NASA Astrophysics Data System (ADS)

    Titze, Ingo R.; Klemuk, Sarah A.; Gray, Steven

    2004-01-01

    A commercial rheometer (Bohlin CVO120) was used to mechanically test materials that approximate vocal-fold tissues. Application is to frequencies in the low audio range (20-150 Hz). Because commercial rheometers are not specifically designed for this frequency range, a primary problem is maintaining accuracy up to (and beyond) the mechanical resonance frequency of the rotating shaft assembly. A standard viscoelastic material (NIST SRM 2490) has been used to calibrate the rheometric system for an expanded frequency range. Mathematically predicted response curves are compared to measured response curves, and an error analysis is conducted to determine the accuracy to which the elastic modulus and the shear modulus can be determined in the 20-150-Hz region. Results indicate that the inertia of the rotating assembly and the gap between the plates need to be known (or determined empirically) to a high precision when the measurement frequency exceeds the resonant frequency. In addition, a phase correction is needed to account for the magnetic inertia (inductance) of the drag cup motor. Uncorrected, the measured phase can go below the theoretical limit of -π. This can produce large errors in the viscous modulus near and above the resonance frequency. With appropriate inertia and phase corrections, +/-10% accuracy can be obtained up to twice the resonance frequency.

  2. Recognition errors suggest fast familiarity and slow recollection in rhesus monkeys

    PubMed Central

    Basile, Benjamin M.; Hampton, Robert R.

    2013-01-01

    One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses can demonstrate dual processes has been repeatedly challenged. Here, we present independent converging evidence for the dual-process model from analyses of recognition errors made by rhesus monkeys. Recognition choices were made in three different ways depending on processing duration. Short-latency errors were disproportionately false alarms to familiar lures, suggesting control by familiarity. Medium-latency responses were less likely to be false alarms and were more accurate, suggesting onset of a recollective process that could correctly reject familiar lures. Long-latency responses were guesses. A response deadline increased false alarms, suggesting that limiting processing time weakened the contribution of recollection and strengthened the contribution of familiarity. Together, these findings suggest fast familiarity and slow recollection in monkeys, that monkeys use a “recollect to reject” strategy to countermand false familiarity, and that primate recognition performance is well-characterized by a dual-process model consisting of recollection and familiarity. PMID:23864646

  3. Class-specific Error Bounds for Ensemble Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prenger, R; Lemmond, T; Varshney, K

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  4. conindex: Estimation of concentration indices

    PubMed Central

    O'Donnell, Owen; O'Neill, Stephen; Van Ourti, Tom; Walsh, Brendan

    2016-01-01

    Concentration indices are frequently used to measure inequality in one variable over the distribution of another. Most commonly, they are applied to the measurement of socioeconomic-related inequality in health. We introduce a user-written Stata command conindex which provides point estimates and standard errors of a range of concentration indices. The command also graphs concentration curves (and Lorenz curves) and performs statistical inference for the comparison of inequality between groups. The article offers an accessible introduction to the various concentration indices that have been proposed to suit different measurement scales and ethical responses to inequality. The command’s capabilities and syntax are demonstrated through analysis of wealth-related inequality in health and healthcare in Cambodia. PMID:27053927
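
    A standard concentration index calculation (the covariance, or "convenient", formula) can be sketched in a few lines; this is a generic illustration with invented data, not the conindex command itself.

```python
# Hedged sketch of a standard concentration index: C = 2 * cov(h, r) / mean(h),
# where r is the fractional rank in the living-standards distribution. Data are invented.
import numpy as np

health = np.array([0.9, 0.7, 0.8, 0.5, 0.6, 0.4, 0.3, 0.2])    # health variable
wealth = np.array([8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0])    # ranking variable

order = np.argsort(wealth)                         # rank individuals from poorest to richest
n = health.size
frac_rank = (np.arange(1, n + 1) - 0.5) / n        # fractional ranks
h_sorted = health[order]

conc_index = 2.0 * np.cov(h_sorted, frac_rank, bias=True)[0, 1] / h_sorted.mean()
print(f"concentration index = {conc_index:.3f}")   # > 0: health concentrated among the better-off
```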

  5. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.

  6. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not previously been used to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
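
    As a generic illustration of fitting a behavioral economic demand curve (here an exponential demand equation in the style of Hursh and Silberberg, with invented consumption-price data; this is not the nonlinear mixed-effects model proposed in the article):

```python
# Hedged sketch only: fit an exponential demand equation to invented data and report
# parameter estimates with their standard errors.
import numpy as np
from scipy.optimize import curve_fit

def log_demand(price, q0, alpha, k=2.0):
    """log10 consumption as a function of unit price (exponential demand form)."""
    return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

price = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
consumption = np.array([9.5, 9.0, 8.0, 6.0, 3.0, 1.0])          # invented purchases

popt, pcov = curve_fit(log_demand, price, np.log10(consumption), p0=[10.0, 0.01])
q0_hat, alpha_hat = popt
perr = np.sqrt(np.diag(pcov))                                    # standard errors

print(f"Q0 = {q0_hat:.2f} ± {perr[0]:.2f},  alpha = {alpha_hat:.4f} ± {perr[1]:.4f}")
```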

  7. Response analysis of holography-based modal wavefront sensor.

    PubMed

    Dong, Shihao; Haist, Tobias; Osten, Wolfgang; Ruppel, Thomas; Sawodny, Oliver

    2012-03-20

    The crosstalk problem of holography-based modal wavefront sensing (HMWS) becomes more severe with increasing aberration. In this paper, crosstalk effects on the sensor response are analyzed statistically for typical aberrations due to atmospheric turbulence. For specific turbulence strength, we optimized the sensor by adjusting the detector radius and the encoded phase bias for each Zernike mode. Calibrated response curves of low-order Zernike modes were further utilized to improve the sensor accuracy. The simulation results validated our strategy. The number of iterations for obtaining a residual RMS wavefront error of 0.1λ is reduced from 18 to 3. © 2012 Optical Society of America

  8. On the cost of approximating and recognizing a noise perturbed straight line or a quadratic curve segment in the plane. [central processing units

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.; Yalabik, N.

    1975-01-01

    Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.

  9. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  10. [Difference of three standard curves of real-time reverse-transcriptase PCR in viable Vibrio parahaemolyticus quantification].

    PubMed

    Jin, Mengtong; Sun, Wenshuo; Li, Qin; Sun, Xiaohong; Pan, Yingjie; Zhao, Yong

    2014-04-04

    We evaluated the differences among three standard curves for quantifying viable Vibrio parahaemolyticus in samples by real-time reverse-transcriptase PCR (real-time RT-PCR). Standard curve A was established from 10-fold serial dilutions of cDNA reverse transcribed from RNA synthesized in vitro. Standard curves B and C were established from 10-fold serial dilutions of cDNA synthesized from RNA isolated from V. parahaemolyticus in pure cultures (10^8 CFU/mL) and in shrimp samples (10^6 CFU/g), respectively (standard curves A and C are proposed here for the first time). The three standard curves were used to quantitatively detect V. parahaemolyticus in six samples (two pure-culture V. parahaemolyticus samples, two artificially contaminated cooked Litopenaeus vannamei samples, and two artificially contaminated Litopenaeus vannamei samples). We then compared the quantitative results of each standard curve with the plate counting results and analysed the differences. All three standard curves show a strong linear relationship between the fractional cycle number and V. parahaemolyticus concentration (R^2 > 0.99). The quantitative results of real-time PCR were significantly (p < 0.05) lower than the plate counting results. The relative errors compared with plate counting ranked standard curve A (30.0%) > standard curve C (18.8%) > standard curve B (6.9%). The average differences between standard curve A and standard curves B and C were -2.25 lg CFU/mL and -0.75 lg CFU/mL, respectively, and the mean relative errors were 48.2% and 15.9%, respectively. The average difference between standard curves B and C ranged from 1.47 to 1.53 lg CFU/mL and the average relative errors ranged from 19.0% to 23.8%. Standard curve B can be applied in real-time RT-PCR to quantify the number of viable microorganisms in samples.

  11. In vivo proton dosimetry using a MOSFET detector in an anthropomorphic phantom with tissue inhomogeneity.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko

    2012-03-08

    When in vivo proton dosimetry is performed with a metal-oxide semiconductor field-effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth-output curve for a mono-energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMC(MOSFET)). The SMC(MOSFET) output value at an arbitrary point was compared with the value obtained by the conventional SMC(PPIC), which calculates proton dose distributions by using the depth-dose curve determined by a parallel-plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMC(PPIC) results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector.

  12. Strain actuated aeroelastic control

    NASA Technical Reports Server (NTRS)

    Lazarus, Kenneth B.

    1992-01-01

    Viewgraphs on strain actuated aeroelastic control are presented. Topics covered include: structural and aerodynamic modeling; control law design methodology; system block diagram; adaptive wing test article; bench-top experiments; bench-top disturbance rejection: open and closed loop response; bench-top disturbance rejection: state cost versus control cost; wind tunnel experiments; wind tunnel gust alleviation: open and closed loop response at 60 mph; wind tunnel gust alleviation: state cost versus control cost at 60 mph; wind tunnel command following: open and closed loop error at 60 mph; wind tunnel flutter suppression: open loop flutter speed; and wind tunnel flutter suppression: closed loop state cost curves.

  13. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.

  14. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
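
    A minimal sketch (not the authors' implementation) of propagating a wind-speed measurement error through a turbine power curve; the power curve and wind-speed series below are generic placeholders, not one of the 28 Lagrange-fitted curves used in the study.

      import numpy as np

      # Placeholder power curve for a generic turbine: wind speed (m/s) -> power (kW)
      v_grid = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0])
      p_grid = np.array([0.0, 120.0, 400.0, 900.0, 1500.0, 1900.0, 2000.0])

      def power(v):
          return np.interp(v, v_grid, p_grid)

      # Hypothetical measured wind-speed series and an assumed 10% relative speed error
      v = np.array([4.2, 6.8, 8.5, 10.1, 12.3, 7.9])
      rel_err = 0.10

      p_nominal = power(v).sum()
      p_low = power(v * (1.0 - rel_err)).sum()
      p_high = power(v * (1.0 + rel_err)).sum()

      # Relative error propagated from the speed measurement into the power estimate
      propagated = (p_high - p_low) / (2.0 * p_nominal)
      print(f"nominal output sum = {p_nominal:.0f} kW, propagated relative error ~ {propagated:.1%}")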

  15. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  16. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    DOEpatents

    Chan, George C. Y. [Bloomington, IN]; Hieftje, Gary M [Bloomington, IN]

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.

  17. LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration

    NASA Astrophysics Data System (ADS)

    Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng

    2016-12-01

    The task of flux calibration for Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding for a large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars not only might induce errors to the calculated spectral response curve (SRC) but also might lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrograph. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, with more than seven exposures for each fiber. We calculated the SRCs for each fiber for each exposure and calculated the statistics of SRCs for spectrographs with both the fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.

  18. Quantitative analysis of essential oils in perfume using multivariate curve resolution combined with comprehensive two-dimensional gas chromatography.

    PubMed

    de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio

    2011-08-05

    The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary and the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in perfume agreed with the value reported by the manufacturer. The result indicates that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models of GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.
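
    A minimal sketch (not from the paper) of the two figures of merit quoted above, RMSEP and RMSPD; the reference and predicted concentrations are illustrative placeholders.

      import numpy as np

      def rmsep(y_true, y_pred):
          """Root mean square error of prediction, in the units of y."""
          return np.sqrt(np.mean((y_pred - y_true) ** 2))

      def rmspd(y_true, y_pred):
          """Root mean square of the percentage deviations."""
          return np.sqrt(np.mean(((y_pred - y_true) / y_true) ** 2)) * 100.0

      # Illustrative essential-oil concentrations, % (v/v)
      y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.3])
      print(rmsep(y_true, y_pred), rmspd(y_true, y_pred))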

  19. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the research report period are included. Among the research results reported, particular note should be made of the investigation into determining design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values whose projection from the curve onto the surface corresponds to the smallest value of the error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.

  20. Language of Mechanisms: Exam Analysis Reveals Students' Strengths, Strategies, and Errors When Using the Electron-Pushing Formalism (Curved Arrows) in New Reactions

    ERIC Educational Resources Information Center

    Flynn, Alison B.; Featherstone, Ryan B.

    2017-01-01

    This study investigated students' successes, strategies, and common errors in their answers to questions that involved the electron-pushing (curved arrow) formalism (EPF), part of organic chemistry's language. We analyzed students' answers to two question types on midterms and final exams: (1) draw the electron-pushing arrows of a reaction step,…

  1. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measurement from dynamic thin-slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol under both rest and vasodilator stress conditions. Using the measured input function plus single (enhanced CT only) or double (enhanced and contrast-free baseline CTs) myocardial acquisitions yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error compared to the measured input function of 26.0%, which led to MBF estimation errors more than three times higher than those obtained using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.

  2. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner

    PubMed Central

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.; Van Lysel, Michael S.

    2011-01-01

    Purpose: In this study, newly formulated XR-RV3 GafChromic® film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases. PMID:21626925

  3. Better P-curves: Making P-curve analysis more robust to errors, fraud, and ambitious P-hacking, a Reply to Ulrich and Miller (2015).

    PubMed

    Simonsohn, Uri; Simmons, Joseph P; Nelson, Leif D

    2015-12-01

    When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness. (c) 2015 APA, all rights reserved.

  4. Titration Curves: Fact and Fiction.

    ERIC Educational Resources Information Center

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…

  5. Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code

    NASA Technical Reports Server (NTRS)

    Gonzales, Fabian O.; Velez-Reyes, Miguel

    1997-01-01

    When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using a MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion, (2) using MODTRAN 3.5 a simulation of the image's characteristic curves is generated, (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface, and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10-5 and up to 14 times smaller than errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10-4, and up to 41 times smaller than errors in the visible region.

  6. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    PubMed

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
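
    A minimal sketch (not the boosting-type method proposed in the paper) of computing the empirical ROC curve, the full AUC, and a partial AUC restricted to low false-positive rates; the scores and labels are synthetic placeholders.

      import numpy as np

      def roc_curve(scores, labels):
          """Empirical ROC points (FPR, TPR) obtained by sweeping the threshold over the scores."""
          order = np.argsort(-scores)
          labels = labels[order]
          tpr = np.cumsum(labels) / labels.sum()
          fpr = np.cumsum(1 - labels) / (1 - labels).sum()
          return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

      def auc(fpr, tpr, fpr_max=1.0):
          """Area under the ROC curve up to fpr_max (partial AUC when fpr_max < 1)."""
          mask = fpr <= fpr_max
          return np.trapz(tpr[mask], fpr[mask])

      rng = np.random.default_rng(0)
      labels = np.array([1] * 50 + [0] * 50)
      scores = np.where(labels == 1, rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100))

      fpr, tpr = roc_curve(scores, labels)
      print("AUC =", auc(fpr, tpr), " partial AUC (FPR <= 0.1) =", auc(fpr, tpr, fpr_max=0.1))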

  7. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which the prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s, θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0mm and underrange of 0.6mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1mm and 4.3mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2mm and 3.2mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and the errors in the calibration curve. The simplicity and speed of our method makes it a good candidate for being implemented as a tool for in-room adaptive therapy. This work also demonstrates that the prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards an adaptive proton radiotherapy.

  8. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
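
    A minimal sketch in Python (the original AKLSQF program was written in Quick Basic) of the behavior described above: increase the polynomial degree until the least-squares fit error to uniformly spaced data falls below a user-specified tolerance, then return both the polynomial and the error actually incurred. The data and tolerance are placeholders.

      import numpy as np

      def fit_to_tolerance(x, y, tol, max_degree=10):
          """Return the lowest-degree polynomial whose RMS fit error is below tol."""
          for degree in range(1, max_degree + 1):
              coeffs = np.polyfit(x, y, degree)
              rms_error = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
              if rms_error <= tol:
                  return coeffs, degree, rms_error
          return coeffs, max_degree, rms_error   # best effort if the tolerance is never reached

      # Uniformly spaced sample data (placeholder, not from the program's documentation)
      x = np.linspace(0.0, 2.0, 21)
      y = 1.0 + 0.5 * x - 0.3 * x**2 + 0.02 * np.sin(8 * x)

      coeffs, degree, err = fit_to_tolerance(x, y, tol=0.01)
      print(f"degree = {degree}, rms error = {err:.4f}")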

  9. The ipRGC-Driven Pupil Response with Light Exposure, Refractive Error, and Sleep.

    PubMed

    Abbott, Kaleb S; Queener, Hope M; Ostrin, Lisa A

    2018-04-01

    We investigated links between the intrinsically photosensitive retinal ganglion cells, light exposure, refractive error, and sleep. Results showed that morning melatonin was associated with light exposure, with modest differences in sleep quality between myopes and emmetropes. Findings suggest a complex relationship between light exposure and these physiological processes. Intrinsically photosensitive retinal ganglion cells (ipRGCs) signal environmental light, with pathways to the midbrain to control pupil size and circadian rhythm. Evidence suggests that light exposure plays a role in refractive error development. Our goal was to investigate links between light exposure, ipRGCs, refractive error, and sleep. Fifty subjects, aged 17-40, participated (19 emmetropes and 31 myopes). A subset of subjects (n = 24) wore an Actiwatch Spectrum for 1 week. The Pittsburgh Sleep Quality Index (PSQI) was administered, and saliva samples were collected for melatonin analysis. The post-illumination pupil response (PIPR) to 1 s and 5 s long- and short-wavelength stimuli was measured. Pupil metrics included the 6 s and 30 s PIPR and early and late area under the curve. Subjects spent 104.8 ± 46.6 min outdoors per day over the previous week. Morning melatonin concentration (6.9 ± 3.5 pg/ml) was significantly associated with time outdoors and objectively measured light exposure (P = .01 and .002, respectively). Pupil metrics were not significantly associated with light exposure or refractive error. PSQI scores indicated good sleep quality for emmetropes (score 4.2 ± 2.3) and poor sleep quality for myopes (5.6 ± 2.2, P = .04). We found that light exposure and time outdoors influenced morning melatonin concentration. No differences in melatonin or the ipRGC-driven pupil response were observed between refractive error groups, although myopes exhibited poor sleep quality compared to emmetropes. Findings suggest that a complex relationship between light exposure, ipRGCs, refractive error, and sleep exists.

  10. Clinical and Radiographic Evaluation of Procedural Errors during Preparation of Curved Root Canals with Hand and Rotary Instruments: A Randomized Clinical Study.

    PubMed

    Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim

    2017-01-01

    Cleaning and shaping the canal is not an easy goal to achieve, as canal curvature plays a significant role during the instrumentation of curved canals. The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the rotary K3XF system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis; the chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Canal curvature played a significant role during the instrumentation of the curved canals, and procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex.

  11. Clinical and Radiographic Evaluation of Procedural Errors during Preparation of Curved Root Canals with Hand and Rotary Instruments: A Randomized Clinical Study

    PubMed Central

    Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim

    2017-01-01

    Background: Cleaning and shaping the canal is not an easy goal to achieve, as canal curvature plays a significant role during the instrumentation of curved canals. Aim: The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Materials and Methods: Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the rotary K3XF system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis. Statistical Analysis: The chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Results: Both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Conclusion: Canal curvature played a significant role during the instrumentation of the curved canals, and procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex. PMID:29042727

  12. A second-order 3D electromagnetics algorithm for curved interfaces between anisotropic dielectrics on a Yee mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.

    A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.

  13. Volume Phase Masks in Photo-Thermo-Refractive Glass

    DTIC Science & Technology

    2014-10-06

    development when forming the nanocrystals. Fig. 1.1 shows the refractive index change curves for some common glass melts when exposed to a beam at 325 nm ... integral curve to the curve for the ideal phase mask. If there is a deviation in the experimental curve from the ideal curve, whether the overlap ... redevelopments of the sample. Note that the third point on the spherical curve and the third and fourth points on the coma y curve have larger error bars than

  14. New microscale constitutive model of human trabecular bone based on depth sensing indentation technique.

    PubMed

    Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty

    2018-05-30

    A new constitutive model for human trabecular bone is presented in this study. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account the non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model, while the viscoelastic effects are considered by means of the hereditary integral, in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of stress relaxation tests and indentation tests using a curve-fitting procedure. The constitutive model is implemented into the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low, and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of the trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. In vivo proton dosimetry using a MOSFET detector in an anthropomorphic phantom with tissue inhomogeneity

    PubMed Central

    Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko

    2012-01-01

    When in vivo proton dosimetry is performed with a metal‐oxide semiconductor field‐effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth‐output curve for a mono‐energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMCMOSFET). The SMCMOSFET output value at an arbitrary point was compared with the value obtained by the conventional SMCPPIC, which calculates proton dose distributions by using the depth‐dose curve determined by a parallel‐plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMCPPIC results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector. PACS number: 87.56.‐v PMID:22402385

  16. Responsiveness of performance-based outcome measures for mobility, balance, muscle strength and manual dexterity in adults with myotonic dystrophy type 1.

    PubMed

    Kierkegaard, Marie; Petitclerc, Émilie; Hébert, Luc J; Mathieu, Jean; Gagnon, Cynthia

    2018-02-28

    To assess changes and responsiveness in outcome measures of mobility, balance, muscle strength and manual dexterity in adults with myotonic dystrophy type 1. A 9-year longitudinal study conducted with 113 patients. The responsiveness of the Timed Up and Go test, Berg Balance Scale, quantitative muscle testing, grip and pinch-grip strength, and Purdue Pegboard Test was assessed using criterion and construct approaches. Patient-reported perceived changes (worse/stable) in balance, walking, lower-limb weakness, stair-climbing and hand weakness were used as criteria. Predefined hypotheses about expected area under the receiver operating characteristic curves (criterion approach) and correlations between relative changes (construct approach) were explored. The direction and magnitude of median changes in outcome measures corresponded with patient-reported changes. Median changes in the Timed Up and Go test, grip strength, pinch-grip strength and Purdue Pegboard Test did not, in general, exceed known measurement errors. Most criterion (72%) and construct (70%) approach hypotheses were supported. Promising responsiveness was found for outcome measures of mobility, balance and muscle strength. Grip strength and manual dexterity measures showed poorer responsiveness. The performance-based outcome measures captured changes over the 9-year period and responsiveness was promising. Knowledge of measurement errors is needed to interpret the meaning of these longitudinal changes.

  17. A conceptual design study of point focusing thin-film solar concentrators

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Candidates for reflector panel design concepts, including materials and configurations, were identified. The large list of candidates was screened and reduced to the five most promising ones. Cost and technical factors were used in making the final choices for the panel conceptual design, which was a stiffened steel skin substrate with a bonded, acrylic overcoated, aluminized polyester film reflective surface. Computer simulations were run for the concentrator optics using the selected panel design, and experimentally determined specularity and reflectivity values. Intercept factor curves and energy to the aperture curves were produced. These curves indicate that surface errors of 2 mrad (milliradians) or less would be required to capture the desired energy for a Brayton cycle 816 C case. Two test panels were fabricated to demonstrate manufacturability and optically tested for surface error. Surface errors in the range of 1.75 mrad and 2.2 mrad were measured.

  18. Creep and Creep Recovery Response of Load Cells Tested According to U.S. and International Evaluation Procedures

    PubMed Central

    Bartel, Thomas W.; Yaniv, Simone L.

    1997-01-01

    The 60 min creep data from National Type Evaluation Procedure (NTEP) tests performed at the National Institute of Standards and Technology (NIST) on 65 load cells have been analyzed in order to compare their creep and creep recovery responses, and to compare the 60 min creep with creep over shorter time periods. To facilitate this comparison the data were fitted to a multiple-term exponential equation, which adequately describes the creep and creep recovery responses of load cells. The use of such a curve fit reduces the effect of the random error in the indicator readings on the calculated values of the load cell creep. Examination of the fitted curves show that the creep recovery responses, after inversion by a change in sign, are generally similar in shape to the creep response, but smaller in magnitude. The average ratio of the absolute value of the maximum creep recovery to the maximum creep is 0.86; however, no reliable correlation between creep and creep recovery can be drawn from the data. The fitted curves were also used to compare the 60 min creep of the NTEP analysis with the 30 min creep and other parameters calculated according to the Organization Internationale de Métrologie Légale (OIML) R 60 analysis. The average ratio of the 30 min creep value to the 60 min value is 0.84. The OIML class C creep tolerance is less than 0.5 of the NTEP tolerance for classes III and III L. PMID:27805151

  19. An assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue, Ireland

    NASA Astrophysics Data System (ADS)

    Harrington, Seán T.; Harrington, Joseph R.

    2013-03-01

    This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstone, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are estimated and compared to the turbidity based loads for each river. Loads are also estimated using stage and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon with errors of -65% to +359% found on the River Owenabue. The average monthly error on the River Bandon is -12% with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
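
    A minimal sketch (not the authors' analysis) of the rating-curve construction described above: a linear regression on the logarithms of flow and concentration gives a power-law rating curve, which is then applied to a continuous flow record to estimate load. All flows, concentrations, and the 15-minute time step are placeholder assumptions.

      import numpy as np

      # Placeholder paired observations: flow Q (m3/s) and suspended sediment concentration C (mg/L)
      Q = np.array([2.1, 3.4, 5.0, 8.2, 12.5, 20.3, 31.0])
      C = np.array([9.0, 14.0, 22.0, 41.0, 70.0, 130.0, 210.0])

      # Linear regression on the logarithms: log C = log a + b * log Q  (power curve C = a * Q**b)
      b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
      a = np.exp(log_a)

      def rating_curve(q):
          return a * q ** b        # concentration in mg/L (= g/m3) for a given flow

      # Load over a continuous (15 min) flow record; placeholder flows in m3/s
      q_series = np.array([4.0, 4.5, 6.0, 9.5, 15.0, 11.0, 7.0])
      dt_seconds = 15 * 60
      load_grams = np.sum(rating_curve(q_series) * q_series * dt_seconds)
      print(f"a = {a:.2f}, b = {b:.2f}, load = {load_grams / 1e6:.3f} tonnes over the record")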

  20. Response of TLD-100 in mixed fields of photons and electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawless, Michael J.; Junell, Stephanie; Hammer, Cliff

    Purpose: Thermoluminescent dosimeters (TLDs) are routinely used for dosimetric measurements of high energy photon and electron fields. However, TLD response in combined fields of photon and electron beam qualities has not been characterized. This work investigates the response of TLD-100 (LiF:Mg,Ti) to sequential irradiation by high-energy photon and electron beam qualities. Methods: TLDs were irradiated to a known dose by a linear accelerator with a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable (60)Co beam. TLDs were also irradiated in a mixed field of the 6 MeV electron beam and the 6 MV photon beam. The average TLD response per unit dose of the TLDs for each linac beam quality was normalized to the average response per unit dose of the TLDs irradiated by the (60)Co beam. Irradiations were performed in water and in a Virtual Water™ phantom. The 6 MV photon beam and 6 MeV electron beam were used to create dose calibration curves relating TLD response to absorbed dose to water, which were applied to the TLDs irradiated in the mixed field. Results: TLD relative response per unit dose in the mixed field was less sensitive than the relative response in the photon field and more sensitive than the relative response in the electron field. Application of the photon dose calibration curve to the TLDs irradiated in a mixed field resulted in an underestimation of the delivered dose, while application of the electron dose calibration curve resulted in an overestimation of the dose. Conclusions: The relative response of TLD-100 in mixed fields fell between the relative response in the photon-only and electron-only fields. TLD-100 dosimetry of mixed fields must account for this intermediate response to minimize the estimation errors associated with calibration factors obtained from a single beam quality.

  1. Response of TLD-100 in mixed fields of photons and electrons.

    PubMed

    Lawless, Michael J; Junell, Stephanie; Hammer, Cliff; DeWerd, Larry A

    2013-01-01

    Thermoluminescent dosimeters (TLDs) are routinely used for dosimetric measurements of high energy photon and electron fields. However, TLD response in combined fields of photon and electron beam qualities has not been characterized. This work investigates the response of TLD-100 (LiF:Mg,Ti) to sequential irradiation by high-energy photon and electron beam qualities. TLDs were irradiated to a known dose by a linear accelerator with a 6 MV photon beam, a 6 MeV electron beam, and a NIST-traceable (60)Co beam. TLDs were also irradiated in a mixed field of the 6 MeV electron beam and the 6 MV photon beam. The average TLD response per unit dose of the TLDs for each linac beam quality was normalized to the average response per unit dose of the TLDs irradiated by the (60)Co beam. Irradiations were performed in water and in a Virtual Water™ phantom. The 6 MV photon beam and 6 MeV electron beam were used to create dose calibration curves relating TLD response to absorbed dose to water, which were applied to the TLDs irradiated in the mixed field. TLD relative response per unit dose in the mixed field was less sensitive than the relative response in the photon field and more sensitive than the relative response in the electron field. Application of the photon dose calibration curve to the TLDs irradiated in a mixed field resulted in an underestimation of the delivered dose, while application of the electron dose calibration curve resulted in an overestimation of the dose. The relative response of TLD-100 in mixed fields fell between the relative response in the photon-only and electron-only fields. TLD-100 dosimetry of mixed fields must account for this intermediate response to minimize the estimation errors associated with calibration factors obtained from a single beam quality.

  2. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazonian basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, due to factors like the basin's size and difficulty of access. One of the major challenges is to obtain discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset consists of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we show the benefits of using stochastic methods instead of deterministic ones to determine a set of rating curve parameters that is consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, but also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The error in the discharge estimates from the MGB-IPH model is also included in the rating curve determination; these errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach is more efficient than the deterministic one. By using prior credible intervals defined by the user for the parameters, this method provides the best rating curve estimate without any unlikely parameter values, and all sites achieved convergence before reaching the maximum number of model evaluations. Results were assessed through the Nash-Sutcliffe efficiency coefficient, applied both to discharges and to the logarithm of discharges. Most of the virtual stations had good or very good results, with Ens values ranging from 0.7 to 0.98. However, worse results were found at a few virtual stations, revealing the need to investigate segmentation of the rating curve, depending on the stage or on the rising or recession limb, as well as possible errors in the altimetry series.
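
    A minimal sketch (not the authors' code) of the stochastic calibration described above: a Metropolis sampler draws rating-curve parameters (a, b, h0) in Q = a(h - h0)^b from a posterior combining uniform prior credible intervals with a Gaussian likelihood that uses the model discharge uncertainties. The stage and discharge values, error model, and prior bounds are all placeholder assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      # Placeholder altimetry stages h (m) and model discharges Q (m3/s) with assumed errors
      h = np.array([10.2, 11.0, 12.3, 13.5, 14.8, 16.0])
      Q = np.array([1800.0, 2600.0, 4200.0, 6500.0, 9500.0, 13000.0])
      sigma_Q = 0.15 * Q                      # assumed discharge uncertainty from the hydrological model

      def log_post(theta):
          a, b, h0 = theta
          # Uniform priors acting as credible intervals for each parameter
          if not (1.0 < a < 1e4 and 1.0 < b < 4.0 and 0.0 < h0 < 9.0):
              return -np.inf
          q_pred = a * (h - h0) ** b
          return -0.5 * np.sum(((Q - q_pred) / sigma_Q) ** 2)

      theta = np.array([100.0, 2.0, 5.0])     # starting point inside the priors
      samples, step = [], np.array([10.0, 0.05, 0.1])
      for _ in range(20000):
          proposal = theta + step * rng.standard_normal(3)
          if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
              theta = proposal
          samples.append(theta)

      samples = np.array(samples[5000:])      # discard burn-in
      print("posterior medians (a, b, h0):", np.median(samples, axis=0))
      print("95% credible interval for b:", np.percentile(samples[:, 1], [2.5, 97.5]))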

  3. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

    A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting (HPC) pixel detectors. This approach is based on the photon transfer curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity, and the remnant errors of the flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be useful for determining the setting that yields the best image quality from a commercial or R&D detector.
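
    A minimal sketch (not from the paper) of the photon-transfer-curve idea: the variance of synthetic flat-field frames is compared with the mean signal, and the fixed-pattern contribution, characterized by the PRNU, is recovered from the quadratic term of the variance-versus-mean relation. The frame size, fluxes, and true PRNU are placeholder assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      prnu_true = 0.01                 # assumed 1% pixel-to-pixel response non-uniformity
      gain_map = 1.0 + prnu_true * rng.standard_normal((64, 64))

      means, variances = [], []
      for flux in np.linspace(100, 5000, 15):                    # increasing flat-field illumination
          frame = rng.poisson(flux, size=(64, 64)) * gain_map    # shot noise plus fixed pattern
          means.append(frame.mean())
          variances.append(frame.var())

      means, variances = np.array(means), np.array(variances)

      # Photon transfer model: variance ~ mean (shot noise) + (PRNU * mean)^2 (fixed pattern)
      # Fit the quadratic coefficient of (variance - mean) vs. mean to recover the PRNU.
      coeffs = np.polyfit(means, variances - means, 2)
      prnu_est = np.sqrt(max(coeffs[0], 0.0))
      print(f"recovered PRNU ~ {prnu_est:.3%} (true {prnu_true:.3%})")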

  4. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
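
    A minimal sketch (not the authors' implementation, which calibrates the correlated multifactor Vasicek model with an EM-type algorithm) of the general structure described above: the ordinary curve-fitting error is augmented with a consistency-hint error based on the Kullback-Leibler distance, weighted by a hint weight. The toy model, data, and distributions are placeholder assumptions.

      import numpy as np

      def fit_error(params, x, y):
          """Ordinary curve-fitting error: mean squared residual of a toy linear model."""
          a, b = params
          return np.mean((y - (a * x + b)) ** 2)

      def kl_distance(p, q):
          """Kullback-Leibler distance used as a consistency-hint error."""
          p, q = np.asarray(p, float), np.asarray(q, float)
          return float(np.sum(p * np.log(p / q)))

      def total_error(params, x, y, implied_dist, reference_dist, hint_weight=1.0):
          """Curve-fitting error augmented with a consistency-hint penalty."""
          return fit_error(params, x, y) + hint_weight * kl_distance(reference_dist, implied_dist(params))

      # Placeholder data and a placeholder model-implied distribution for the hint
      rng = np.random.default_rng(3)
      x = np.linspace(0.0, 1.0, 20)
      y = 2.0 * x + 0.5 + 0.05 * rng.standard_normal(20)
      reference_dist = np.array([0.2, 0.3, 0.5])                  # e.g. frequencies observed in the market
      implied_dist = lambda params: np.array([0.25, 0.35, 0.40])  # stand-in for the model-implied distribution

      print(total_error((2.0, 0.5), x, y, implied_dist, reference_dist, hint_weight=0.1))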

  5. Derivation of error sources for experimentally derived heliostat shapes

    NASA Astrophysics Data System (ADS)

    Cumpston, Jeff; Coventry, Joe

    2017-06-01

    Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.

  6. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding (B) psychometric curve fitting (C) smoothing and interpolation and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4% reduced to 2.9 % by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- .04 (sem) at 0.1mm and 1.82 +/- .06 at 0.25mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.
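
    A minimal sketch (not the CDCOM program or the study's fitting code) of the psychometric-curve step mentioned above: the fraction of correctly detected discs at one detail diameter is fitted with a logistic function of log contrast, and a threshold contrast is read off at a chosen detection probability. The detection fractions, guess rate, and threshold criterion are placeholder assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def psychometric(log_c, log_ct, slope):
          """Logistic psychometric function rising from a 0.25 guess rate to 1.0."""
          return 0.25 + 0.75 / (1.0 + np.exp(-slope * (log_c - log_ct)))

      # Placeholder detection fractions for one detail diameter of the CDMAM phantom
      contrast = np.array([0.05, 0.08, 0.13, 0.20, 0.32, 0.50])
      detected_fraction = np.array([0.28, 0.35, 0.55, 0.80, 0.95, 1.00])

      (log_ct, slope), _ = curve_fit(psychometric, np.log(contrast), detected_fraction, p0=[np.log(0.15), 3.0])

      # Threshold contrast taken at 62.5% correct (halfway between the guess rate and 1.0)
      print("threshold contrast ~", np.exp(log_ct))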

  7. Reduction of shading-derived artifacts in skin chromophore imaging without measurements or assumptions about the shape of the subject

    NASA Astrophysics Data System (ADS)

    Yoshida, Kenichiro; Nishidate, Izumi; Ojima, Nobutoshi; Iwata, Kayoko

    2014-01-01

    To quantitatively evaluate skin chromophores over a wide region of curved skin surface, we propose an approach that suppresses the effect of the shading-derived error in the reflectance on the estimation of chromophore concentrations, without sacrificing the accuracy of that estimation. In our method, we use multiple regression analysis, assuming the absorbance spectrum as the response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as the predictor variables. The concentrations of melanin and total hemoglobin are determined from the multiple regression coefficients using compensation formulae (CF) based on the diffuse reflectance spectra derived from a Monte Carlo simulation. To suppress the shading-derived error, we investigated three different combinations of multiple regression coefficients for the CF. In vivo measurements with the forearm skin demonstrated that the proposed approach can reduce the estimation errors that are due to shading-derived errors in the reflectance. With the best combination of multiple regression coefficients, we estimated that the ratio of the error to the chromophore concentrations is about 10%. The proposed method does not require any measurements or assumptions about the shape of the subjects; this is an advantage over other studies related to the reduction of shading-derived errors.

  8. Förster resonance energy transfer (FRET)-based picosecond lifetime reference for instrument response evaluation

    NASA Astrophysics Data System (ADS)

    Luchowski, R.; Kapusta, P.; Szabelski, M.; Sarkar, P.; Borejdo, J.; Gryczynski, Z.; Gryczynski, I.

    2009-09-01

    Förster resonance energy transfer (FRET) can be utilized to achieve ultrashort fluorescence responses in time-domain fluorometry. In a poly(vinyl alcohol) matrix, the presence of 60 mM Rhodamine 800 acceptor shortens the fluorescence lifetime of a pyridine 1 donor to about 20 ps. Such a fast fluorescence response is very similar to the instrument response function (IRF) obtained using scattered excitation light. A solid fluorescent sample (e.g., a film) with a picosecond lifetime is ideal for IRF measurements and particularly useful for time-resolved microscopy. Avalanche photodiode detectors, commonly used in this field, feature color-dependent timing responses. We demonstrate that recording the fluorescence decay of the proposed FRET-based reference sample yields a better IRF approximation than the conventional light-scattering method and therefore avoids systematic errors in decay curve analysis.

  9. The e-MSWS-12: improving the multiple sclerosis walking scale using item response theory.

    PubMed

    Engelhard, Matthew M; Schmidt, Karen M; Engel, Casey E; Brenton, J Nicholas; Patek, Stephen D; Goldman, Myla D

    2016-12-01

    The Multiple Sclerosis Walking Scale (MSWS-12) is the predominant patient-reported measure of multiple sclerosis (MS)-related walking ability, yet it had not been analyzed using item response theory (IRT), the emerging standard for patient-reported outcome (PRO) validation. This study aims to reduce MSWS-12 measurement error and facilitate computerized adaptive testing by creating an IRT model of the MSWS-12 and distributing it online. MSWS-12 responses from 284 subjects with MS were collected by mail and used to fit and compare several IRT models. Following model selection and assessment, subpopulations based on age and sex were tested for differential item functioning (DIF). Model comparison favored a one-dimensional graded response model (GRM). This model met fit criteria and explained 87% of response variance. The performance of each MSWS-12 item was characterized using category response curves (CRCs) and item information. IRT-based MSWS-12 scores correlated with traditional MSWS-12 scores (r = 0.99) and timed 25-foot walk (T25FW) speed (r = -0.70). Item 2 showed DIF based on age (χ2 = 19.02, df = 5, p < 0.01), and Item 11 showed DIF based on sex (χ2 = 13.76, df = 5, p = 0.02). MSWS-12 measurement error depends on walking ability, but could be lowered by improving or replacing items with low information or DIF. The e-MSWS-12 includes IRT-based scoring, error checking, and an estimated T25FW derived from MSWS-12 responses. It is available at https://ms-irt.shinyapps.io/e-MSWS-12 .
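
    For readers unfamiliar with the graded response model mentioned above, the sketch below computes category response curves for a single polytomous item under a one-dimensional GRM. The discrimination and threshold values are hypothetical, not the fitted e-MSWS-12 parameters.

    ```python
    import numpy as np

    def grm_category_probs(theta, a, b):
        """Category response curves for one graded-response-model item.
        theta: latent trait values; a: discrimination; b: ordered thresholds."""
        theta = np.asarray(theta, dtype=float)[:, None]
        b = np.asarray(b, dtype=float)[None, :]
        # cumulative probabilities P*(k) = P(response >= category k), k = 1..K-1
        p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        # pad with P*(0) = 1 and P*(K) = 0, then difference to get category probabilities
        ones = np.ones((theta.shape[0], 1))
        zeros = np.zeros((theta.shape[0], 1))
        p_star = np.hstack([ones, p_star, zeros])
        return p_star[:, :-1] - p_star[:, 1:]

    theta = np.linspace(-4, 4, 161)
    probs = grm_category_probs(theta, a=1.8, b=[-1.5, -0.5, 0.4, 1.4])  # 5 response categories
    assert np.allclose(probs.sum(axis=1), 1.0)  # each row is a proper probability distribution
    ```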

  10. Image stretching on a curved surface to improve satellite gridding

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.

    1975-01-01

    A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.

  11. Comparison of the learning curves of digital examination and transabdominal sonography for the determination of fetal head position during labor.

    PubMed

    Rozenberg, P; Porcher, R; Salomon, L J; Boirot, F; Morin, C; Ville, Y

    2008-03-01

    To evaluate the learning curve of transabdominal sonography for the determination of fetal head position in labor and to compare it with that of digital vaginal examination. A student midwife who had never performed digital vaginal examination or ultrasound examination was recruited for this study. Instructions on how to perform digital vaginal examination and ultrasound examination were given before and after completing the first vaginal and ultrasound examinations, and repeated for each subsequent examination for as long as necessary. Digital and ultrasound diagnoses of the fetal head position were always performed first by the student midwife, and repeated by an experienced midwife or physician. The learning curve for identification of the fetal head position by either one of the two methods was analyzed using the cumulative sums (CUSUM) method for measurement errors. One hundred patients underwent digital vaginal examination and 99 had transabdominal sonography for the determination of fetal head position. An error rate of around 50% for vaginal examination was nearly constant during the first 50 examinations. It decreased subsequently, to stabilize at a low level from the 82nd patient. Errors of +/- 180 degrees were the most frequent. The learning curve for ultrasound imaging stabilized earlier than that of vaginal examination, after the 32nd patient. The most frequent errors with ultrasound examination were the inability to conclude on a diagnosis, particularly at the beginning of training, followed by errors of +/- 45 degrees. Based on our findings for the student tested, learning and accuracy of the determination of fetal head position in labor were easier and higher, respectively, with transabdominal sonography than with digital examination. This should encourage physicians to introduce clinical ultrasound examination into their practice. CUSUM charts provide a reliable representation of the learning curve, by accumulating evidence of performance. Copyright (c) 2008 ISUOG. Published by John Wiley & Sons, Ltd.
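
    As a rough illustration of the CUSUM idea used to analyze these learning curves, the sketch below accumulates observed-minus-acceptable error rates over a sequence of examinations; the acceptable error rate and the simulated trainee record are assumptions, not data from the study.

    ```python
    import numpy as np

    def cusum(outcomes, p0=0.10):
        """Cumulative sum of observed minus acceptable failure rate.
        outcomes: 1 = incorrect diagnosis, 0 = correct; p0 is an assumed
        acceptable error rate. A rising curve signals performance worse
        than p0; a flat or falling curve signals acceptable performance."""
        return np.cumsum(np.asarray(outcomes, dtype=float) - p0)

    # hypothetical trainee record: ~50% errors during early training, ~10% afterwards
    rng = np.random.default_rng(1)
    outcomes = np.concatenate([rng.random(50) < 0.5, rng.random(50) < 0.1]).astype(int)
    curve = cusum(outcomes)
    print(curve[:5], curve[-5:])
    ```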

  12. It Pays to Go Off-Track: Practicing with Error-Augmenting Haptic Feedback Facilitates Learning of a Curve-Tracing Task

    PubMed Central

    Williams, Camille K.; Tremblay, Luc; Carnahan, Heather

    2016-01-01

    Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937

  13. Curve fitting methods for solar radiation data modeling

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using these methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
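
    A minimal sketch of the kind of two-term Gaussian fit and goodness-of-fit calculation described above follows; the daily irradiance profile is synthetic, not the UTP measurement data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2(t, a1, b1, c1, a2, b2, c2):
        """Two-term Gaussian model (the 'gauss2' form used by common fitting tools)."""
        return a1 * np.exp(-((t - b1) / c1) ** 2) + a2 * np.exp(-((t - b2) / c2) ** 2)

    # synthetic daily irradiance profile: hour of day vs W/m^2 (stand-in for measurements)
    t = np.linspace(7, 19, 49)
    rng = np.random.default_rng(0)
    y = (700 * np.exp(-((t - 12.5) / 2.8) ** 2)
         + 150 * np.exp(-((t - 15.5) / 1.5) ** 2)
         + 20 * rng.normal(size=t.size))

    popt, _ = curve_fit(gauss2, t, y, p0=[700, 12, 3, 150, 16, 2], maxfev=20000)

    resid = y - gauss2(t, *popt)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"RMSE = {rmse:.1f} W/m^2, R2 = {r2:.3f}")
    ```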

  14. A compact presentation of DSN array telemetry performance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1982-01-01

    The telemetry performance of an arrayed receiver system, including radio losses, is often given by a family of curves giving bit error rate vs bit SNR, with tracking loop SNR at one receiver held constant along each curve. This study shows how to process this information into a more compact, useful format in which the minimal total signal power and optimal carrier suppression, for a given fixed bit error rate, are plotted vs data rate. Examples for baseband-only combining are given. When appropriate dimensionless variables are used for plotting, receiver arrays with different numbers of antennas and different threshold tracking loop bandwidths look much alike, and a universal curve for optimal carrier suppression emerges.

  15. Curve fitting methods for solar radiation data modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using these methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  16. Tirilazad mesylate protects stored erythrocytes against osmotic fragility.

    PubMed

    Epps, D E; Knechtel, T J; Bacznskyj, O; Decker, D; Guido, D M; Buxser, S E; Mathews, W R; Buffenbarger, S L; Lutzke, B S; McCall, J M

    1994-12-01

    The hypoosmotic lysis curve of freshly collected human erythrocytes is consistent with a single Gaussian error function with a mean of 46.5 +/- 0.25 mM NaCl and a standard deviation of 5.0 +/- 0.4 mM NaCl. After extended storage of RBCs under standard blood bank conditions, the lysis curve conforms to the sum of two error functions, rather than to a single error function with a shifted mean and broadened width. Thus, two distinct sub-populations with different fragilities are present instead of a single, broadly distributed population. One population is identical to the freshly collected erythrocytes, whereas the other population consists of osmotically fragile cells. The rate of generation of the new, osmotically fragile population of cells was used to probe the hypothesis that lipid peroxidation is responsible for the induction of membrane fragility. If so, then the antioxidant tirilazad mesylate (U-74,006f) should protect against this degradation of stored erythrocytes. We found that tirilazad mesylate, at 17 microM (1.5 mol% with respect to membrane lecithin), significantly retards the formation of the osmotically fragile RBCs. Concomitantly, the concentration of free hemoglobin which accumulates during storage is markedly reduced by the drug. Since the presence of the drug also decreases the amount of F2-isoprostanes formed during the storage period, an antioxidant mechanism must be operative. These results demonstrate that tirilazad mesylate significantly decreases the number of fragile erythrocytes formed during storage in the blood bank.
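
    To make the two-population description above concrete, the sketch below fits a fraction-lysed curve with a weighted sum of two Gaussian error functions; the concentrations, fractions and noise level are synthetic, not the study's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def lysis_one(c, mu, sigma):
        """Fraction of cells lysed vs. NaCl concentration c: a single Gaussian
        error function (cells lyse when tonicity falls below their threshold)."""
        return 0.5 * (1.0 - erf((c - mu) / (np.sqrt(2.0) * sigma)))

    def lysis_two(c, f, mu1, s1, mu2, s2):
        """Sum of two error functions: fraction f of normal cells plus
        (1 - f) of an osmotically fragile subpopulation."""
        return f * lysis_one(c, mu1, s1) + (1.0 - f) * lysis_one(c, mu2, s2)

    # synthetic stored-blood data: 70% normal cells, 30% fragile cells
    c = np.linspace(20, 90, 36)
    rng = np.random.default_rng(2)
    y = lysis_two(c, 0.7, 46.5, 5.0, 60.0, 6.0) + 0.01 * rng.normal(size=c.size)

    popt, _ = curve_fit(lysis_two, c, y, p0=[0.8, 45, 5, 65, 5],
                        bounds=([0, 30, 1, 40, 1], [1, 60, 15, 90, 15]))
    print(popt)   # recovered fraction and the two (mean, SD) pairs
    ```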

  17. FBEYE: Analyzing Kepler light curves and validating flares

    NASA Astrophysics Data System (ADS)

    Johnson, Emily; Davenport, James R. A.; Hawley, Suzanne L.

    2017-12-01

    FBEYE, the "Flares By-Eye" detection suite, is written in IDL and analyzes Kepler light curves and validates flares. It works on any 3-column light curve that contains time, flux, and error. The success of flare identification is highly dependent on the smoothing routine, which may not be suitable for all sources.

  18. Comparative study of some robust statistical methods: weighted, parametric, and nonparametric linear regression of HPLC convoluted peak responses using internal standard method in drug bioavailability studies.

    PubMed

    Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Ragab, Marwa A A

    2013-05-01

    This manuscript discusses the application of, and comparison between, three statistical regression methods for handling data: parametric, nonparametric, and weighted regression (WR). These data were obtained from different chemometric methods applied to high-performance liquid chromatography response data using the internal standard method. This was performed on the model drug Acyclovir, which was analyzed in human plasma with ganciclovir as internal standard. An in vivo study was also performed. Derivative treatment of the chromatographic response ratio data was followed by convolution of the resulting derivative curves using 8-point sin xi polynomials (discrete Fourier functions). This work studies and compares the application of the WR method and Theil's method, a nonparametric regression (NPR) method, with the least squares parametric regression (LSPR) method, which is considered the de facto standard regression method. When the assumption of homoscedasticity is not met for analytical data, a simple and effective way to counteract the undue influence of the high concentrations on the fitted regression line is to use the WR method. WR was found to be superior to LSPR because the former assumes that the y-direction error in the calibration curve increases as x increases. Theil's NPR method was also found to be superior to LSPR because it assumes that errors can occur in both the x- and y-directions and might not be normally distributed. Most of the results showed a significant improvement in precision and accuracy on applying the WR and NPR methods relative to LSPR.
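
    The contrast between the three regression approaches can be sketched on a synthetic heteroscedastic calibration set as below; the 1/x^2 weighting scheme and the data are assumptions for illustration, not the paper's chromatographic data or its exact weighting.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)        # concentration
    y = 0.12 * x + 0.02 + rng.normal(scale=0.03 * x)             # response ratio, error grows with x

    # least squares parametric regression (LSPR)
    ols = stats.linregress(x, y)

    # weighted regression (WR) with assumed 1/x^2 weights
    w = 1.0 / x ** 2
    X = np.column_stack([x, np.ones_like(x)])
    beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

    # Theil's nonparametric regression (median of pairwise slopes)
    slope_ts, intercept_ts, lo, hi = stats.theilslopes(y, x)

    print("LSPR :", ols.slope, ols.intercept)
    print("WR   :", beta_wls)
    print("Theil:", slope_ts, intercept_ts)
    ```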

  19. Effect of Slice Error of Glass on Zero Offset of Capacitive Accelerometer

    NASA Astrophysics Data System (ADS)

    Hao, R.; Yu, H. J.; Zhou, W.; Peng, B.; Guo, J.

    2018-03-01

    The packaging process of a capacitive accelerometer was studied. Silicon-glass bonding was adopted for the sensor chip and glass, and the sensor chip and glass were adhered to a ceramic substrate. The three-layer structure curved due to thermal mismatch, and the slice error of the glass led to asymmetrical curvature of the sensor chip. Thus, the sensitive mass of the accelerometer deviated along the sensitive direction, which caused zero offset drift. It was therefore meaningful to confirm the influence of the slice error of the glass; the simulation results showed that the zero output drift was 12.3×10-3 m/s2 when the deviation was 40 μm.

  20. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
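
    The degree-escalation loop described above can be sketched as follows; this uses numpy's ordinary polynomial fitting rather than AKLSQF's orthogonal factorial polynomials and Stirling-number reduction, and the data and tolerance are arbitrary.

    ```python
    import numpy as np

    def fit_to_tolerance(x, y, tol, max_degree=100):
        """Raise the polynomial degree until the least-squares RMS fit error
        drops below tol, reporting the degree, coefficients and error."""
        for deg in range(1, max_degree + 1):
            coeffs = np.polynomial.polynomial.polyfit(x, y, deg)
            fit = np.polynomial.polynomial.polyval(x, coeffs)
            err = np.sqrt(np.mean((fit - y) ** 2))
            if err <= tol:
                break
        return deg, coeffs, err

    x = np.linspace(0.0, 1.0, 21)          # uniformly spaced data
    y = np.sin(2 * np.pi * x)
    deg, coeffs, err = fit_to_tolerance(x, y, tol=1e-3)
    print(deg, err)
    ```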

  1. Scoring Methods in the International Land Model Benchmarking (ILAMB) Package

    NASA Astrophysics Data System (ADS)

    Collier, N.; Hoffman, F. M.; Keppel-Aleks, G.; Lawrence, D. M.; Mu, M.; Riley, W. J.; Randerson, J. T.

    2017-12-01

    The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of the land component of Earth system models. This effort is disseminated in the form of a python package which is openly developed (https://bitbucket.org/ncollier/ilamb). ILAMB is more than a workflow system that automates the generation of common scalars and plot comparisons to observational data. We aim to provide scientists and model developers with a tool to gain insight into model behavior. Thus, a salient feature of the ILAMB package is our synthesis methodology, which provides users with a high-level understanding of model performance. Within ILAMB, we calculate a non-dimensional score of a model's performance in a given dimension of the physics, chemistry, or biology with respect to an observational dataset. For example, we compare the Fluxnet-MTE Gross Primary Productivity (GPP) product against model output in the corresponding historical period. We compute common statistics such as the bias, root mean squared error, phase shift, and spatial distribution. We take these measures and find relative errors by normalizing the values, and then use the exponential to map this relative error to the unit interval. This allows for the scores to be combined into an overall score representing multiple aspects of model performance. In this presentation we give details of this process as well as a proposal for tuning the exponential mapping to make scores more cross comparable. However, as many models are calibrated using these scalar measures with respect to observational datasets, we also score the relationships among relevant variables in the model. For example, in the case of GPP, we also consider its relationship to precipitation, evapotranspiration, and temperature. We do this by creating a mean response curve and a two-dimensional distribution based on the observational data and model results. The response curves are then scored using a relative measure of the root mean squared error and the exponential as before. The distributions are scored using the so-called Hellinger distance, a statistical measure for how well one distribution is represented by another, and included in the model's overall score.
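
    The exponential mapping from relative error to a unit-interval score described above can be sketched as follows; the normalization of the relative error and the sharpness parameter alpha are simplified assumptions, not the exact ILAMB scoring code.

    ```python
    import numpy as np

    def score_from_relative_error(model, obs, alpha=1.0):
        """Map a relative RMSE onto (0, 1] with the exponential, in the spirit of
        ILAMB scoring; alpha is a tunable sharpness factor (the 'tuning of the
        exponential mapping' mentioned in the abstract)."""
        rel_err = np.sqrt(np.mean((model - obs) ** 2)) / np.sqrt(np.mean(obs ** 2))
        return np.exp(-alpha * rel_err)

    obs   = np.array([2.1, 2.4, 3.0, 2.8, 2.2])   # e.g. observed GPP, arbitrary units
    model = np.array([1.9, 2.6, 2.7, 3.1, 2.0])
    print(score_from_relative_error(model, obs))  # 1 = perfect agreement, -> 0 as error grows
    ```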

  2. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-Optic Gyroscope (FOG) scale factor nonlinear error will result in errors in Strapdown Inertial Navigation System (SINS). In order to reduce nonlinear error of FOG scale factor in SINS, a compensation method is proposed in this paper based on curve piecewise fitting of FOG output. Firstly, reasons which can result in FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then we introduce the method to divide the output range of FOG into several small pieces, and curve fitting is performed in each output range of FOG to obtain scale factor parameter. Different scale factor parameters of FOG are used in different pieces to improve FOG output precision. These parameters are identified by using three-axis turntable, and nonlinear error of FOG scale factor can be reduced. Finally, three-axis swing experiment of SINS verifies that the proposed method can reduce attitude output errors of SINS by compensating the nonlinear error of FOG scale factor and improve the precision of navigation. The results of experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of FOG scale factor with slightly increased computation complexity. This method can be used in inertial technology based on FOG to improve precision.

  3. Research on the measurement of the ultraviolet irradiance in the xenon lamp aging test chamber

    NASA Astrophysics Data System (ADS)

    Ji, Muyao; Li, Tiecheng; Lin, Fangsheng; Yin, Dejin; Cheng, Weihai; Huang, Biyong; Lai, Lei; Xia, Ming

    2018-01-01

    This paper briefly introduces methods of calibrating the irradiance in a xenon lamp aging test chamber, focusing mainly on the irradiance in the ultraviolet region. Three different detectors, whose response wavelength ranges are UVA (320-400 nm), UVB (275-330 nm) and UVA+B (280-400 nm) respectively, are used in the experiment. By comparing the measurement results obtained with the different detectors under the same xenon lamp source, we discuss the differences between UVA, UVB and UVA+B on the basis of the spectrum of the xenon lamp and the response curves of the detectors. We also point out possible error sources when using these detectors to calibrate the chamber.

  4. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
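
    A minimal sketch of the rating-curve-plus-ARMA(1,2)-errors idea, using statsmodels' state-space SARIMAX with log flow as an exogenous regressor, is shown below; the series are synthetic and the model orders simply follow the abstract.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    log_q = 3.0 + np.cumsum(rng.normal(scale=0.1, size=n))       # synthetic log streamflow

    # synthetic log turbidity = linear rating curve + ARMA(1,2)-like residuals
    eps = rng.normal(scale=0.2, size=n)
    u = np.zeros(n)
    for t in range(2, n):
        u[t] = 0.7 * u[t - 1] + eps[t] + 0.3 * eps[t - 1] + 0.1 * eps[t - 2]
    log_turb = 0.5 + 1.2 * log_q + u

    # linear flow-turbidity regression with ARMA(1,2) errors
    exog = sm.add_constant(log_q)
    res = sm.tsa.SARIMAX(log_turb, exog=exog, order=(1, 0, 2)).fit(disp=False)
    print(res.params)   # intercept, slope, AR(1), MA(1), MA(2), residual variance

    # a 30-step forecast would supply forecast flows as future exogenous values, e.g.
    # res.get_forecast(steps=30, exog=future_exog)   # future_exog is hypothetical here
    ```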

  5. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.

  6. Atlas of the Light Curves and Phase Plane Portraits of Selected Long-Period Variables

    NASA Astrophysics Data System (ADS)

    Kudashkina, L. S.; Andronov, I. L.

    2017-12-01

    For a group of Mira-type stars, semi-regular variables and some RV Tau-type stars, the limit cycles were computed and plotted using phase plane diagrams. As the generalized coordinates x and x', we have used φ, the brightness of the star, and its phase derivative. We used mean phase light curves based on observations of various authors from the AAVSO, AFOEV, VSOLJ and ASAS databases, approximated using a trigonometric polynomial of statistically optimal degree. For a simple sine-like light curve, the limit cycle is a simple ellipse. For a more complicated light curve, in which harmonics are statistically significant, the limit cycle deviates from an ellipse. In addition to a classical analysis, we use the error estimates of the smoothing function and its derivative to constrain an "error corridor" in the phase plane.

  7. Development of a time-stepping sediment budget model for assessing land use impacts in large river basins.

    PubMed

    Wilkinson, S N; Dougall, C; Kinsey-Henderson, A E; Searle, R D; Ellis, R J; Bartley, R

    2014-01-15

    The use of river basin modelling to guide mitigation of non-point source pollution of wetlands, estuaries and coastal waters has become widespread. To assess and simulate the impacts of alternate land use or climate scenarios on river washload requires modelling techniques that represent sediment sources and transport at the time scales of system response. Building on the mean-annual SedNet model, we propose a new D-SedNet model which constructs daily budgets of fine sediment sources, transport and deposition for each link in a river network. Erosion rates (hillslope, gully and streambank erosion) and fine sediment sinks (floodplains and reservoirs) are disaggregated from mean annual rates based on daily rainfall and runoff. The model is evaluated in the Burdekin basin in tropical Australia, where policy targets have been set for reducing sediment and nutrient loads to the Great Barrier Reef (GBR) lagoon from grazing and cropping land. D-SedNet predicted annual loads with similar performance to that of a sediment rating curve calibrated to monitored suspended sediment concentrations. Relative to a 22-year reference load time series at the basin outlet derived from a dynamic general additive model based on monitoring data, D-SedNet had a median absolute error of 68% compared with 112% for the rating curve. RMS error was slightly higher for D-SedNet than for the rating curve due to large relative errors on small loads in several drought years. This accuracy is similar to existing agricultural system models used in arable or humid environments. Predicted river loads were sensitive to ground vegetation cover. We conclude that the river network sediment budget model provides some capacity for predicting load time-series independent of monitoring data in ungauged basins, and for evaluating the impact of land management on river sediment load time-series, which is challenging across large regions in data-poor environments. © 2013. Published by Elsevier B.V. All rights reserved.

  8. The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates

    ERIC Educational Resources Information Center

    Sivo, Stephen; Fan, Xitao; Witta, Lea

    2005-01-01

    The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…

  9. Effects of tooth profile modification on dynamic responses of a high speed gear-rotor-bearing system

    NASA Astrophysics Data System (ADS)

    Hu, Zehua; Tang, Jinyuan; Zhong, Jue; Chen, Siyu; Yan, Haiyan

    2016-08-01

    A finite element node dynamic model of a high speed gear-rotor-bearing system considering the time-varying mesh stiffness, backlash, gyroscopic effect and transmission error excitation is developed. Different tooth profile modifications are introduced into the gear pair and corresponding time-varying mesh stiffness curves are obtained. Effects of the tooth profile modification on mesh stiffness are analyzed, and the natural frequencies and mode shapes of the gear-rotor-bearing transmission system are given. The dynamic responses with respect to a wide input speed region including dynamic factor, vibration amplitude near the bearing and dynamic transmission error are obtained by introducing the time-varying mesh stiffness in different tooth profile modification cases into the gear-rotor-bearing dynamic system. Effects of the tooth profile modification on the dynamic responses are studied in detail. The numerical simulation results show that both the short profile modification and the long profile modification can affect the mutation of the mesh stiffness when the number of engaging tooth pairs changes. A short profile modification with an appropriate modification amount can improve the dynamic property of the system in certain work condition.

  10. Response effects in the perception of conjunctions of colour and form.

    PubMed

    Chmiel, N

    1989-01-01

    Two experiments addressed the question whether visual search for a target defined by a conjunction of colour and form requires a central, serial, attentional process, but detection of a single feature, such as colour, is preattentive, as proposed by the feature-integration theory of attention. Experiment 1 investigated conjunction and feature search using small array sizes of up to five elements, under conditions which precluded eye-movements, in contrast to previous studies. The results were consistent with the theory. Conjunction search showed the effect of adding distractors to the display, the slopes of the curves relating RT to array size were in the approximate ratio of 2:1, consistent with a central, serial search process, exhaustive for absence responses and self-terminating for presence responses. Feature search showed no significant effect of distractors for presence responses. Experiment 2 manipulated the response requirements in conjunction search, using vocal response in a GO-NO GO procedure, in contrast to Experiment 1, which used key-press responses in a YES-NO procedure. Strikingly, presence-response RT was not affected significantly by the number of distractors in the array. The slope relating RT to array size was 3.92. The absence RT slope was 30.56, producing a slope ratio of approximately 8:1. There was no interaction of errors with array size and the presence and absence conditions, implying that RT-error trade-offs did not produce this slope ratio. This result suggests that feature-integration theory is at least incomplete.

  11. Brunn: an open source laboratory information system for microplates with a graphical plate layout design process.

    PubMed

    Alvarsson, Jonathan; Andersson, Claes; Spjuth, Ola; Larsson, Rolf; Wikberg, Jarl E S

    2011-05-20

    Compound profiling and drug screening generate large amounts of data and are generally based on microplate assays. Current information systems used for handling this are mainly commercial, closed source, expensive, and heavyweight, and there is a need for a flexible, lightweight, open system for handling plate design and the validation and preparation of data. A Bioclipse plugin consisting of a client part and a relational database was constructed. A multiple-step plate layout point-and-click interface was implemented inside Bioclipse. The system contains a data validation step, where outliers can be removed, and finally a plate report with all relevant calculated data, including dose-response curves. Brunn is capable of handling the data from microplate assays. It can create dose-response curves and calculate IC50 values. Using a system of this sort facilitates work in the laboratory. Being able to reuse already constructed plates and plate layouts by starting out from an earlier step in the plate layout design process saves time and cuts down on error sources.
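
    The IC50 calculation mentioned above typically amounts to fitting a four-parameter logistic dose-response curve; a minimal sketch with hypothetical survival data follows (an illustration of the standard calculation, not Brunn's actual code).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, top, bottom, ic50, hill):
        """Four-parameter logistic dose-response model."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    # hypothetical survival index (%) vs compound concentration (uM)
    conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
    surv = np.array([98, 95, 90, 72, 45, 22, 10, 6], dtype=float)

    popt, _ = curve_fit(four_pl, conc, surv, p0=[100.0, 5.0, 1.0, 1.0])
    top, bottom, ic50, hill = popt
    print(f"IC50 = {ic50:.2f} uM, Hill slope = {hill:.2f}")
    ```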

  12. Determination of suitable drying curve model for bread moisture loss during baking

    NASA Astrophysics Data System (ADS)

    Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.

    2013-03-01

    This study presents mathematical modelling of bread moisture loss (drying) during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error, which were obtained by nonlinear regression analysis. Consequently, of all the drying models, a Page model was selected as the best one, according to the correlation coefficient, chi-squared and root mean square error values and its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis for the natural and forced convection modes was 2.43 and 4.74%, respectively.
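
    A minimal sketch of fitting the selected Page model and computing the usual goodness-of-fit statistics is shown below; the moisture-ratio values and times are synthetic stand-ins for the baking measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def page(t, k, n):
        """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
        return np.exp(-k * np.power(t, n))

    # synthetic moisture-ratio data during baking (time in minutes)
    t = np.array([2, 4, 6, 8, 10, 14, 18, 22, 26], dtype=float)
    mr = np.array([0.93, 0.84, 0.75, 0.66, 0.58, 0.44, 0.33, 0.25, 0.18])

    popt, _ = curve_fit(page, t, mr, p0=[0.05, 1.2], bounds=([1e-6, 0.1], [1.0, 3.0]))
    resid = mr - page(t, *popt)
    rmse = np.sqrt(np.mean(resid ** 2))
    chi2 = np.sum(resid ** 2) / (len(t) - len(popt))        # reduced chi-squared analogue
    r2 = 1 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
    print(popt, rmse, chi2, r2)
    ```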

  13. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehnke, E McKenzie; DeMarco, J; Steers, J

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  14. Modeling streamflow from coupled airborne laser scanning and acoustic Doppler current profiler data

    USGS Publications Warehouse

    Norris, Lam; Kean, Jason W.; Lyon, Steve

    2016-01-01

    The rating curve enables the translation of water depth into stream discharge through a reference cross-section. This study investigates coupling national scale airborne laser scanning (ALS) and acoustic Doppler current profiler (ADCP) bathymetric survey data for generating stream rating curves. A digital terrain model was defined from these data and applied in a physically based 1-D hydraulic model to generate rating curves for a regularly monitored location in northern Sweden. Analysis of the ALS data showed that overestimation of the streambank elevation could be adjusted with a root mean square error (RMSE) block adjustment using a higher accuracy manual topographic survey. The results of our study demonstrate that the rating curve generated from the vertically corrected ALS data combined with ADCP data had lower errors (RMSE = 0.79 m3/s) than the empirical rating curve (RMSE = 1.13 m3/s) when compared to streamflow measurements. We consider these findings encouraging as hydrometric agencies can potentially leverage national-scale ALS and ADCP instrumentation to reduce the cost and effort required for maintaining and establishing rating curves at gauging station sites similar to the Röån River.

  15. Evaluation of microwave landing system approaches in a wide-body transport simulator

    NASA Technical Reports Server (NTRS)

    Summers, L. G.; Feather, J. B.

    1992-01-01

    The objective of this study was to determine the suitability of flying complex curved approaches using the microwave landing system (MLS) with a wide-body transport aircraft. Fifty pilots in crews of two participated in the evaluation using a fixed-base simulator that emulated an MD-11 aircraft. Five approaches, consisting of one straight-in approach and four curved approaches, were flown by the pilots using a flight director. The test variables include the following: (1) manual and autothrottles; (2) wind direction; and (3) type of navigation display. The navigation display was either a map or a horizontal situation indicator (HSI). A complex wind that changed direction and speed with altitude, and included moderate turbulence, was used. Visibility conditions were Cat 1 or better. Subjective test data included pilot responses to questionnaires and pilot comments. Objective performance data included tracking accuracy, position error at decision height, and control activity. Results of the evaluation indicate that flying curved MLS approaches with a wide-body transport aircraft is operationally acceptable, depending upon the length of the final straight segment and the complexity of the approach.

  16. The Regulus occultation light curve and the real atmosphere of Venus

    NASA Technical Reports Server (NTRS)

    Veverka, J.; Wasserman, L.

    1974-01-01

    An inversion of the light curve observed during the July 7, 1959, occultation of Regulus by Venus leads to the conclusion that the light curve cannot be reconciled with models of the Venus atmosphere based on spacecraft observations. The event occurred in daylight and, under the consequently difficult observing conditions, it seems likely that the Regulus occultation light curve is marred by systematic errors in spite of the competence of the observers involved.

  17. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    PubMed

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.

  18. Complete characterization of the spasing (L-L) curve of a three-level quantum coherence enhanced spaser for design optimization

    NASA Astrophysics Data System (ADS)

    Kumarapperuma, Lakshitha; Premaratne, Malin; Jha, Pankaj K.; Stockman, Mark I.; Agrawal, Govind P.

    2018-05-01

    We demonstrate that it is possible to derive an approximate analytical expression to characterize the spasing (L-L) curve of a coherently enhanced spaser with 3-level gain-medium chromophores. The utility of this solution stems from the fact that it enables optimization of the large parameter space associated with spaser design, a functionality not offered by the methods currently available in the literature. This is vital for the advancement of spaser technology towards the level of device realization. Owing to the compact nature of the analytical expressions, our solution also facilitates the grouping and identification of key processes responsible for the spasing action, whilst providing significant physical insights. Furthermore, we show that our expression generates results within 0.1% error compared to numerically obtained results for pumping rates higher than the spasing threshold, thereby drastically reducing the computational cost associated with spaser design.

  19. Pitfalls of inferring annual mortality from inspection of published survival curves.

    PubMed

    Singer, R B

    1994-01-01

    In many FU articles currently published, results are given primarily in the form of graphs of survival curves, rather than in the form of life table data. Sometimes the authors may comment on the slope of the survival curve as though it were equal to the annual mortality rate (after reversal of the minus sign to a plus sign). Even if no comment of this sort is made, medical directors and underwriters may be tempted to think along similar lines in trying to interpret the significance of the survival curve in terms of mortality. However, it is a very serious error of life table methodology to conceive of the mortality rate as equal to the negative slope of the survival curve. The nature of the error is demonstrated in this article. An annual mortality rate derived from the survival curve actually depends on two variables: a quotient with the negative slope (sign reversed), delta P/delta t, as the numerator, and the survival rate, P, itself as the denominator. The implications of this relationship are discussed. If there are two "parallel" survival curves with the same slope at a given time duration, the lower curve will have a higher mortality rate than the upper curve. A constant slope with increasing duration means that the annual mortality rate also increases with duration. Some characteristics of high initial mortality are also discussed and their relation to different units of FU time.(ABSTRACT TRUNCATED AT 250 WORDS)
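
    A tiny worked example of the relationship described above, annual mortality q = -(delta P/delta t)/P rather than the slope alone, is sketched below with hypothetical survival proportions read off a curve at yearly intervals.

    ```python
    import numpy as np

    # hypothetical survival proportions at yearly intervals (a straight-line survival curve)
    years = np.arange(6)
    P = np.array([1.00, 0.90, 0.80, 0.70, 0.60, 0.50])

    slope = -np.diff(P)             # -(delta P / delta t); constant at 0.10 per year here
    annual_q = slope / P[:-1]       # q = -(dP/dt) / P, the interval mortality rate

    for t, s, q in zip(years[:-1], slope, annual_q):
        print(f"year {t}-{t + 1}: slope = {s:.2f}/yr, annual mortality q = {q:.3f}")
    # the slope never changes, yet q rises from 0.100 to 0.167 as P falls --
    # the article's point about reading mortality straight off the slope
    ```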

  20. Estimating Dense Cardiac 3D Motion Using Sparse 2D Tagged MRI Cross-sections*

    PubMed Central

    Ardekani, Siamak; Gunter, Geoffrey; Jain, Saurabh; Weiss, Robert G.; Miller, Michael I.; Younes, Laurent

    2015-01-01

    In this work, we describe a new method, an extension of the Large Deformation Diffeomorphic Metric Mapping to estimate three-dimensional deformation of tagged Magnetic Resonance Imaging Data. Our approach relies on performing non-rigid registration of tag planes that were constructed from set of initial reference short axis tag grids to a set of deformed tag curves. We validated our algorithm using in-vivo tagged images of normal mice. The mapping allows us to compute root mean square distance error between simulated tag curves in a set of long axis image planes and the acquired tag curves in the same plane. Average RMS error was 0.31±0.36(SD) mm, which is approximately 2.5 voxels, indicating good matching accuracy. PMID:25571140

  1. A mathematical approach to beam matching

    PubMed Central

    Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N

    2013-01-01

    Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10×10 cm2 beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated in space with respect to dose for a first and second derivative to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE at 71.4% and 70.8% of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user merely needs to generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the qualitative evaluation of beam matching between linear accelerators. PMID:23995874

  2. Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system. [White Sands, New Mexico

    NASA Technical Reports Server (NTRS)

    Palmer, J. M. (Principal Investigator); Slater, P. N.

    1984-01-01

    The newly built Caste spectropolarimeters gave satisfactory performance during tests in the solar radiometer and helicopter modes. A bandwidth normalization technique based on analysis of the moments of the spectral responsivity curves was used to analyze the spectral bands of the MSS and TM subsystems of LANDSAT 4 and 5 satellites. Results include the effective wavelength, the bandpass, the wavelength limits, and the normalized responsivity for each spectral channel. Temperature coefficients for TM PF channel 6 were also derived. The moments normalization method used yields sensor parameters whose derivation is independent of source characteristics (i.e., incident solar spectral irradiance, atmospheric transmittance, or ground reflectance). The errors expected using these parameters are lower than those expected using other normalization methods.
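
    A rough sketch of the moments-based normalization described above follows: the effective wavelength is taken as the first moment of the spectral responsivity and an equivalent bandpass from its second central moment. The responsivity curve is synthetic, the wavelength grid is assumed uniform, and the sqrt(12·variance) bandpass convention is an assumption for the example, not necessarily the authors' exact definition.

    ```python
    import numpy as np

    def moments_normalization(wl, resp):
        """Effective wavelength and equivalent bandpass from the moments of a
        spectral responsivity curve (independent of any source spectrum);
        assumes a uniformly spaced wavelength grid."""
        area = np.sum(resp)
        lam_eff = np.sum(wl * resp) / area                  # first moment
        var = np.sum((wl - lam_eff) ** 2 * resp) / area     # second central moment
        bandpass = np.sqrt(12.0 * var)   # width of an equal-variance rectangular band
        return lam_eff, bandpass, resp / area

    # synthetic flat-topped responsivity curve, loosely resembling a visible band (micrometres)
    wl = np.linspace(0.50, 0.65, 151)
    resp = np.exp(-((wl - 0.57) / 0.03) ** 4)
    lam_eff, bandpass, resp_norm = moments_normalization(wl, resp)
    print(f"effective wavelength = {lam_eff:.4f} um, bandpass = {bandpass:.4f} um")
    ```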

  3. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in TFT measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4] which in turn amplify curve fitting errors in the [TSH] domain in the lower [FT4] range, (5) memory effects (rate-independent hysteresis effect). When the main uncertainties in thyroid function tests (TFT) are identified and analyzed, we can find the most acceptable model space with which we can construct the best HP function and the related set point area.

  4. A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface

    NASA Astrophysics Data System (ADS)

    Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo

    2016-09-01

    The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from the staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; at the same time, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve the solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.

  5. Dealing with non-unique and non-monotonic response in particle sizing instruments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil

    2017-04-01

    A number of instruments used as de-facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle. This is due to non-unique or non-monotonic response functions. Optical particle counters have non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles where the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed area depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix which specifies the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations. Also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues. We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then we use Markov Chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances - hence providing a best guess and an uncertainty for the size distribution which includes contributions from the non-unique response curve and counting statistics, and can propagate calibration uncertainties.

  6. SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A

    2014-06-01

    Purpose: To evaluate uncertainties of cell surviving fraction reconstructed from tumor-volume variation curves during radiation therapy using sensitivity analysis based on linear perturbation theory. Methods: The time dependent tumor-volume functions V(t) have been calculated using a two-level cell population model which is based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable and lethally damaged cells. The sensitivity function is defined as S(t)=[δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor-volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor volume variation curves and sensitivity functions have been computed for different values of cell surviving fractions from the practically important interval S2=0.1-0.7 using the two-level cell population model. Results: The sensitivity functions of tumor-volume to cell surviving fractions achieved a relatively large value of 2.7 for S2=0.7 and then approached zero as S2 approaches zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S2 is less than 20% in the range S2=0.4-0.7. This result is important because the large values of S2 are associated with poor treatment outcome and should be measured with relatively small uncertainties. For the very small values of S2<0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for evaluation of the cell surviving fractions usually observed in radiation therapy with conventional fractionation.

  7. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  8. Comparison of dose response functions for EBT3 model GafChromic™ film dosimetry system.

    PubMed

    Aldelaijan, Saad; Devic, Slobodan

    2018-05-01

    Different dose response functions of the EBT3 model GafChromic™ film dosimetry system have been compared in terms of sensitivity as well as uncertainty vs. error analysis. We also assessed the necessity of scanning film pieces before and after irradiation. Pieces of the EBT3 film model were irradiated to different dose values in a Solid Water (SW) phantom. Based on images scanned in both reflection and transmission mode before and after irradiation, twelve different response functions were calculated. For every response function, a reference radiochromic film dosimetry system was established by generating a calibration curve and performing the error vs. uncertainty analysis. Response functions using pixel values from the green channel demonstrated the highest sensitivity in both transmission and reflection mode. All functions were successfully fitted with a rational functional form and provided an overall one-sigma uncertainty of better than 2% for doses above 2 Gy. Use of pre-scanned images to calculate response functions resulted in negligible improvement in dose measurement accuracy. Although the reflection scanning mode provides higher sensitivity and could lead to a more widespread use of radiochromic film dosimetry, it has a fairly limited dose range and slightly increased uncertainty when compared to transmission-scan-based response functions. The double-scanning technique, in either transmission or reflection mode, shows negligible improvement in dose accuracy as well as a negligible increase in dose uncertainty. The normalized pixel value of images scanned in transmission mode shows a linear response in a dose range of up to 11 Gy. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
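
    A hedged sketch of how one such calibration might be built and inverted: a made-up set of dose/green-channel response points is fitted with a simple rational form using scipy, and the fit covariance gives the parameter uncertainties. The functional form and the data are assumptions for illustration, not the twelve response functions compared in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      # Illustrative calibration data: dose (Gy) and green-channel net response
      # (e.g. netOD or normalized pixel-value change); values are made up.
      dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
      resp = np.array([0.00, 0.06, 0.11, 0.19, 0.31, 0.39, 0.45, 0.50])

      def rational_form(x, a, b):
          """Simple rational response function, dose -> response (assumed form)."""
          return a * x / (1.0 + b * x)

      popt, pcov = curve_fit(rational_form, dose, resp, p0=(0.1, 0.1))
      perr = np.sqrt(np.diag(pcov))

      def response_to_dose(r, a, b):
          """Invert the fitted rational form to convert a measured response into dose."""
          return r / (a - b * r)

      print("fit a, b:", popt.round(4), "+/-", perr.round(4))
      print("dose for a response of 0.25:", round(response_to_dose(0.25, *popt), 2), "Gy")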

  9. The effects of moderate alcohol concentrations on driving and cognitive performance during ascending and descending blood alcohol concentrations.

    PubMed

    Starkey, Nicola J; Charlton, Samuel G

    2014-07-01

    Alcohol has an adverse effect on driving performance; however, the effects of moderate doses on different aspects of the driving task are inconsistent and differ across the intoxication curve. This research aimed to investigate driving and cognitive performance asymmetries (acute tolerance and acute protracted error) accompanying the onset and recovery from moderate alcohol consumption. Sixty-one participants received a placebo, medium (target blood alcohol concentration [BAC] 0.05 mg/ml) or high (target BAC 0.08 mg/ml) dose of alcohol. Participants completed a simulated drive, cognitive tests and subjective rating scales five times over a 3.5 h period. When ascending and descending BACs (0.05 and 0.09 mg/ml) were compared participants' self-ratings of intoxication and willingness to drive showed acute tolerance. Acute protracted errors were observed for response speed, maze learning errors, time exceeding the speed limit and exaggerated steering responses to hazards. Participants' estimates of their level of intoxication were poorly related to their actual BAC levels (and hence degree of impairment), and various aspects of driving and cognitive performance worsened during descending BACs. This indicates that drivers are not good at judging their fitness to drive after drinking only moderate amounts of alcohol and suggests an important focus for public education regarding alcohol and driving. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Autoimmunity: a decision theory model.

    PubMed Central

    Morris, J A

    1987-01-01

    Concepts from statistical decision theory were used to analyse the detection problem faced by the body's immune system in mounting immune responses to bacteria of the normal body flora. Given that these bacteria are potentially harmful, that there can be extensive cross reaction between bacterial antigens and host tissues, and that the decisions are made in uncertainty, there is a finite chance of error in immune response leading to autoimmune disease. A model of ageing in the immune system is proposed that is based on random decay in components of the decision process, leading to a steep age dependent increase in the probability of error. The age incidence of those autoimmune diseases which peak in early and middle life can be explained as the resultant of two processes: an exponentially falling curve of incidence of first contact with common bacteria, and a rapidly rising error function. Epidemiological data on the variation of incidence with social class, sibship order, climate and culture can be used to predict the likely site of carriage and mode of spread of the causative bacteria. Furthermore, those autoimmune diseases precipitated by common viral respiratory tract infections might represent reactions to nasopharyngeal bacterial overgrowth, and this theory can be tested using monoclonal antibodies to search the bacterial isolates for cross reacting antigens. If this model is correct then prevention of autoimmune disease by early exposure to low doses of bacteria might be possible. PMID:3818985

  11. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model output is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
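
    A small illustration of Tikhonov regularization with an L-curve-based choice of the regularization parameter, on a toy ill-posed problem (the GRACE-scale Lanczos bidiagonalization is not reproduced here). The problem size, noise level and corner-picking heuristic are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy ill-posed linear problem A x = b (rapidly decaying singular values,
      # noisy data), standing in for the GRACE gravity inverse problem.
      n = 50
      U, _ = np.linalg.qr(rng.normal(size=(n, n)))
      V, _ = np.linalg.qr(rng.normal(size=(n, n)))
      sv = 10.0 ** np.linspace(0, -8, n)
      A = U @ np.diag(sv) @ V.T
      x_true = np.sin(np.linspace(0, 3 * np.pi, n))
      b = A @ x_true + 1e-6 * rng.normal(size=n)

      def tikhonov(alpha):
          """Solve min ||A x - b||^2 + alpha^2 ||x||^2."""
          return np.linalg.solve(A.T @ A + alpha ** 2 * np.eye(n), A.T @ b)

      # Sample the L-curve: log residual norm vs log solution norm over alpha.
      alphas = 10.0 ** np.linspace(-10, 0, 40)
      res_norm = np.array([np.log(np.linalg.norm(A @ tikhonov(a) - b)) for a in alphas])
      sol_norm = np.array([np.log(np.linalg.norm(tikhonov(a))) for a in alphas])

      # Crude corner pick: rescale both axes to [0, 1] and take the point closest
      # to the lower-left corner of the L-curve.
      r = (res_norm - res_norm.min()) / (res_norm.max() - res_norm.min())
      s = (sol_norm - sol_norm.min()) / (sol_norm.max() - sol_norm.min())
      alpha_best = alphas[np.argmin(r ** 2 + s ** 2)]

      x_plain = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized solution
      x_reg = tikhonov(alpha_best)
      print("chosen alpha:", alpha_best)
      print("error without regularization:", np.linalg.norm(x_plain - x_true))
      print("error with regularization:   ", np.linalg.norm(x_reg - x_true))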

  12. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components, but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date, no detailed analysis of the error magnitude has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the software Custom Sound EP (Cochlear). N1P1 error approximation from non-averaged raw data consisting of recorded single sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and on the amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. The single-point error showed a smaller N1P1 error and better coincidence with the 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of the N1P1 amplitude should be accompanied by an indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied, so the recording contains only the switch-on artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
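
    A sketch of the single-point error idea on synthetic sweeps: the standard error of the averaged curve at the assumed N1 and P1 latencies scales as 1/√N and is compared with a maximum-minimum style estimate taken from a response-free portion of the averaged curve. The waveform, latencies and noise level are illustrative assumptions, not the clinical recordings of the study.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic ECAP sweeps: a fixed N1/P1 waveform plus independent noise per sweep.
      t = np.arange(0, 1.6e-3, 1e-5)                   # 10 us sampling, 1.6 ms window
      n1_idx, p1_idx = 30, 60                          # assumed N1 and P1 latencies (samples)
      template = (-80e-6 * np.exp(-((t - t[n1_idx]) / 1e-4) ** 2)
                  + 50e-6 * np.exp(-((t - t[p1_idx]) / 1.5e-4) ** 2))
      sigma_noise = 30e-6                              # single-sweep noise level (V)

      for n_sweeps in (50, 100, 400):
          sweeps = template + rng.normal(scale=sigma_noise, size=(n_sweeps, t.size))
          mean_curve = sweeps.mean(axis=0)

          # Single-point error from the raw sweeps: standard error of the mean,
          # which scales as 1/sqrt(N), propagated to the N1P1 amplitude.
          se = sweeps.std(axis=0, ddof=1) / np.sqrt(n_sweeps)
          n1p1 = mean_curve[p1_idx] - mean_curve[n1_idx]
          n1p1_err = np.hypot(se[n1_idx], se[p1_idx])

          # Maximum-minimum style estimate from the averaged curve only,
          # taken over an assumed response-free portion of the trace.
          baseline = mean_curve[100:]
          maxmin_err = baseline.max() - baseline.min()

          print(f"N={n_sweeps:4d}: N1P1={1e6 * n1p1:6.1f} uV   "
                f"single-point error={1e6 * n1p1_err:5.2f} uV   "
                f"max-min error={1e6 * maxmin_err:5.2f} uV")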

  13. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  14. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott; Haslauer, Claus P.; Cirpka, Olaf A.

    2017-01-05

    The key points of this presentation were to approach the problem of linking breakthrough curve shape (RP-CTRW transition distribution) to structural parameters from a Monte Carlo approach and to use the Monte Carlo analysis to determine any empirical error

  15. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  16. Evaluation of B1 inhomogeneity effect on DCE-MRI data analysis of brain tumor patients at 3T.

    PubMed

    Sengupta, Anirban; Gupta, Rakesh Kumar; Singh, Anup

    2017-12-02

    Dynamic contrast-enhanced (DCE) MRI data acquired using gradient echo based sequences are affected by errors in flip angle (FA) due to transmit B1 inhomogeneity (B1inh). The purpose of the study was to evaluate the effect of B1inh on quantitative analysis of DCE-MRI data of human brain tumor patients and to evaluate the clinical significance of B1inh correction of perfusion parameters (PPs) on tumor grading. An MRI study was conducted on 35 glioma patients at 3T. The patients had histologically confirmed glioma, with 23 high-grade (HG) and 12 low-grade (LG). Data for B1-mapping, T1-mapping and DCE-MRI were acquired. Relative B1 maps (B1rel) were generated using the saturated-double-angle method. T1-maps were computed using the variable flip-angle method. Post-processing was performed for conversion of the signal-intensity-time (S(t)) curve to a concentration-time (C(t)) curve, followed by tracer kinetic analysis (Ktrans, Ve, Vp, Kep) and first-pass analysis (CBV, CBF) using the general tracer-kinetic model. DCE-MRI data were analyzed without and with B1inh correction and the errors in PPs were computed. Receiver-operating-characteristic (ROC) analysis was performed on HG and LG patients. Simulations were carried out to understand the effect of B1 inhomogeneity on DCE-MRI data analysis in a systematic way. S(t) curves mimicking those in tumor tissue were generated, FA errors were introduced, and an error analysis of PPs followed. The dependence of FA-based errors on the concentration of contrast agent and on the duration of the DCE-MRI data was also studied. Simulations were also done to obtain Ktrans of glioma patients at different B1rel values and to see whether grading is affected. The current study shows that a B1rel value higher than nominal results in an overestimation of C(t) curves as well as of the derived PPs, and vice versa. Moreover, at the same B1rel values, errors were larger for larger values of C(t). Simulation results showed that the grade of patients can change because of B1inh. B1inh in the human brain at 3T MRI can introduce substantial errors in PPs derived from DCE-MRI data that might affect the accuracy of tumor grading, particularly for border-zone cases. These errors can be mitigated using B1inh correction during DCE-MRI data analysis.
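
    The effect of a flip-angle (B1) error on derived concentrations can be reproduced with a short spoiled-gradient-echo simulation: the signal is generated with the actual flip angle but analyzed assuming the nominal one, with the pre-contrast T1 taken as known. Sequence and tissue parameters below are assumptions for illustration, not those of the cited study.

      import numpy as np

      TR = 5e-3                      # repetition time (s)
      alpha_nom = np.deg2rad(15.0)   # nominal flip angle
      T10 = 1.5                      # assumed (known) pre-contrast T1 (s)
      r1 = 4.5                       # contrast agent relaxivity (1/(mM*s))

      def spgr_signal(T1, alpha, M0=1.0):
          E1 = np.exp(-TR / T1)
          return M0 * np.sin(alpha) * (1 - E1) / (1 - np.cos(alpha) * E1)

      def t1_from_ratio(S, S0, alpha):
          """Invert the SPGR equation for T1 from the post/pre-contrast signal ratio,
          assuming the pre-contrast T1 (T10) is known."""
          E10 = np.exp(-TR / T10)
          k = S / S0 * (1 - E10) / (1 - np.cos(alpha) * E10)
          E1 = (1 - k) / (1 - k * np.cos(alpha))
          return -TR / np.log(E1)

      true_conc = np.linspace(0.1, 2.0, 5)                 # mM, points on a C(t) curve
      T1_true = 1.0 / (1.0 / T10 + r1 * true_conc)

      for b1rel in (0.9, 1.0, 1.1):
          alpha_actual = b1rel * alpha_nom                 # flip angle the tissue actually sees
          S0 = spgr_signal(T10, alpha_actual)              # measured pre-contrast signal
          S = spgr_signal(T1_true, alpha_actual)           # measured dynamic signal
          T1_est = t1_from_ratio(S, S0, alpha_nom)         # analysis assumes the nominal angle
          conc_est = (1.0 / T1_est - 1.0 / T10) / r1
          err_pct = 100 * (conc_est - true_conc) / true_conc
          print(f"B1rel={b1rel}: concentration error (%) = {np.round(err_pct, 1)}")

    Running this shows overestimation of C(t) for B1rel above 1 and underestimation below 1, with larger errors at larger concentrations, consistent with the abstract.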

  17. High frequency observations of Iapetus on the Green Bank Telescope aided by improvements in understanding the telescope response to wind

    NASA Astrophysics Data System (ADS)

    Ries, Paul A.

    2012-05-01

    The Green Bank Telescope is a 100 m, fully steerable, single dish radio telescope located in Green Bank, West Virginia and capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which reduce by half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations in order to determine if the telescope was blown off-source at some point due to wind. For observations with the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly be removed (> ⅔) during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention being to use the data to determine a thermal light curve. Instead, the data showed incredible wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to make sure the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but can be achieved using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.

  18. High pressure melting curve of platinum up to 35 GPa

    NASA Astrophysics Data System (ADS)

    Patel, Nishant N.; Sunder, Meenakshi

    2018-04-01

    The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser heated diamond anvil cell (LHDAC) facility. The laser speckle method has been employed to detect the onset of melting. The high pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results. The measured melting curve agrees, within experimental error, with the results of Kavner et al. The experimental data, fitted with the Simon equation, give (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
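
    For reference, a Simon(-Glatzel) fit of a melting curve takes only a few lines; the melting points below are made up for illustration (only T0 ≈ 2041 K, the ambient-pressure melting point of Pt, is a known value), so the fitted parameters are not the published results.

      import numpy as np
      from scipy.optimize import curve_fit

      def simon(P, a, c, T0=2041.0):
          """Simon(-Glatzel) melting relation Tm(P) = T0 * (1 + P/a)**(1/c),
          with T0 fixed at the ambient-pressure melting point of Pt (K)."""
          return T0 * (1.0 + P / a) ** (1.0 / c)

      # Made-up melting points for illustration (P in GPa, Tm in K).
      P = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
      Tm = np.array([2162.0, 2268.0, 2372.0, 2464.0, 2556.0, 2640.0, 2715.0])

      popt, pcov = curve_fit(simon, P, Tm, p0=(10.0, 3.0))   # only a and c are fitted
      a, c = popt
      print(f"a = {a:.1f} GPa, c = {c:.2f}, initial slope dTm/dP ~ {2041.0 / (a * c):.1f} K/GPa")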

  19. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  20. SU-C-207B-06: Comparison of Registration Methods for Modeling Pathologic Response of Esophageal Cancer to Chemoradiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riyahi, S; Choi, W; Bhooshan, N

    2016-06-15

    Purpose: To compare linear and deformable registration methods for evaluation of tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registration were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, we registered CT using the Mean Square Error (MSE) metric; however, to register PET we used the transformation obtained from the same CT using the Mutual Information (MI) metric, since the registration is multi-modality. Similarity of the warped CT/PET was quantitatively evaluated using Normalized Mutual Information, and plausibility of the deformation field (DF) was assessed using the inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g., SUV and diameter; (2) clinical parameters, e.g., TNM stage and histology; (3) spatial-temporal PET features that describe intensity, texture and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under Curve (AUC). Results: The average and standard deviation of Normalized Mutual Information for deformable registration using the MSE metric was 0.2±0.054 and for linear registration was 0.1±0.026, showing higher NMI for deformable registration. Likewise, for the MI metric, deformable registration had 0.13±0.035 compared to its linear counterpart with 0.12±0.037. The inverse consistency error for deformable registration with the MSE metric was 4.65±2.49 and for linear registration was 1.32±2.3, showing a smaller value for linear registration. The same conclusion was obtained for MI in terms of the inverse consistency error. The AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation. Conclusion: Deformable registration showed better NMI compared to linear registration; however, the inverse consistency error of the transformation was lower for linear registration. We do not expect to see a significant difference when warping PET images using deformable or linear registration. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  1. The association between frequency of self-reported medical errors and anesthesia trainee supervision: a survey of United States anesthesiology residents-in-training.

    PubMed

    De Oliveira, Gildasio S; Rahmani, Rod; Fitzgerald, Paul C; Chang, Ray; McCarthy, Robert J

    2013-04-01

    Poor supervision of physician trainees can be detrimental not only to resident education but also to patient care and safety. Inadequate supervision has been associated with more frequent deaths of patients under the care of junior residents. We hypothesized that residents reporting more medical errors would also report lower quality of supervision scores than the ones with lower reported medical errors. The primary objective of this study was to evaluate the association between the frequency of medical errors reported by residents and their perceived quality of faculty supervision. A cross-sectional nationwide survey was sent to 1000 residents randomly selected from anesthesiology training departments across the United States. Residents from 122 residency programs were invited to participate, the median (interquartile range) per institution was 7 (4-11). Participants were asked to complete a survey assessing demography, perceived quality of faculty supervision, and perceived causes of inadequate perceived supervision. Responses to the statements "I perform procedures for which I am not properly trained," "I make mistakes that have negative consequences for the patient," and "I have made a medication error (drug or incorrect dose) in the last year" were used to assess error rates. Average supervision scores were determined using the De Oliveira Filho et al. scale and compared among the frequency of self-reported error categories using the Kruskal-Wallis test. Six hundred four residents responded to the survey (60.4%). Forty-five (7.5%) of the respondents reported performing procedures for which they were not properly trained, 24 (4%) reported having made mistakes with negative consequences to patients, and 16 (3%) reported medication errors in the last year having occurred multiple times or often. Supervision scores were inversely correlated with the frequency of reported errors for all 3 questions evaluating errors. At a cutoff value of 3, supervision scores demonstrated an overall accuracy (area under the curve) (99% confidence interval) of 0.81 (0.73-0.86), 0.89 (0.77-0.95), and 0.93 (0.77-0.98) for predicting a response of multiple times or often to the question of performing procedures for which they were not properly trained, reported mistakes with negative consequences to patients, and reported medication errors in the last year, respectively. Anesthesiology trainees who reported a greater incidence of medical errors with negative consequences to patients and drug errors also reported lower scores for supervision by faculty. Our findings suggest that further studies of the association between supervision and patient safety are warranted. (Anesth Analg 2013;116:892-7).

  2. Analysis of the sources of uncertainty for EDR2 film‐based IMRT quality assurance

    PubMed Central

    Shi, Chengyu; Papanikolaou, Nikos; Yan, Yulong; Weng, Xuejun; Jiang, gyu

    2006-01-01

    In our institution, patient‐specific quality assurance (QA) for intensity‐modulated radiation therapy (IMRT) is usually performed by measuring the dose to a point using an ion chamber and by measuring the dose to a plane using film. In order to perform absolute dose comparison measurements using film, an accurate calibration curve should be used. In this paper, we investigate the film response curve uncertainty factors, including film batch differences, film processor temperature effect, film digitization, and treatment unit. In addition, we reviewed 50 patient‐specific IMRT QA procedures performed in our institution in order to quantify the sources of error in film‐based dosimetry. Our study showed that the EDR2 film dosimetry can be done with less than 3% uncertainty. The EDR2 film response was not affected by the choice of treatment unit provided the nominal energy was the same. This investigation of the different sources of uncertainties in the film calibration procedure can provide a better understanding of the film‐based dosimetry and can improve quality control for IMRT QA. PACS numbers: 87.86.Cd, 87.53.Xd, 87.57.Nk PMID:17533329

  3. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither have error limits for ECG measurements been specified, nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements since they cause a shift of the working point on the receiver operating characteristics curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristics curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.

  4. A new method to make 2-D wear measurements less sensitive to projection differences of cemented THAs.

    PubMed

    The, Bertram; Flivik, Gunnar; Diercks, Ron L; Verdonschot, Nico

    2008-03-01

    Wear curves from individual patients often show unexplained irregularities or impossible values (negative wear). We postulated that errors in two-dimensional wear measurements are mainly the result of radiographic projection differences. We tested a new method that makes two-dimensional wear measurements less sensitive to radiographic projection differences of cemented THAs. The measurement errors that occur when radiographically projecting a three-dimensional THA were modeled. Based on the model, we developed a method to reduce the errors, thus approximating three-dimensional linear wear values, which are less sensitive to projection differences. An error analysis was performed by virtually simulating 144 wear measurements under varying conditions with and without application of the correction: the mean absolute error was reduced from 1.8 mm (range, 0-4.51 mm) to 0.11 mm (range, 0-0.27 mm). For clinical validation, radiostereometric analysis was performed on 47 patients to determine the true wear at 1, 2, and 5 years. Subsequently, wear was measured on conventional radiographs with and without the correction: the overall occurrence of errors greater than 0.2 mm was reduced from 35% to 15%. Wear measurements are less sensitive to differences in two-dimensional projection of the THA when using the correction method.

  5. A complete representation of uncertainties in layer-counted paleoclimatic archives

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2017-09-01

    Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.

  6. Trends in the suspended-sediment yields of coastal rivers of northern California, 1955–2010

    USGS Publications Warehouse

    Warrick, J.A.; Madej, Mary Ann; Goñi, M. A.; Wheatcroft, R.A.

    2013-01-01

    Time-dependencies of suspended-sediment discharge from six coastal watersheds of northern California – Smith River, Klamath River, Trinity River, Redwood Creek, Mad River, and Eel River – were evaluated using monitoring data from 1955 to 2010. Suspended-sediment concentrations revealed time-dependent hysteresis and multi-year trends. The multi-year trends had two primary patterns relative to river discharge: (i) increases in concentration resulting from both land clearing from logging and the flood of record during December 1964 (water year 1965), and (ii) continual decreases in concentration during the decades following this flood. Data from the Eel River revealed that changes in suspended-sediment concentrations occurred for all grain-size fractions, but were most pronounced for the sand fraction. Because of these changes, the use of bulk discharge-concentration relationships (i.e., “sediment rating curves”) without time-dependencies in these relationships resulted in substantial errors in sediment load estimates, including 2.5-fold over-prediction of Eel River sediment loads since 1979. We conclude that sediment discharge and sediment discharge relationships (such as sediment rating curves) from these coastal rivers have varied substantially with time in response to land use and climate. Thus, the use of historical river sediment data and sediment rating curves without considerations for time-dependent trends may result in significant errors in sediment yield estimates from the globally-important steep, small watersheds.

  7. Soft material adhesion characterization for in vivo locomotion of robotic capsule endoscopes: Experimental and modeling results.

    PubMed

    Kern, Madalyn D; Ortega Alcaide, Joan; Rentschler, Mark E

    2014-11-01

    The objective of this work is to validate an experimental method and nondimensional model for characterizing the normal adhesive response between a polyvinyl chloride-based synthetic biological tissue substrate and a flat, cylindrical probe with a smooth polydimethylsiloxane (PDMS) surface. The adhesion response is a critical mobility design parameter of a Robotic Capsule Endoscope (RCE) that uses PDMS treads to travel through the gastrointestinal tract for diagnostic purposes. Three RCE design characteristics were chosen as input parameters for the normal adhesion testing: pre-load, dwell time and separation rate. These parameters relate to the RCE's cross-sectional dimension, tread length, and tread speed, respectively. An inscribed central composite design (CCD) prescribed 34 different parameter configurations to be tested. The experimental adhesion response curves were nondimensionalized by the maximum stress and total displacement values for each test configuration and a mean nondimensional curve was defined with a maximum relative error of 5.6%. A mathematical model describing the adhesion behavior as a function of the maximum stress and total displacement was developed and verified. A nonlinear regression analysis was done on the maximum stress and total displacement parameters and equations were defined as a function of the RCE design parameters. The nondimensional adhesion model is able to predict the adhesion curve response of any test configuration with a mean R² value of 0.995. Eight additional CCD studies were performed to obtain a qualitative understanding of the impact of tread contact area and synthetic material substrate stiffness on the adhesion response. These results suggest that the nondimensionalization technique for analyzing the adhesion data is sufficient for all values of probe radius and substrate stiffness within the bounds tested. This method can now be used for RCE tread design optimization given a set of environmental conditions for device operation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A Modified Formula of the First-order Approximation for Assessing the Contribution of Climate Change to Runoff Based on the Budyko Hypothesis

    NASA Astrophysics Data System (ADS)

    Liu, W.; Ning, T.; Han, X.

    2015-12-01

    The climate elasticity based on the Budyko curves has been widely used to evaluate the hydrological responses to climate change. The Mezentsev-Choudhury-Yang formula is one of the representative analytical equations for Budyko curves. Previous studies mostly used the variation of runoff (R) caused by the changes of annual precipitation (P) and potential evapotranspiration (ET0) as the hydrological response to climate change and evaluated it by a first-order approximation in the form of a total differential, the major components of which are the partial derivatives of R with respect to P and ET0, as well as the climate elasticity derived from them. Based on analytic derivation and the characteristics of Budyko curves, this study proposed a modified formula of the first-order approximation to reduce the errors from the approximation. In the calculation of the partial derivatives and climate elasticity, the values of P and ET0 were taken as the sum of their base values and half of their respective increments. The calculation was applied in 33 catchments of the Hai River basin in China and the results showed that the mean absolute value of the relative error of the approximated runoff change decreased from 8.4% to 0.4%, and the maximum value from 23.4% to 1.3%. Given the variation values of P, ET0 and the controlling parameter (n), the modified formula can exactly quantify the contributions of climate fluctuation and underlying surface change to runoff. Taking the Murray-Darling basin in Australia as an example of the contribution calculated by the modified formula, the reductions of mean annual runoff caused by changes of P, ET0 and n from 1895-1996 to 1997-2006 were 2.6, 0.6 and 2.9 mm, respectively, and the sum of them was 6.1 mm, which was completely consistent with the observed runoff. The modified formula of the first-order approximation proposed in this study can be used not only to assess the contributions of climate change to runoff, but also to analyze similar problems based on a given functional relationship in hydrological and climate change studies.
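
    The midpoint evaluation can be illustrated numerically with the Mezentsev-Choudhury-Yang form R = P − P·ET0/(Pⁿ + ET0ⁿ)^(1/n); the base climate, increments and n below are illustrative assumptions, not the values of any of the cited catchments.

      import numpy as np

      def runoff(P, ET0, n):
          """Mezentsev-Choudhury-Yang form: E = P*ET0 / (P**n + ET0**n)**(1/n), R = P - E."""
          E = P * ET0 / (P ** n + ET0 ** n) ** (1.0 / n)
          return P - E

      def dR(P, ET0, n, var, h=0.1):
          """Central-difference partial derivative of R with respect to P or ET0."""
          if var == "P":
              return (runoff(P + h, ET0, n) - runoff(P - h, ET0, n)) / (2 * h)
          return (runoff(P, ET0 + h, n) - runoff(P, ET0 - h, n)) / (2 * h)

      # Illustrative base climate and changes (mm/yr); the parameter n is held fixed.
      P0, ET00, n = 500.0, 1000.0, 2.0
      dP, dET0 = -60.0, 40.0

      exact = runoff(P0 + dP, ET00 + dET0, n) - runoff(P0, ET00, n)

      # Standard first-order approximation: derivatives at the base values.
      base = dR(P0, ET00, n, "P") * dP + dR(P0, ET00, n, "ET0") * dET0

      # Modified formula: derivatives at the base values plus half the increments.
      mid = (dR(P0 + dP / 2, ET00 + dET0 / 2, n, "P") * dP
             + dR(P0 + dP / 2, ET00 + dET0 / 2, n, "ET0") * dET0)

      print(f"exact runoff change:             {exact:7.2f} mm")
      print(f"first-order at base values:      {base:7.2f} mm")
      print(f"first-order at mid-point values: {mid:7.2f} mm")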

  9. Control of thumb force using surface functional electrical stimulation and muscle load sharing

    PubMed Central

    2013-01-01

    Background Stroke survivors often have difficulties in manipulating objects with their affected hand. Thumb control plays an important role in object manipulation. Surface functional electrical stimulation (FES) can assist movement. We aim to control the 2D thumb force by predicting the sum of individual muscle forces, described by a sigmoidal muscle recruitment curve and a single force direction. Methods Five able-bodied subjects and five stroke subjects were strapped in a custom-built setup. The forces perpendicular to the thumb in response to FES applied to three thumb muscles were measured. We evaluated the feasibility of using recruitment-curve-based force vector maps in predicting output forces. In addition, we developed a closed-loop force controller. Load sharing between the three muscles was used to solve the redundancy problem of having three actuators to control forces in two dimensions. The thumb force was controlled towards target forces of 0.5 N and 1.0 N in multiple directions within the individual's thumb workspace. In this way, the possibilities of using these force vector maps and the load-sharing approach in feedforward and feedback force control were explored. Results The force vector prediction of the obtained model had small RMS errors with respect to the actual measured force vectors (0.22±0.17 N for the healthy subjects; 0.17±0.13 N for the stroke subjects). The stroke subjects showed a limited work range due to limited force production of the individual muscles. Performance of feedforward control without feedback was better in healthy subjects than in stroke subjects. However, when feedback control was added, performances were similar between the two groups. Feedback force control led, especially for the stroke subjects, to a reduction in stationary errors, which improved performance. Conclusions Thumb muscle responses to FES can be described by a single force direction and a sigmoidal recruitment curve. Force in the desired direction can be generated through load sharing among redundant muscles. The force vector maps are subject specific and also suitable in feedforward and feedback control taking the individual's available workspace into account. With feedback, more accurate control of muscle force can be achieved. PMID:24103414

  10. The effect of tropospheric fluctuations on the accuracy of water vapor radiometry

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1992-01-01

    Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit the present accuracies to about 1 nrad for the angular position and to a delay rate of 3×10⁻¹³ sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of WVR's in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for water vapor radiometer (WVR) horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm² of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm² of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error (which cannot be calibrated) and can be reduced to about half of its instantaneous value by time averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10⁴ over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.

  11. Pyrolysis Model Development for a Multilayer Floor Covering

    PubMed Central

    McKinnon, Mark B.; Stoliarov, Stanislav I.

    2015-01-01

    Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite were validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was calculated as approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m−2, which is within the mean experimental uncertainty. PMID:28793556

  12. On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change

    ERIC Educational Resources Information Center

    Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von

    2006-01-01

    We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…

  13. A methodology to reduce uncertainties in the high-flow portion of a rating curve

    USDA-ARS?s Scientific Manuscript database

    Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...

  14. Properties of SN1978K from multi-wavelength observations

    NASA Astrophysics Data System (ADS)

    Schlegel, Eric M.; Ryder, Stuart; Staveley-Smith, L.; Colbert, E.; Petre, R.; Dopita, M.; Campbell-Wilson, D.

    2000-06-01

    We update the light curves from the X-ray, optical, and radio bandpasses which we have assembled over the past decade, and present two observations in the ultraviolet using the Hubble Space Telescope Faint Object Spectrograph. The HRI X-ray light curve is constant within the errors over the entire observation period which is confirmed by ASCA GIS data obtained in 1993 and 1995. In the UV, we detected the Mg II doublet at 2800 Å and a line at ~3190 Å attributed to He I 3187 at SN1978K's position. The optical light curve is formally constant within the errors, although a slight upward trend may be present. The radio light curve continues its steep decline. The longer time span of our radio observations compared to previous studies shows that SN1978K belongs in the class of highly X-ray and radio-luminous supernovae. The Mg II doublet flux ratio implies the quantity of line optical depth times density is ~10¹⁴ cm⁻³. The emission site must lie in the shocked gas.

  15. Classification of resistance to passive motion using minimum probability of error criterion.

    PubMed

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

    Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
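
    A hedged sketch of the same pipeline on synthetic torque-angle curves: Legendre coefficients as features and a Gaussian (minimum-probability-of-error) classifier. The curve generator, feature order and class statistics are illustrative assumptions, not the authors' data or exact features.

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(3)

      def make_curve(rigid, n_pts=100):
          """Synthetic torque-vs-angle curve; 'rigid' curves have a steeper, wavier response."""
          angle = np.linspace(-1, 1, n_pts)                    # normalized knee angle
          slope = 1.5 if rigid else 0.8
          torque = (slope * angle + 0.3 * rigid * np.sin(4 * angle)
                    + rng.normal(scale=0.05, size=n_pts))
          return angle, torque

      def features(angle, torque, degree=7):
          """Eight Legendre coefficients fitted to the torque-angle curve."""
          return legendre.legfit(angle, torque, degree)

      # Labelled training set: 0 = normal, 1 = rigid (Parkinson-like).
      X, y = [], []
      for label in (0, 1):
          for _ in range(40):
              X.append(features(*make_curve(bool(label))))
              y.append(label)
      X, y = np.array(X), np.array(y)

      means = [X[y == c].mean(axis=0) for c in (0, 1)]
      covs = [np.cov(X[y == c], rowvar=False) + 1e-6 * np.eye(X.shape[1]) for c in (0, 1)]
      priors = [np.mean(y == c) for c in (0, 1)]

      def log_gaussian(z, mean, cov):
          d = z - mean
          return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

      def classify(z):
          """Minimum probability of error: choose the class with the largest posterior."""
          scores = [log_gaussian(z, m, c) + np.log(p) for m, c, p in zip(means, covs, priors)]
          return int(np.argmax(scores))

      test = features(*make_curve(rigid=True))
      print("classified as:", "Parkinson disease (rigid)" if classify(test) else "Normal")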

  16. Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Sen, S.

    2016-12-01

    Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are very critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, an accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation is linked to the stage-discharge relationship known as a rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of a rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors arising from incorrect discharge measurement, variation in hydraulic conditions, and depth measurement. In this study our objective is to obtain the best parameters of the rating curve that fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the Lesser Himalayas, by a maximum-likelihood estimator. Quantification of uncertainties in the developed rating curves is obtained from the estimates of the variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors varying between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolation by more than 15% beyond the maximum and 5% below the minimum observed discharges is not recommended for these mountainous gauging sites.
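
    A sketch of fitting the standard power-law rating curve Q = a(h − h0)^b to synthetic gaugings and propagating the parameter covariance to a discharge uncertainty, including at an extrapolated stage. The least-squares fit below stands in for the maximum-likelihood estimator of the study, and all numbers are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)

      def rating(h, a, b, h0):
          """Standard power-law rating curve Q = a * (h - h0)**b."""
          return a * (h - h0) ** b

      # Synthetic gaugings for a small mountainous stream (stage in m, discharge in m3/s).
      a_true, b_true, h0_true = 3.0, 1.8, 0.20
      stage = np.sort(rng.uniform(0.3, 1.5, 25))
      flow = rating(stage, a_true, b_true, h0_true) * rng.lognormal(sigma=0.08, size=stage.size)

      popt, pcov = curve_fit(rating, stage, flow, p0=(2.0, 1.5, 0.1),
                             bounds=([0.1, 0.5, 0.0], [20.0, 4.0, 0.29]))
      perr = np.sqrt(np.diag(pcov))
      print("a, b, h0:", popt.round(3), "+/-", perr.round(3))

      # Propagate the parameter covariance to a discharge uncertainty at a given stage
      # with a first-order (Jacobian) approximation, including extrapolated stages.
      def discharge_sigma(h):
          a, b, h0 = popt
          J = np.array([(h - h0) ** b,                       # dQ/da
                        a * (h - h0) ** b * np.log(h - h0),  # dQ/db
                        -a * b * (h - h0) ** (b - 1)])       # dQ/dh0
          return float(np.sqrt(J @ pcov @ J))

      for h in (0.8, 1.5, 2.0):   # 2.0 m is beyond the gauged range (extrapolation)
          print(f"stage {h:.1f} m: Q = {rating(h, *popt):6.2f} m3/s "
                f"+/- {discharge_sigma(h):.2f} m3/s")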

  17. The effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1973-01-01

    The analysis of ion data from retarding potential analyzers (RPA's) is generally done under the planar approximation, which assumes that the grid transparency is constant with angle of incidence and that all ions reaching the plane of the collectors are collected. These approximations are not valid for situations in which the ion thermal velocity is comparable to the vehicle velocity, causing ions to enter the RPA with high average transverse velocity. To investigate these effects, the current-voltage curves for H+ at 4000 K were calculated, taking into account the finite collector size and the variation of grid transparency with angle. These curves are then analyzed under the planar approximation. The results show that only small errors in temperature and density are introduced for an RPA with typical dimensions; and that even when the density error is substantial for non-typical dimensions, the temperature error remains minimal.

  18. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but a subsequent refinement allows the result to be successively improved toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation for a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlight the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  19. Continuous slope-area discharge records in Maricopa County, Arizona, 2004–2012

    USGS Publications Warehouse

    Wiele, Stephen M.; Heaton, John W.; Bunch, Claire E.; Gardner, David E.; Smith, Christopher F.

    2015-12-29

    Analyses of sources of errors and the impact stage data errors have on calculated discharge time series are considered, along with issues in data reduction. Steeper, longer stream reaches are generally less sensitive to measurement error. Other issues considered are pressure transducer drawdown, capture of flood peaks with discrete stage data, selection of stage record for development of rating curves, and minimum stages for the calculation of discharge.

  20. Applications of data compression techniques in modal analysis for on-orbit system identification

    NASA Technical Reports Server (NTRS)

    Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim

    1992-01-01

    Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.

  1. Curves showing column strength of steel and duralumin tubing

    NASA Technical Reports Server (NTRS)

    Ross, Orrin E

    1929-01-01

    Given here are a set of column strength curves that are intended to simplify the method of determining the size of struts in an airplane structure when the load in the member is known. The curves will also simplify the checking of the strength of a strut if the size and length are known. With these curves, no computations are necessary, as in the case of the old-fashioned method of strut design. The process is so simple that draftsmen or others who are not entirely familiar with mechanics can check the strength of a strut without much danger of error.

  2. Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.

    PubMed

    Okubo, T; Shibata, H; Takishima, T

    1983-07-01

    By means of a mathematical model, we have studied a way to correct the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns, which were constant, exponential and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction less inclusive than this, e.g. the lag time only or the lag time plus 50% of the response time, gives an overestimation, and a correction larger than this results in underestimation. The magnitude of error depends on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at rather fast flow rates.

  3. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  4. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  5. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as “pass.” Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered as the first of the two evaluated curves. By its nature, this curve is a smooth curve and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP) which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and would obviously be poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD’, δD”) between these two curves were derived and used as the boundary values for evaluating the STTP against the RP. Even though the STTP passed the simple gamma pass criteria, it was found to fail at many locations when the derivatives were used as the boundary values. The proposed derivative-based method can identify a noisy curve and can prove to be a useful tool for improving the sensitivity of the gamma index. PMID:26865761
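    As a rough illustration of the approach described above (not the authors' code), a 1D gamma calculation can be paired with a derivative check whose boundary values come from an accepted smooth profile; the profile shapes and thresholds below are made up for the sketch:

      import numpy as np
      from scipy.special import erf

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=1.0, dta=1.0):
          """1D global gamma index: dd in % of the maximum reference dose, dta in mm."""
          d_norm = dd / 100.0 * d_ref.max()
          gam = np.empty_like(d_ref)
          for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
              dist2 = ((x_eval - xr) / dta) ** 2
              dose2 = ((d_eval - dr) / d_norm) ** 2
              gam[i] = np.sqrt((dist2 + dose2).min())
          return gam

      def derivative_check(x, d_ref, d_eval, bound1, bound2):
          """Flag points where the 1st/2nd derivatives of the dose difference exceed
          boundary values taken from an accepted smooth profile."""
          diff = d_eval - d_ref
          d1 = np.gradient(diff, x)        # delta-D'
          d2 = np.gradient(d1, x)          # delta-D''
          return (np.abs(d1) > bound1) | (np.abs(d2) > bound2)

      # Reference penumbra modeled with an error function, as in the abstract.
      x = np.linspace(-10.0, 10.0, 201)                        # mm
      ref = 50.0 * (1.0 - erf(x / 3.0))                        # toy penumbral profile
      smooth_eval = np.interp(x, x + 1.0, ref) * 1.01          # 1 mm shift + 1% dose error
      noisy_eval = smooth_eval + 0.5 * np.sign(np.sin(20 * x)) # square-wave perturbation

      # Boundary values come from the accepted smooth profile; they flag the noisy
      # profile even where its gamma values alone would look acceptable.
      g = gamma_1d(x, ref, x, noisy_eval)
      b1 = np.abs(np.gradient(smooth_eval - ref, x)).max()
      b2 = np.abs(np.gradient(np.gradient(smooth_eval - ref, x), x)).max()
      flags = derivative_check(x, ref, noisy_eval, b1, b2)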

  6. Activities of mixtures of soil-applied herbicides with different molecular targets.

    PubMed

    Kaushik, Shalini; Streibig, Jens Carl; Cedergreen, Nina

    2006-11-01

    The joint action of soil-applied herbicide mixtures with similar or different modes of action has been assessed by using the additive dose model (ADM). The herbicides chlorsulfuron, metsulfuron-methyl, pendimethalin and pretilachlor, applied either singly or in binary mixtures, were used on rice (Oryza sativa L.). The growth (shoot) response curves were described by a logistic dose-response model. The ED50 values and their corresponding standard errors obtained from the response curves were used to test statistically if the shape of the isoboles differed from the reference model (ADM). Results showed that mixtures of herbicides with similar molecular targets, i.e. chlorsulfuron and metsulfuron (acetolactate synthase (ALS) inhibitors), and with different molecular targets, i.e. pendimethalin (microtubule assembly inhibitor) and pretilachlor (very long chain fatty acids (VLCFAs) inhibitor), followed the ADM. Mixing herbicides with different molecular targets gave different results depending on whether pretilachlor or pendimethalin was involved. In general, mixtures of pretilachlor and sulfonylureas showed synergistic interactions, whereas mixtures of pendimethalin and sulfonylureas exhibited either antagonistic or additive activities. Hence, there is a large potential for both increasing the specificity of herbicides by using mixtures and lowering the total dose for weed control, while at the same time delaying the development of herbicide resistance by using mixtures with different molecular targets. Copyright (c) 2006 Society of Chemical Industry.

  7. Global determination of rating curves in the Amazon basin from satellite altimetry

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Paiva, Rodrigo C. D.; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stéphane; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frédérique

    2014-05-01

    The Amazon basin is the largest hydrological basin in the world. Over the past few years, it has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. One of the major issues in understanding such events is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin, through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute the non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2009. The stage dataset consists of ~900 altimetry series at ENVISAT and Jason-2 virtual stations, sampling the stages of more than a hundred rivers in the basin. The altimetry series span 2002 to 2011. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters that are hydrologically meaningful throughout the entire Amazon basin. The rating curve parameters have been computed using an optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best value for the parameters together with their posterior probability distribution, allowing the determination of a credibility interval for the calculated discharge. The error in the discharge estimates from the MGB-IPH model is also included in the rating curve determination. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach is more efficient than the deterministic one. By using prior credible intervals for the parameters defined by the user, this method provides the best rating curve estimate without any unlikely parameter values. Results were assessed through the Nash-Sutcliffe efficiency coefficient (Ens); Ens above 0.7 is found for most of the 920 virtual stations. From these results we were able to determine a fully coherent map of river bed height, mean depth and Manning's roughness coefficient, information that can be reused in hydrological modeling. Poor results found at a few virtual stations are also of interest. For some sub-basins in the Andean piedmont, the poor results confirm that the model failed to estimate discharges there. Others are found at tributary mouths experiencing backwater effects from the Amazon. Considering the mean monthly slope at the virtual station in the rating curve equation, we obtain rated discharges much more consistent with modeled and measured ones, showing that it is now possible to obtain a meaningful rating curve in such critical areas.
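    The rating-curve estimation described above can be sketched, under simplifying assumptions, as a power-law relation Q = a(h - h0)^b whose parameters are sampled with a Metropolis algorithm; the priors, error model and numbers below are illustrative and not those of the MGB-IPH study:

      import numpy as np

      def log_posterior(theta, h, q_obs, sigma):
          """Log posterior for Q = a * (h - h0)^b with flat priors on plausible bounds."""
          a, b, h0 = theta
          if not (0.0 < a < 1e4 and 0.5 < b < 4.0 and h0 < h.min()):
              return -np.inf                          # reject implausible parameters
          q_mod = a * (h - h0) ** b
          return -0.5 * np.sum(((q_obs - q_mod) / sigma) ** 2)

      def metropolis(h, q_obs, sigma, n_iter=20000, step=(5.0, 0.05, 0.1)):
          rng = np.random.default_rng(0)
          theta = np.array([100.0, 1.7, h.min() - 1.0])   # starting guess
          lp = log_posterior(theta, h, q_obs, sigma)
          samples = []
          for _ in range(n_iter):
              prop = theta + rng.normal(0.0, step)
              lp_prop = log_posterior(prop, h, q_obs, sigma)
              if np.log(rng.uniform()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              samples.append(theta.copy())
          return np.array(samples)

      # Synthetic example: stage from altimetry, discharge from a model, both noisy.
      rng = np.random.default_rng(1)
      h = rng.uniform(5.0, 15.0, 120)                     # stage, m
      q_true = 80.0 * (h - 2.0) ** 1.8                    # "true" discharge, m3/s
      sigma = 0.15 * q_true                               # model/observation error
      q_obs = q_true + rng.normal(0.0, sigma)
      post = metropolis(h, q_obs, sigma)[5000:]           # drop burn-in
      ci = np.percentile(post, [2.5, 97.5], axis=0)       # credibility intervals for (a, b, h0)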

  8. Spline curve matching with sparse knot sets

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2004-01-01

    This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use deformation energy of thin-plate-spline mapping between sparse knot points and normalized local...

  9. A methodology to reduce uncertainties in the high-flow portion of the rating curve for Goodwater Creek Watershed

    USDA-ARS?s Scientific Manuscript database

    Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...

  10. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.

  11. A revision of existing Karolinska Sleepiness Scale responses to light: A melanopic perspective.

    PubMed

    Hommes, Vanja; Giménez, Marina C

    2015-01-01

    A new photometric measure of light intensity that takes into account the relatively large contribution of the ipRGCs to the non-image forming (NIF) system was recently proposed. We set out to revise publications reporting alertness scores as measured by the Karolinska Sleepiness Scale (KSS) under different light conditions in order to assess the extendibility of the equivalent-melanopic function to NIF responses in humans. The KSS response (-Δ KSS) to the different light conditions used in previous studies, preferably including a comparison to a dim light condition, was assessed. Based on the light descriptions of the different studies, the equivalent melanopic lux (m-illuminance) was calculated. The -Δ KSS was plotted against photopic illuminance and m-illuminance, and fitted to a sigmoidal function already shown to describe KSS responses to different light intensities. The root mean squared error and r² were used as criteria to identify the light measurement unit that best described the responses. Studies that compared only the influence of light under otherwise identical conditions and in which participants were not totally sleep deprived were included. Our results show that the effects of light on KSS are better explained by a melanopic unit of measurement than by photopic lux. The present analysis allowed for the construction of a melanopic alertness response curve. This curve needs to be validated with appropriate designs. Nonetheless, it may serve as a starting point for the development of hypotheses predicting the relative changes in KSS under a given condition due to changes in light properties.
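    A hedged sketch of the curve-fitting step described above: a Hill-type sigmoid is assumed for the alerting response (the paper does not specify its exact functional form here), and RMSE and r² are computed as the comparison criteria; the data values are invented for illustration:

      import numpy as np
      from scipy.optimize import curve_fit

      def kss_response(illuminance, r_max, i50, hill):
          """Assumed Hill-type sigmoid for the alerting response (-delta KSS)."""
          return r_max * illuminance**hill / (i50**hill + illuminance**hill)

      # Illustrative responses expressed in equivalent melanopic lux (made-up values).
      melanopic_lux = np.array([5.0, 20.0, 80.0, 200.0, 500.0, 1000.0])
      delta_kss = np.array([0.2, 0.6, 1.2, 1.6, 1.9, 2.0])

      popt, _ = curve_fit(kss_response, melanopic_lux, delta_kss, p0=[2.0, 100.0, 1.0])
      pred = kss_response(melanopic_lux, *popt)
      rmse = np.sqrt(np.mean((delta_kss - pred) ** 2))    # comparison criterion 1
      r2 = 1.0 - np.sum((delta_kss - pred) ** 2) / np.sum((delta_kss - delta_kss.mean()) ** 2)
      # Repeating the fit against photopic lux and comparing rmse/r2 indicates which
      # light unit better describes the responses.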

  12. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface.

    PubMed

    Huo, Ju; Zhang, Guiyang; Yang, Ming

    2018-04-20

    This paper is concerned with the anisotropic and non-identical gray-level distributions of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of the uncertainty to the objective function of the feature point measurement error is analyzed. A novel error objective function based on the spatial collinear error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for the iterative solution, so that it avoids poor convergence and significantly resists the uncertainty. Hence, the optimized GOI algorithm extends the field-of-view applications and improves the accuracy and robustness of the measuring results through the redundant information. Finally, simulation and practical experiments show that the maximum re-projection error of the target's image coordinates is less than 0.110 pixels. Within a 3000 mm × 3000 mm × 4000 mm space, the maximum estimation errors of static and dynamic measurement of rocket nozzle motion are within 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.

  13. Alternate methods for FAAT S-curve generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, A.M.

    The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES and an unsatisfactory work around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work around. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curves offsets from the mode difference of stress and strength.

  14. Comment on ``Equation of state of aluminum nitride and its shock response'' [J. Appl. Phys. 76, 4077 (1994)]

    NASA Astrophysics Data System (ADS)

    Rosenberg, Z.; Brar, N. S.

    1995-11-01

    A recent article by Dandekar, Abbate, and Frankel [J. Appl. Phys. 76, 4077 (1994)] reviews existing data on high-pressure properties of aluminum nitride (AlN) in an effort to build an equation of state for this material. A rather large portion of that article is devoted to the shear strength of AlN and, in particular, to our data of 1991 with longitudinal and lateral stress gauges [Z. Rosenberg, N. S. Brar, and S. J. Bless, J. Appl. Phys. 70, 167 (1991)]. Since our highest data point has an error of 1 GPa, much of the discussion and conclusions of Dandekar and co-workers are not relevant once this error in data reduction is corrected. We also discuss the relevance of our shear strength data for various issues, such as the phase transformation of AlN at 20 GPa and the general shape of Hugoniot curves for brittle solids.

  15. Highly precise acoustic calibration method of ring-shaped ultrasound transducer array for plane-wave-based ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Terada, Takahide; Yamanaka, Kazuhiro; Suzuki, Atsuro; Tsubota, Yushi; Wu, Wenjing; Kawabata, Ken-ichi

    2017-07-01

    Ultrasound computed tomography (USCT) is promising as a non-invasive, painless, operator-independent and quantitative system for breast-cancer screening. Assembly error, production tolerance, and aging-degradation variations of the hardware components, particularly of plane-wave-based USCT systems, may hamper cost effectiveness, precise imaging, and robust operation. The plane wave is transmitted from a ring-shaped transducer array so that the signal can be received at a high signal-to-noise ratio and the aperture synthesized quickly. There are four signal-delay components: response delays in the transmitters and receivers and propagation delays depending on the positions of the transducer elements and their directivity. We developed a highly precise method for calibrating these delay components and evaluated it with our prototype plane-wave-based USCT system. Our calibration method was found to be effective in reducing delay errors. Gaps and curves were eliminated from the plane wave, and echo images of wires were sharpened over the entire imaging area.

  16. AAA gunner model based on observer theory. [predicting a gunner's tracking response]

    NASA Technical Reports Server (NTRS)

    Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.

    1978-01-01

    The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.

  17. Dynamic linear models to explore time-varying suspended sediment-discharge rating curves

    NASA Astrophysics Data System (ADS)

    Ahn, Kuk-Hyun; Yellen, Brian; Steinschneider, Scott

    2017-06-01

    This study presents a new method to examine long-term dynamics in sediment yield using time-varying sediment-discharge rating curves. Dynamic linear models (DLMs) are introduced as a time series filter that can assess how the relationship between streamflow and sediment concentration or load changes over time in response to a wide variety of natural and anthropogenic watershed disturbances or long-term changes. The filter operates by updating parameter values using a recursive Bayesian design that responds to 1 day-ahead forecast errors while also accounting for observational noise. The estimated time series of rating curve parameters can then be used to diagnose multiscale (daily-decadal) variability in sediment yield after accounting for fluctuations in streamflow. The technique is applied in a case study examining changes in turbidity load, a proxy for sediment load, in the Esopus Creek watershed, part of the New York City drinking water supply system. The results show that turbidity load exhibits a complex array of variability across time scales. The DLM highlights flood event-driven positive hysteresis, where turbidity load remained elevated for months after large flood events, as a major component of dynamic behavior in the rating curve relationship. The DLM also produces more accurate 1 day-ahead loading forecasts compared to other static and time-varying rating curve methods. The results suggest that DLMs provide a useful tool for diagnosing changes in sediment-discharge relationships over time and may help identify variability in sediment concentrations and loads that can be used to inform dynamic water quality management.
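    A minimal sketch of the recursive updating idea behind a DLM for a log-log rating curve, written as a standard Kalman filter with a random-walk evolution on the coefficients; variances and data below are assumptions, not the authors' settings:

      import numpy as np

      def dlm_rating_curve(log_q, log_c, v_obs=0.05, w_evol=1e-4):
          """Recursive update of time-varying coefficients in log C_t = a_t + b_t * log Q_t.

          Random-walk evolution on (a_t, b_t) with variance w_evol and observation
          noise variance v_obs; this is a plain Kalman filter standing in for the
          paper's dynamic linear model."""
          m = np.zeros(2)              # posterior mean of (a, b)
          C = np.eye(2)                # posterior covariance
          W = np.eye(2) * w_evol
          states, forecasts = [], []
          for qt, ct in zip(log_q, log_c):
              F = np.array([1.0, qt])  # regression vector
              R = C + W                # prior covariance after evolution
              f = F @ m                # one-day-ahead forecast of log C
              f_var = F @ R @ F + v_obs
              A = R @ F / f_var        # gain
              m = m + A * (ct - f)     # update with the forecast error
              C = R - np.outer(A, A) * f_var
              states.append(m.copy())
              forecasts.append(f)
          return np.array(states), np.array(forecasts)

      # Synthetic series with a slowly drifting rating relationship.
      rng = np.random.default_rng(0)
      log_q = rng.normal(2.0, 0.5, 1000)
      b_t = 1.2 + 0.3 * np.linspace(0.0, 1.0, 1000)        # slope drifts upward over time
      log_c = -1.0 + b_t * log_q + rng.normal(0.0, 0.2, 1000)
      states, forecasts = dlm_rating_curve(log_q, log_c)   # states[:, 1] tracks the drift in b_t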

  18. Computation and measurement of cell decision making errors using single cell data

    PubMed Central

    Habibi, Iman; Cheong, Raymond; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali

    2017-01-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF—NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950
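    A generic sketch of the two error metrics, assuming a simple threshold decision on noisy single-cell responses (this is not the authors' pathway model; the distributions are illustrative):

      import numpy as np

      def decision_error_rates(resp_no_signal, resp_signal, threshold):
          """False alarm: declaring a signal when none is present.
          Miss: failing to declare a signal that is present."""
          p_false_alarm = np.mean(resp_no_signal >= threshold)
          p_miss = np.mean(resp_signal < threshold)
          return p_false_alarm, p_miss

      # Toy single-cell responses (e.g., nuclear NF-kB level) under two inputs.
      rng = np.random.default_rng(0)
      no_tnf = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # noise-only responses
      tnf = rng.lognormal(mean=1.0, sigma=0.5, size=5000)      # TNF-induced responses

      # The overlap of the two noisy response distributions sets the unavoidable
      # trade-off between the two error types as the decision threshold is swept.
      thresholds = np.linspace(0.1, 10.0, 200)
      rates = np.array([decision_error_rates(no_tnf, tnf, thr) for thr in thresholds])
      best_threshold = thresholds[np.argmin(rates.sum(axis=1))]  # minimum total error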

  19. Computation and measurement of cell decision making errors using single cell data.

    PubMed

    Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S; Abdi, Ali

    2017-04-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell's inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves.

  20. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  1. LDPC Codes--Structural Analysis and Decoding Techniques

    ERIC Educational Resources Information Center

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  2. Argo Development Program.

    DTIC Science & Technology

    1986-06-01

    nonlinear form and account for uncertainties in model parameters, structural simplifications of the model, and disturbances. This technique summarizes...SHARPS system. *The take into account the coupling between axes two curves are nearly identical, except that the without becoming unwieldy. The low...are mainly caused by errors and control errors and accounts for the bandwidth limitations and the simulated current. observed offsets. The overshoot

  3. A method for determination of [Fe3+]/[Fe2+] ratio in superparamagnetic iron oxide

    NASA Astrophysics Data System (ADS)

    Jiang, Changzhao; Yang, Siyu; Gan, Neng; Pan, Hongchun; Liu, Hong

    2017-10-01

    Superparamagnetic iron oxide nanoparticles (SPION), as a kind of nanophase material, are widely used in biomedical applications such as magnetic resonance imaging (MRI), drug delivery, and magnetic field assisted therapy. The magnetic property of SPION has a close connection with its crystal structure, namely it is related to the ratio of the Fe3+ and Fe2+ which form the SPION. So a simple way to determine the content of Fe3+ and Fe2+ is important for researching the properties of SPION. This review covers a method for determination of the Fe3+ and Fe2+ ratio in SPION by UV-vis spectrophotometry based on the reaction of Fe2+ with 1,10-phenanthroline. A standard curve of Fe with R² = 0.9999 is used to determine the content of Fe2+ and of total iron, using 2.5 mL of 0.01% (w/v) SPION digested by HCl, 10 mL of pH 4.30 HOAc-NaAc buffer, 5 mL of 0.01% (w/v) 1,10-phenanthroline, and, for the independent total iron determination, 1 mL of 10% (w/v) ascorbic acid. However, the presence of Fe3+ interferes with obtaining the actual value of Fe2+ (an error close to 9%). We designed a calibration curve to eliminate this error by preparing a series of solutions with different [Fe3+]/[Fe2+] ratios. Through this calibration curve, the error between the measured value and the actual value can be reduced to 0.4%. The linearity R² of the method is 0.99441 and 0.99929 for Fe2+ and total iron, respectively. The errors in recovery accuracy and in inter-day and intra-day precision are both below 2%, which demonstrates the reliability of the determination method.
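    A hedged sketch of the two-curve idea described above: an ordinary standard curve converts absorbance to apparent Fe2+, and a correction curve built from mixtures of known [Fe3+]/[Fe2+] ratio removes the interference; all numbers are illustrative, not the paper's data:

      import numpy as np

      # Step 1: ordinary standard curve, absorbance vs. Fe2+ concentration (illustrative data).
      conc_std = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # mg/L
      abs_std = np.array([0.052, 0.101, 0.205, 0.408, 0.815])  # absorbance at 510 nm
      slope, intercept = np.polyfit(conc_std, abs_std, 1)

      def fe2_apparent(absorbance):
          """Apparent Fe2+ from the standard curve (still biased by the Fe3+ present)."""
          return (absorbance - intercept) / slope

      # Step 2: correction curve built from solutions of known [Fe3+]/[Fe2+] ratio:
      # measured/actual Fe2+ as a function of the Fe3+ fraction (illustrative values).
      fe3_fraction = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
      bias_factor = np.array([1.00, 1.02, 1.05, 1.07, 1.09])   # up to ~9% positive error
      corr_coef = np.polyfit(fe3_fraction, bias_factor, 1)

      def fe2_corrected(absorbance, fe3_frac_estimate):
          """Divide the apparent value by the interpolated bias factor.

          fe3_frac_estimate comes from the separately measured total iron."""
          return fe2_apparent(absorbance) / np.polyval(corr_coef, fe3_frac_estimate)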

  4. Effects of mistuning and matrix structure on the topology of frequency response curves

    NASA Technical Reports Server (NTRS)

    Afolabi, Dare

    1989-01-01

    The stability of a frequency response curve under mild perturbations of the system's matrix is investigated. Using recent developments in the theory of singularities of differentiable maps, it is shown that the stability of a response curve depends on the structure of the system's matrix. In particular, the frequency response curves of a cyclic system are shown to be unstable. Consequently, slight parameter variations engendered by mistuning will induce a significant difference in the topology of the forced response curves, if the mistuning transformation crosses the bifurcation set.

  5. Publisher Correction: Tunnelling spectroscopy of gate-induced superconductivity in MoS2

    NASA Astrophysics Data System (ADS)

    Costanzo, Davide; Zhang, Haijing; Reddy, Bojja Aditya; Berger, Helmuth; Morpurgo, Alberto F.

    2018-06-01

    In the version of this Article originally published, an error during typesetting led to the curve in Fig. 2a being shifted to the right, and the curves in the inset of Fig. 2a being displaced. The figure has now been corrected in all versions of the Article; the original and corrected Fig. 2a are shown below.

  6. Simulation of relationship between river discharge and sediment yield in the semi-arid river watersheds

    NASA Astrophysics Data System (ADS)

    Khaleghi, Mohammad Reza; Varvani, Javad

    2018-02-01

    The complex and variable nature of river sediment yield causes many problems in estimating the long-term sediment yield and the sediment input into reservoirs. Sediment Rating Curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Since the regression equations of the SRCs are obtained by logarithmic retransformation and include few independent variables, they overestimate or underestimate the true sediment load of the rivers. To evaluate bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (ratio of estimated sediment yield to observed sediment yield) and the precision index of the different bias correction factors of FAO, the Quasi-Maximum Likelihood Estimator (QMLE), Smearing, and the Minimum-Variance Unbiased Estimator (MVUE) with the LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of the MVUE to the linear and mean-load rating curves had no statistically meaningful effect. The QMLE and smearing factors increased the estimation error of the mean-load rating curve but had no effect on the linear rating curve estimation.

  7. TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION ERRORS: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, C.; Aldering, G.; Aragon, C.

    2015-02-10

    We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.

  8. Bayesian inference of Calibration curves: application to archaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lanos, P.

    2003-04-01

    The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.

  9. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
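    One non-iterative way to realize such a fit, sketched here under the assumption of evenly spaced samples (this is the general idea, not necessarily the NASA implementation): first differences of y = Ae^(Bt) + C eliminate C and are themselves exponential, so B follows from a linear fit to the log-differences, after which A and C come from ordinary linear least squares:

      import numpy as np

      def fit_exponential(t, y):
          """Non-iterative fit of y = A * exp(B*t) + C for evenly spaced t.

          First differences d_i = y_{i+1} - y_i = A*(exp(B*dt) - 1)*exp(B*t_i) remove C,
          so log|d_i| is linear in t_i and yields B; A and C then follow from ordinary
          linear least squares."""
          d = np.diff(y)
          B = np.polyfit(t[:-1], np.log(np.abs(d)), 1)[0]      # slope of the log-differences
          X = np.column_stack([np.exp(B * t), np.ones_like(t)])
          A, C = np.linalg.lstsq(X, y, rcond=None)[0]
          return A, B, C

      # Clean-data example; noisy data benefit from smoothing or weighting the differences.
      t = np.linspace(0.0, 4.0, 200)
      y = 3.0 * np.exp(-0.8 * t) + 1.5
      print(fit_exponential(t, y))    # recovers approximately (3.0, -0.8, 1.5)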

  10. A refinement of the combination equations for evaporation

    USGS Publications Warehouse

    Milly, P.C.D.

    1991-01-01

    Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation, having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. © 1991 Kluwer Academic Publishers.
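    The effect of truncating the expansion can be illustrated numerically; the sketch below uses a Tetens-type formula for the saturation vapor pressure as an assumed stand-in (the paper does not prescribe a specific form) and compares the linear and quadratic approximations around the air temperature:

      import numpy as np

      def e_sat(T):
          """Saturation vapor pressure over water, kPa (Tetens-type stand-in formula)."""
          return 0.6108 * np.exp(17.27 * T / (T + 237.3))

      def e_sat_taylor(T_surface, T_air, order):
          """Expand e_sat around the air temperature to the given polynomial order."""
          h = 1e-3
          d1 = (e_sat(T_air + h) - e_sat(T_air - h)) / (2 * h)                  # slope
          d2 = (e_sat(T_air + h) - 2 * e_sat(T_air) + e_sat(T_air - h)) / h**2  # curvature
          dT = T_surface - T_air
          approx = e_sat(T_air) + d1 * dT
          if order >= 2:
              approx += 0.5 * d2 * dT**2
          return approx

      # A surface a few degrees warmer than the air: the linear expansion underestimates
      # e_sat at the surface, which is the source of the underestimated evaporation.
      T_air, T_surf = 20.0, 25.0
      exact = e_sat(T_surf)
      print(exact - e_sat_taylor(T_surf, T_air, 1))   # error of the linear (Penman-type) form
      print(exact - e_sat_taylor(T_surf, T_air, 2))   # much smaller once the quadratic term is kept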

  11. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Measuring a diffusion coefficient by single-particle tracking: statistical analysis of experimental mean squared displacement curves.

    PubMed

    Ernst, Dominique; Köhler, Jürgen

    2013-01-21

    We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy in the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
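    The analysis described above can be sketched generically as follows: compute the time-averaged MSD of each trajectory segment, fit only the first few lags, and examine the spread of the resulting diffusion coefficients across segments (segment lengths and parameters below are illustrative):

      import numpy as np

      def msd(track, max_lag):
          """Time-averaged mean squared displacement of a 2D trajectory."""
          out = np.empty(max_lag)
          for lag in range(1, max_lag + 1):
              disp = track[lag:] - track[:-lag]
              out[lag - 1] = np.mean(np.sum(disp**2, axis=1))
          return out

      def diffusion_coefficient(track, dt, n_fit=4):
          """Fit MSD = 4*D*t over the first n_fit lags (2D); n_fit plays the role of the
          'optimum number of points' discussed in the abstract."""
          m = msd(track, n_fit)
          lags = dt * np.arange(1, n_fit + 1)
          return np.polyfit(lags, m, 1)[0] / 4.0

      # Simulated Brownian trajectory, split into shorter segments to examine how the
      # spread of the D estimates depends on segment length.
      rng = np.random.default_rng(0)
      D_true, dt, n = 0.5, 0.01, 150_000                  # um^2/s, s, number of points
      steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n, 2))
      track = np.cumsum(steps, axis=0)
      segments = np.array_split(track, 150)               # ~1000 points per segment
      estimates = [diffusion_coefficient(seg, dt) for seg in segments]
      print(np.mean(estimates), np.std(estimates) / np.mean(estimates))  # mean and relative spread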

  13. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively small number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. The percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data is scarce.
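    A minimal sketch of KDE-based BTC reconstruction from particle arrival times, using scipy's Gaussian KDE as a stand-in for the optimal-bandwidth estimator discussed above, followed by a moving average over an exposure duration; all values are illustrative:

      import numpy as np
      from scipy.stats import gaussian_kde

      def btc_from_particles(arrival_times, masses, t_grid, bandwidth=None):
          """Breakthrough curve from particle arrival times via a Gaussian KDE.

          `bandwidth` stands in for the optimal kernel width h discussed in the
          abstract; scipy's default rule is used when it is None."""
          kde = gaussian_kde(arrival_times, bw_method=bandwidth, weights=masses)
          return masses.sum() * kde(t_grid)        # mass flux density vs. time

      rng = np.random.default_rng(0)
      n_particles = 100                            # even ~10^2 particles give a usable curve
      arrivals = rng.lognormal(mean=2.0, sigma=0.4, size=n_particles)  # toy travel times
      masses = np.full(n_particles, 1.0 / n_particles)
      t = np.linspace(0.0, 30.0, 300)
      btc = btc_from_particles(arrivals, masses, t)

      # Averaging over an exposure duration ED, as done for the risk calculations.
      ED = 5.0
      window = int(ED / (t[1] - t[0]))
      btc_avg = np.convolve(btc, np.ones(window) / window, mode="same")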

  14. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  15. Yeast and mammalian metabolism continuous monitoring by using pressure recording as an assessment technique for xenobiotic agent effects

    NASA Astrophysics Data System (ADS)

    Milani, Marziale; Ballerini, Monica; Ferraro, Lorenzo; Marelli, E.; Mazza, Francesca; Zabeo, Matteo

    2002-06-01

    Our work is devoted to the study of Saccharomyces cerevisiae and human lymphocyte cellular metabolism in order to develop a reference model to assess biological system responses to exposure to chemical or physical agents. CO2 variations inside test-tubes are measured by differential pressure sensors; pressure values are subsequently converted into voltage. The system allows up to 16 samples to be tested at the same time. Sampling handles up to 100 acquisitions per second. Values are recorded by a data acquisition card connected to a computer. This procedure leads to a standard curve (pressure variation versus time), typical of the cellular line, that describes cellular metabolism. The longest time span used is 170 h. Different phases appear in this curve: an initial growth up to a maximum, followed by a decrement that leads to a typical depression (the pressure value inside the test-tubes is lower than the initial one) after about 35 h from the start of the yeast cultures. The curve is reproducible within an experimental error of 4%. The analysis of many samples and the low cost of the devices allow a good statistical significance of the data. In particular, as a test we compare the effects of two sterilizing agents: UV radiation and amuchina.

  16. A study on suppressing transmittance fluctuations for air-gapped Glan-type polarizing prisms

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanfa; Li, Dailin; Zhu, Huafeng; Li, Chuanzhi; Jiao, Zhiyong; Wang, Ning; Xu, Zhaopeng; Wang, Xiumin; Song, Lianke

    2018-05-01

    Light intensity transmittance is a key parameter for the design of polarizing prisms, yet its experimental curves as a function of the spatial incident angle sometimes present periodic fluctuations. Here, we propose a novel method for completely suppressing these fluctuations by setting a glued error angle in the air gap of Glan-Taylor prisms. The proposal consists of: an accurate formula for the intensity transmittance of Glan-Taylor prisms, a numerical simulation and a contrast experiment on Glan-Taylor prisms for analyzing the causes of the fluctuations, and a simple method for accurately measuring the glued error angle. The result indicates that when the set glued error angle is larger than the critical angle for a certain polarizing prism, the fluctuations can be completely suppressed, and a smooth intensity transmittance curve can be obtained. Besides, the critical angle in the air gap for suppressing the fluctuations decreases as the beam spot size increases. This method has the advantage of placing less demand on the prism position in optical systems.

  17. Ultrasound-guided three-dimensional needle steering in biological tissue with curved surfaces

    PubMed Central

    Abayazid, Momen; Moreira, Pedro; Shahriari, Navid; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak

    2015-01-01

    In this paper, we present a system capable of automatically steering a bevel-tipped flexible needle under ultrasound guidance toward a physical target while avoiding a physical obstacle embedded in gelatin phantoms and biological tissue with curved surfaces. An ultrasound pre-operative scan is performed for three-dimensional (3D) target localization and shape reconstruction. A controller based on implicit force control is developed to align the transducer with curved surfaces to assure the maximum contact area, and thus obtain an image of sufficient quality. We experimentally investigate the effect of needle insertion system parameters such as insertion speed, needle diameter and bevel angle on target motion to adjust the parameters that minimize the target motion during insertion. A fast sampling-based path planner is used to compute and periodically update a feasible path to the target that avoids obstacles. We present experimental results for target reconstruction and needle insertion procedures in gelatin-based phantoms and biological tissue. Mean targeting errors of 1.46 ± 0.37 mm, 1.29 ± 0.29 mm and 1.82 ± 0.58 mm are obtained for phantoms with inclined, curved and combined (inclined and curved) surfaces, respectively, for insertion distance of 86–103 mm. The achieved targeting errors suggest that our approach is sufficient for targeting lesions of 3 mm radius that can be detected using clinical ultrasound imaging systems. PMID:25455165

  18. Author Correction: Re-designing Interleukin-12 to enhance its safety and potential as an anti-tumor immunotherapeutic agent.

    PubMed

    Wang, Pengju; Li, Xiaozhu; Wang, Jiwei; Gao, Dongling; Li, Yuenan; Li, Haoze; Chu, Yongchao; Zhang, Zhongxian; Liu, Hongtao; Jiang, Guozhong; Cheng, Zhenguo; Wang, Shengdian; Dong, Jianzeng; Feng, Baisui; Chard, Louisa S; Lemoine, Nicholas R; Wang, Yaohe

    2018-01-10

    The originally published version of this Article contained errors in Figure 4. In panel b, the square and diamond labels associated with the uppermost survival curve were incorrectly displayed as 'n' and 'u', respectively. These errors have now been corrected in both the PDF and HTML versions of the Article.

  19. Are driving and overtaking on right curves more dangerous than on left curves?

    PubMed

    Othman, Sarbaz; Thomson, Robert; Lannér, Gunnar

    2010-01-01

    It is well known that crashes on horizontal curves are a cause for concern in all countries due to the frequency and severity of crashes at curves compared to road tangents. A recent study of crashes in western Sweden reported a higher rate of crashes in right curves than left curves. To further understand this result, this paper reports the results of novel analyses of the responses of vehicles and drivers during negotiating and overtaking maneuvers on curves for right hand traffic. The overall objectives of the study were to find road parameters for curves that affect vehicle dynamic responses, to analyze these responses during overtaking maneuvers on curves, and to link the results with driver behavior for different curve directions. The studied road features were speed, super-elevation, radius and friction including their interactions, while the analyzed vehicle dynamic factors were lateral acceleration and yaw angular velocity. A simulation program, PC-Crash, has been used to simulate road parameters and vehicle response interaction in curves. Overtaking maneuvers have been simulated for all road feature combinations in a total of 108 runs. Analysis of variances (ANOVA) was performed, using two sided randomized block design, to find differences in vehicle responses for the curve parameters. To study driver response, a field test using an instrumented vehicle and 32 participants was reviewed as it contained longitudinal speed and acceleration data for analysis. The simulation results showed that road features affect overtaking performance in right and left curves differently. Overtaking on right curves was sensitive to radius and the interaction of radius with road condition; while overtaking on left curves was more sensitive to super-elevation. Comparisons of lateral acceleration and yaw angular velocity during these maneuvers showed different vehicle response configurations depending on curve direction and maneuver path. The field test experiments also showed that drivers behave differently depending on the curve direction where both speed and acceleration were higher on right than left curves. The implication of this study is that curve direction should be taken into consideration to a greater extent when designing and redesigning curves. It appears that the driver and the vehicle are influenced by different infrastructure factors depending on the curve direction. In addition, the results suggest that the vehicle dynamics response alone cannot explain the higher crash risk in right curves. Further studies of the links between driver, vehicle, and highway characteristics are needed, such as naturalistic driving studies, to identify the key safety indicators for highway safety.

  20. Joint inversion of apparent resistivity and seismic surface and body wave data

    NASA Astrophysics Data System (ADS)

    Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle

    2013-04-01

    A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface wave dispersion curves, and P-wave travel times. The algorithm works in the case of laterally varying layered sites. Surface wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are thickness h, S-wave velocity Vs, P-wave velocity Vp, and Resistivity R of each layer. 1D models are linked to surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to refracted P-wave hodograms. A priori information can be included in the inversion and a spatial regularization is introduced as a set of constraints between model parameters of adjacent models and layers. Both a priori information and regularization are weighted by covariance matrixes. We show the comparison of individual inversions and joint inversion for a synthetic dataset that presents smooth lateral variations. Performing individual inversions, the poor sensitivity to some model parameters leads to estimation errors up to 62.5 %, whereas for joint inversion the cooperation of different techniques reduces most of the model estimation errors below 5% with few exceptions up to 39 %, with an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, the analysis of the results evidences unacceptable values of Vp/Vs ratio for some layers, thus providing negative Poisson's ratio values. To further improve the inversion performances, an additional constraint is added imposing Poisson's ratio in the range 0-0.5. The final results are globally improved by the introduction of this constraint further reducing the maximum error to 30 %. The same test was performed on field data acquired in a landslide-prone area close by the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m long profiles in roll-along mode using a 5-kg sledgehammer as source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot locations and surface wave dispersion curves extracted at 8 locations for each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same location as for the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to individual inversions. Although models from both, individual and joint inversions are consistent, the estimation error is smaller for joint inversion, and more especially for first-arrival travel times. The joint inversion exploits different sensitivities of the methods to model parameters and therefore mitigates solution nonuniqueness and the effects of intrinsic limitations of the different techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.

  1. SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Yang, K

    2016-06-15

    Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately with RadShield, a semi-automated diagnostic shielding software package, than by manual calculations. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, which calculates air kerma rate and barrier thickness at many points on the floor plan and reports the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also vendor-provided isodose curves overlaid onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets and fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
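
    The kerma fit described above can be reproduced for a single radial scan line by linear least squares in log-log space; the distances and kerma values below are made up for illustration only:

    ```python
    import numpy as np

    # Hypothetical kerma samples extracted along one radial scan line of an isodose overlay
    r = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 5.0])       # distance from isocenter (m)
    K = np.array([4.1, 1.9, 1.1, 0.52, 0.30, 0.20])     # kerma per scan (illustrative units)

    # Fit K(r) = a * r**b by linear least squares in log-log space: log K = log a + b log r
    b, log_a = np.polyfit(np.log(r), np.log(K), 1)
    a = np.exp(log_a)
    print(f"K(r) ~ {a:.2f} * r^{b:.2f}")

    # Kerma estimate at an occupied point 3.5 m from the scanner
    print("K(3.5 m) ~", a * 3.5**b)
    ```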

  2. SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Wu, D

    Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately with RadShield, a semi-automated diagnostic shielding software package, than by manual calculations. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, which calculates air kerma rate and barrier thickness at many points on the floor plan and reports the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also vendor-provided isodose curves overlaid onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets and fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.

  3. Spline curve matching with sparse knot sets: applications to deformable shape detection and recognition

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2003-01-01

    Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of...

  4. TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, E; Phillips, M; Bojechko, C

    Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3 mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry to detect variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error (as measured by the gamma pass rate and ROC analysis) and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
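
    As a rough illustration of this kind of ROC analysis (not the study's code), one can score each delivered plan by its gamma pass rate and compute the AUC with scikit-learn; the pass rates and error labels below are hypothetical:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical gamma pass rates (%); label 1 = plan with a simulated error, 0 = error-free
    pass_rate = np.array([97.5, 95.2, 99.1, 93.8, 88.4, 90.2, 96.7, 85.9])
    has_error = np.array([0, 0, 0, 0, 1, 1, 0, 1])

    # A lower pass rate should indicate an error, so use (100 - pass rate) as the score
    score = 100.0 - pass_rate
    auc = roc_auc_score(has_error, score)
    fpr, tpr, thresholds = roc_curve(has_error, score)
    print(f"AUC = {auc:.2f}")

    # The threshold maximising tpr - fpr gives the optimal gamma pass-rate cutoff
    best = thresholds[np.argmax(tpr - fpr)]
    print("optimal cutoff: pass rate below", 100.0 - best, "% flags an error")
    ```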

  5. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate the sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795

  6. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
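
    A minimal simulation in the spirit of this kind of study, assuming a one-compartment model with first-order absorption, lognormal between-subject variability, and proportional measurement error (all parameter values illustrative, not taken from the paper), computing AUC by the trapezoidal rule and checking the normality of log(AUC):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    rng = np.random.default_rng(0)
    t = np.linspace(0.25, 24, 40)              # sampling times (h)
    n_subj = 2000                              # simulated subjects
    dose = 100.0                               # mg

    # One-compartment model with first-order absorption; lognormal between-subject variability
    ka = rng.lognormal(np.log(1.0), 0.3, n_subj)    # absorption rate (1/h)
    ke = rng.lognormal(np.log(0.2), 0.3, n_subj)    # elimination rate (1/h)
    vd = rng.lognormal(np.log(30.0), 0.2, n_subj)   # volume of distribution (L)

    log_auc = np.empty(n_subj)
    for i in range(n_subj):
        c = dose * ka[i] / (vd[i] * (ka[i] - ke[i])) * (np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
        c *= 1.0 + rng.normal(0.0, 0.1, t.size)     # proportional measurement error
        log_auc[i] = np.log(trapezoid(c, t))        # AUC by the trapezoidal rule

    # Standardize and inspect departures from normality (heavy tails show up as excess kurtosis)
    z = (log_auc - log_auc.mean()) / log_auc.std()
    print("excess kurtosis:", stats.kurtosis(z))
    print("Shapiro-Wilk on a subsample:", stats.shapiro(z[:500]))
    ```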

  7. Impacts of uncertainties in weather and streamflow observations in calibration and evaluation of an elevation distributed HBV-model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.

    2012-04-01

    The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both the observed inputs (precipitation and temperature) and the streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model, operating on daily time steps, to a small high-elevation catchment in southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day a random sample of precipitation and temperature inputs was drawn to be applied as input to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow; thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. The effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.

  8. Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models

    NASA Astrophysics Data System (ADS)

    Southworth, John

    2010-11-01

    I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.

  9. A spectral filter for ESMR's sidelobe errors

    NASA Technical Reports Server (NTRS)

    Chesters, D.

    1979-01-01

    Fourier analysis was used to remove periodic errors from a series of NIMBUS-5 electronically scanned microwave radiometer brightness temperatures. The observations were all taken from the midnight orbits over fixed sites in the Australian grasslands. The angular dependence of the data indicates calibration errors consisted of broad sidelobes and some miscalibration as a function of beam position. Even though an angular recalibration curve cannot be derived from the available data, the systematic errors can be removed with a spectral filter. The 7 day cycle in the drift of the orbit of NIMBUS-5, coupled to the look-angle biases, produces an error pattern with peaks in its power spectrum at the weekly harmonics. About plus or minus 4 K of error is removed by simply blocking the variations near two- and three-cycles-per-week.
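
    The weekly-harmonic blocking described above can be sketched as an FFT notch filter; the brightness-temperature series below is synthetic and the band width is arbitrary, so this illustrates only the general technique:

    ```python
    import numpy as np

    # Synthetic daily brightness-temperature series with a weekly-harmonic error pattern
    days = np.arange(364)
    signal = 250 + 2.0 * np.sin(2 * np.pi * days / 90)                   # slow seasonal trend
    error = 4.0 * np.sin(2 * np.pi * 2 * days / 7) + 3.0 * np.sin(2 * np.pi * 3 * days / 7)
    tb = signal + error + np.random.default_rng(1).normal(0, 0.5, days.size)

    # Spectral filter: zero the Fourier components near 2 and 3 cycles per week
    freqs = np.fft.rfftfreq(days.size, d=1.0)           # cycles per day
    spec = np.fft.rfft(tb)
    for f0 in (2 / 7, 3 / 7):
        spec[np.abs(freqs - f0) < 0.01] = 0.0           # block a narrow band around each harmonic
    tb_filtered = np.fft.irfft(spec, n=days.size)

    print("rms error before:", np.std(tb - signal), "after:", np.std(tb_filtered - signal))
    ```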

  10. Grinding Method and Error Analysis of Eccentric Shaft Parts

    NASA Astrophysics Data System (ADS)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and other mechanical transmissions, and their manufacture now demands precision grinding technology. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. Using an inversion method, the contour curve of the wheel envelope is deduced, with the distance from the center of the eccentric circle held constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is verified; the influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides the basis for contour error compensation.

  11. Hydrological regionalisation based on available hydrological information for runoff prediction at catchment scale

    NASA Astrophysics Data System (ADS)

    Li, Qiaoling; Li, Zhijia; Zhu, Yuelong; Deng, Yuanqian; Zhang, Ke; Yao, Cheng

    2018-06-01

    Regionalisation provides a way of transferring hydrological information from gauged to ungauged catchments. The past few decades have seen several kinds of regionalisation approaches for catchment classification and runoff prediction. The underlying assumption is that catchments having similar catchment properties are hydrologically similar. This requires the appropriate selection of catchment properties, particularly the inclusion of observed hydrological information, to explain the similarity of hydrological behaviour. We selected observable catchment properties and flow duration curves to reflect the hydrological behaviour, and to regionalize the rainfall-runoff response for runoff prediction. As a case study, we investigated 15 catchments located in the Yangtze and Yellow River basins under multiple hydro-climatic conditions. A clustering scheme was developed to separate the catchments into 4 homogeneous regions by employing catchment properties including hydro-climatic attributes, topographic attributes and land cover. We utilized daily flow duration curves as the indicator of hydrological response and interpreted hydrological similarity by root mean square errors. The combined analysis of similarity in catchment properties and hydrological response suggested that catchments in the same homogeneous region were hydrologically similar. A further validation was conducted by establishing a rainfall-runoff coaxial correlation diagram for each catchment. A common coaxial correlation diagram was generated for each homogeneous region. The performances of most coaxial correlation diagrams met the national standard. The coaxial correlation diagram can be transferred within the homogeneous region for runoff prediction in ungauged catchments at an hourly time scale.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaczmarski, Krzysztof; Guiochon, Georges A

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.

  13. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    PubMed

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally to medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
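
    A minimal sketch of a locally applied gray-level S-curve (sigmoid) transform; the block size and gain are illustrative and this is not the authors' exact formulation:

    ```python
    import numpy as np

    def s_curve_local(img, block=64, gain=8.0):
        """Contrast enhancement by a local gray-level S-curve (sigmoid) transform.

        Each block is stretched with a sigmoid centred on its local mean, which widens the
        gap between the local minimum and maximum gray values and so strengthens edges.
        """
        img = img.astype(np.float64)
        out = np.empty_like(img)
        for i in range(0, img.shape[0], block):
            for j in range(0, img.shape[1], block):
                tile = img[i:i + block, j:j + block]
                lo, hi = tile.min(), tile.max()
                if hi - lo < 1e-6:                        # flat region: leave unchanged
                    out[i:i + block, j:j + block] = tile
                    continue
                x = (tile - lo) / (hi - lo)               # normalise to [0, 1]
                s = 1.0 / (1.0 + np.exp(-gain * (x - x.mean())))
                s = (s - s.min()) / (s.max() - s.min())   # rescale back to [0, 1]
                out[i:i + block, j:j + block] = lo + s * (hi - lo)
        return out

    enhanced = s_curve_local(np.random.default_rng(0).integers(0, 256, (256, 256)))
    ```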

  14. Type Ia Supernova Light Curve Inference: Hierarchical Models for Nearby SN Ia in the Optical and Near Infrared

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.

    2010-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework incorporates coherently multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.

  15. Preliminary results for RR Lyrae stars and Classical Cepheids from the Vista Magellanic Cloud (VMC) survey

    NASA Astrophysics Data System (ADS)

    Ripepi, V.; Moretti, M. I.; Clementini, G.; Marconi, M.; Cioni, M. R.; Marquette, J. B.; Tisserand, P.

    2012-09-01

    The Vista Magellanic Cloud (VMC, PI M.R. Cioni) survey is collecting K_S-band time-series photometry of the system formed by the two Magellanic Clouds (MC) and the "bridge" that connects them. These data are used to build K_S-band light curves of the MC RR Lyrae stars and Classical Cepheids and to determine absolute distances and the 3D geometry of the whole system using the K_S-band period-luminosity (PL-K_S), the period-luminosity-color (PLC) and the Wesenheit relations applicable to these types of variables. As an example of the survey potential we present results from the VMC observations of two fields centered respectively on the South Ecliptic Pole and the 30 Doradus star forming region of the Large Magellanic Cloud. The VMC K_S-band light curves of the RR Lyrae stars in these two regions have very good photometric quality, with typical errors for the individual data points in the range ~0.02 to 0.05 mag. The Cepheids have excellent light curves (typical errors of ~0.01 mag). The average K_S magnitudes derived for both types of variables were used to derive PL-K_S relations that are in general in good agreement, within the errors, with the literature data, and show a smaller scatter than previous studies.

  16. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han's model for rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Milani, G.; Milani, F.

    A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined in such a way to minimize the absolute error between normalized experimental data and numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. On the contrary, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as benchmark.
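
    Since the closed-form crosslink-density expression from Han's scheme is not reproduced in the abstract, the sketch below substitutes a generic first-order cure-kinetics placeholder and fits it to hypothetical normalized rheometer data with scipy's curve_fit, i.e. the non-interactive counterpart of GURU's slider-based calibration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Placeholder kinetic model: normalized crosslink density after an induction time t0,
    # X(t) = 1 - exp(-k * (t - t0)); this stands in for Han's closed-form solution.
    def cure_model(t, k, t0):
        return np.where(t > t0, 1.0 - np.exp(-k * (t - t0)), 0.0)

    # Hypothetical normalized rheometer data (torque rise vs time in minutes)
    t_exp = np.array([0, 1, 2, 3, 4, 5, 7, 10, 15, 20], float)
    x_exp = np.array([0, 0, 0.05, 0.28, 0.47, 0.61, 0.79, 0.92, 0.98, 1.00])

    popt, pcov = curve_fit(cure_model, t_exp, x_exp, p0=[0.5, 1.5])
    print("fitted k = %.3f 1/min, t0 = %.2f min" % tuple(popt))
    ```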

  17. Nonmonotonic Dose-Response Curves and Endocrine-Disrupting Chemicals: Fact or Falderal?**

    EPA Science Inventory

    Nonmonotonic Dose-Response Curves and Endocrine-Disrupting Chemicals: Fact or Falderal? The shape of the dose response curve in the low dose region has been debated since the 1940s, originally focusing on linear no threshold (LNT) versus threshold responses for cancer and noncanc...

  18. Uncertainty Analysis Principles and Methods

    DTIC Science & Technology

    2007-09-01

    error source. The Data Processor converts binary coded numbers to values, performs D/A curve fitting and applies any correction factors that may be... describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical...

  19. Generalization and refinement of an automatic landing system capable of curved trajectories

    NASA Technical Reports Server (NTRS)

    Sherman, W. L.

    1976-01-01

    Refinements in the lateral and longitudinal guidance for an automatic landing system capable of curved trajectories were studied. Wing flaps or drag flaps (speed brakes) were found to provide faster and more precise speed control than autothrottles. In the case of the lateral control it is shown that the use of the integral of the roll error in the roll command over the first 30 to 40 seconds of flight reduces the sensitivity of the lateral guidance to the gain on the azimuth guidance angle error in the roll command. Also, changes to the guidance algorithm are given that permit pi-radian approaches and constrain the airplane to fly in a specified plane defined by the position of the airplane at the start of letdown and the flare point.

  20. Analysis of Learning Curve Fitting Techniques.

    DTIC Science & Technology

    1987-09-01

    1986. 15. Neter, John and others. Applied Linear Regression Models. Homewood IL: Irwin, 19-33. 16. SAS User’s Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston... lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter, et al., Applied

  1. Gains in accuracy from averaging ratings of abnormality

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.

    1999-05-01

    Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.

  2. Fixed-node diffusion Monte Carlo potential energy curve of the fluorine molecule F{sub 2} using selected configuration interaction trial wavefunctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giner, Emmanuel; Scemama, Anthony; Caffarel, Michel

    2015-01-28

    The potential energy curve of the F{sub 2} molecule is calculated with Fixed-Node Diffusion Monte Carlo (FN-DMC) using Configuration Interaction (CI)-type trial wavefunctions. To keep the number of determinants reasonable and thus make FN-DMC calculations feasible in practice, the CI expansion is restricted to those determinants that contribute the most to the total energy. The selection of the determinants is made using the CIPSI approach (Configuration Interaction using a Perturbative Selection made Iteratively). The trial wavefunction used in FN-DMC is directly issued from the deterministic CI program; no Jastrow factor is used and no preliminary multi-parameter stochastic optimization of the trial wavefunction is performed. The nodes of CIPSI wavefunctions are found to reduce significantly the fixed-node error and to be systematically improved upon increasing the number of selected determinants. To reduce the non-parallelism error of the potential energy curve, a scheme based on the use of a R-dependent number of determinants is introduced. Using Dunning’s cc-pVDZ basis set, the FN-DMC energy curve of F{sub 2} is found to be of a quality similar to that obtained with full configuration interaction/cc-pVQZ.

  3. Robust prediction of three-dimensional spinal curve from back surface for non-invasive follow-up of scoliosis

    NASA Astrophysics Data System (ADS)

    Bergeron, Charles; Labelle, Hubert; Ronsky, Janet; Zernicke, Ronald

    2005-04-01

    Spinal curvature progression in scoliosis patients is monitored from X-rays, and this serial exposure to harmful radiation increases the incidence of developing cancer. With the aim of reducing the invasiveness of follow-up, this study seeks to relate the three-dimensional external surface to the internal geometry, assuming that the physiological links between the two are sufficiently regular across patients. A database of 194 quasi-simultaneous acquisitions of two X-rays and a 3D laser scan of the entire trunk was used. Data were processed into sets of data points representing the trunk surface and spinal curve. Functional data analyses were performed using generalized Fourier series with a Haar basis and functional minimum noise fractions. The resulting coefficients became inputs and outputs, respectively, to an array of support vector regression (SVR) machines. SVR parameters were set based on theoretical results, and cross-validation increased confidence in the system's performance. Predicted lateral and frontal views of the spinal curve from the back surface demonstrated average L2-errors of 6.13 and 4.38 millimetres, respectively, across the test set; these compared favourably with the measurement error in the data. This constitutes a first robust prediction of the 3D spinal curve from external data using learning techniques.
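
    The regression stage alone can be sketched as an array of support vector regressors, one per spinal-curve coefficient; the coefficient counts and the data below are random placeholders, not the study's functional-data expansion:

    ```python
    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_patients, n_surface, n_spine = 194, 40, 12     # coefficient counts are illustrative

    # Placeholder coefficients standing in for the functional-data expansions of the
    # trunk surface (inputs) and the spinal curve (outputs)
    X = rng.normal(size=(n_patients, n_surface))
    W = rng.normal(size=(n_surface, n_spine))
    Y = X @ W + 0.1 * rng.normal(size=(n_patients, n_spine))

    # One SVR per output coefficient, mirroring the "array of SVR machines"
    model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.05))

    # Cross-validation gives an estimate of prediction error before deployment
    scores = cross_val_score(model, X, Y, cv=5, scoring="neg_mean_squared_error")
    print("mean CV MSE:", -scores.mean())

    model.fit(X, Y)
    print("predicted spinal-curve coefficients, patient 0:", model.predict(X[:1]))
    ```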

  4. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, oblate shape of the earth, ambient temperature of sensor, changes in scan/spin rates have been analyzed. Simple relations are derived using least square curve fitting for on-board correction of these errors. Random errors arising out of noise from detector and amplifiers, instability of alignment and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain eight times improvement in sensing accuracy, which will be comparable with ground based post facto attitude refinement.

  5. Toward a more sophisticated response representation in theories of medial frontal performance monitoring: The effects of motor similarity and motor asymmetries.

    PubMed

    Hochman, Eldad Yitzhak; Orr, Joseph M; Gehring, William J

    2014-02-01

    Cognitive control in the posterior medial frontal cortex (pMFC) is formulated in models that emphasize adaptive behavior driven by a computation evaluating the degree of difference between 2 conflicting responses. These functions are manifested by an event-related brain potential component coined the error-related negativity (ERN). We hypothesized that the ERN represents a regulative rather than evaluative pMFC process, exerted over the error motor representation, expediting the execution of a corrective response. We manipulated the motor representations of the error and the correct response to varying degrees. The ERN was greater when 1) the error response was more potent than when the correct response was more potent, 2) more errors were committed, 3) fewer and slower corrections were observed, and 4) the error response shared fewer motor features with the correct response. In their current forms, several prominent models of the pMFC cannot be reconciled with these findings. We suggest that a prepotent, unintended error is prone to reach the manual motor processor responsible for response execution before a nonpotent, intended correct response. In this case, the correct response is a correction and its execution must wait until the error is aborted. The ERN may reflect pMFC activity that aimed to suppress the error.

  6. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris

    2016-04-21

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  7. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris

    2016-04-01

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  8. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.

  9. A curved edge diffraction-utilized displacement sensor for spindle metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, ChaBum, E-mail: clee@tntech.edu; Zhao, Rui; Jeon, Seongkyul

    This paper presents a new dimensional metrological sensing principle for a curved surface based on curved edge diffraction. Spindle error measurement technology utilizes a cylindrical or spherical target artifact attached to the spindle with non-contact sensors, typically a capacitive sensor (CS) or an eddy current sensor, pointed at the artifact. However, these sensors are designed for flat surface measurement. Therefore, measuring a target with a curved surface causes error. This is due to electric fields behaving differently between a flat and curved surface than between two flat surfaces. In this study, a laser is positioned incident to the cylindrical surface of the spindle, and a photodetector collects the total field produced by the diffraction around the target surface. The proposed sensor was compared with a CS within a range of 500 μm. The discrepancy between the proposed sensor and CS was 0.017% of the full range. Its sensing performance showed a resolution of 14 nm and a drift of less than 10 nm for 7 min of operation. This sensor was also used to measure dynamic characteristics of the spindle system (natural frequency 181.8 Hz, damping ratio 0.042) and spindle runout (22.0 μm at 2000 rpm). The combined standard uncertainty was estimated as 85.9 nm under current experiment conditions. It is anticipated that this measurement technique allows for in situ health monitoring of a precision spindle system in an accurate, convenient, and low cost manner.

  10. A brain-machine interface to navigate a mobile robot in a planar workspace: enabling humans to fly simulated aircraft with EEG.

    PubMed

    Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy

    2013-03-01

    This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.

  11. Presearch data conditioning in the Kepler Science Operations Center pipeline

    NASA Astrophysics Data System (ADS)

    Twicken, Joseph D.; Chandrasekaran, Hema; Jenkins, Jon M.; Gunter, Jay P.; Girouard, Forrest; Klaus, Todd C.

    2010-07-01

    We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this component are to correct systematic and other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets. Long cadence corrected flux light curves are subjected to a transiting planet search in a subsequent pipeline module. We discuss science algorithms for long and short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities; systematic error correction; and removal of excess flux due to aperture crowding. We discuss the propagation of uncertainties from raw to corrected flux. Finally, we present examples from Kepler flight data to illustrate PDC performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.

  12. Visual navigation using edge curve matching for pinpoint planetary landing

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. A vision-based navigation scheme based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked across two sequential images are used to determine the relative attitude and position of the lander through a normalized method. Then, to limit the error accumulation of relative navigation, the crater-based relative navigation method is integrated with a crater-based absolute navigation method that identifies craters in a georeferenced database for continuous estimation of absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, and these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  13. Presearch Data Conditioning in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Twicken, Joseph D.; Chandrasekaran, Hema; Jenkins, Jon M.; Gunter, Jay P.; Girouard, Forrest; Klaus, Todd C.

    2010-01-01

    We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science Operations Center (SOC) pipeline. The primary tasks of this component are to correct systematic and other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long cadence (thirty minute) and 512 short cadence (one minute) targets across the focal plane array. Long cadence corrected flux light curves are subjected to a transiting planet search in a subsequent pipeline module. We discuss the science algorithms for long and short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities; systematic error correction; and excess flux removal. We discuss the propagation of uncertainties from raw to corrected flux. Finally, we present examples of raw and corrected flux time series for flight data to illustrate PDC performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and will be made available to the general public in accordance with the NASA/Kepler data release policy.

  14. Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor

    NASA Technical Reports Server (NTRS)

    Armistead, K. H.; Webb, L. D.

    1973-01-01

    Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.

  15. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but few methods can efficiently eliminate or reduce it to a satisfying level without sacrificing the computing efficiency at present. In order to solve this problem, a high accurate interpolation method for NURBS tool path is proposed. The proposed method can efficiently reduce the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be efficiently solved by analytic methods in real-time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd degree NURBS curves and can interpolate 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion with considering multiple constraints and scheduling errors by an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfying computing efficiency.
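
    The per-period quartic in the parameter increment can be solved analytically or, as in this sketch, numerically with np.roots; the coefficients below are illustrative only and are not derived from the paper's feedrate condition:

    ```python
    import numpy as np

    def next_parameter_increment(coeffs):
        """Pick the physically meaningful root of the per-step quartic.

        coeffs = [c4, c3, c2, c1, c0] for c4*du^4 + ... + c0 = 0, with the coefficients
        assumed to come from the interpolator's arc-length/feedrate condition.
        """
        roots = np.roots(coeffs)
        real = roots[np.abs(roots.imag) < 1e-12].real
        positive = real[real > 0]
        if positive.size == 0:
            raise ValueError("no admissible parameter increment")
        return positive.min()          # smallest positive real root keeps motion forward

    # Illustrative coefficients for one interpolation period
    du = next_parameter_increment([2.0e-3, -1.5e-2, 0.4, -1.0, 1.2e-3])
    print("parameter increment du =", du)
    ```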

  16. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
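
    A minimal sketch of error compensation with a B-spline fitted to calibration measurements (one of the interpolation methods compared above); the positions, error values, and units are hypothetical:

    ```python
    import numpy as np
    from scipy.interpolate import splrep, splev

    # Hypothetical calibration: commanded position (mm) vs measured positioning error (um)
    x_cal = np.linspace(0.0, 300.0, 16)
    err_cal = 2.0 * np.sin(x_cal / 60.0) + 0.5 * np.cos(x_cal / 25.0)   # um

    # Cubic B-spline model of the geometric error over the full travel range
    tck = splrep(x_cal, err_cal, k=3, s=0.0)

    def compensated_command(x_target_mm):
        """Offset the commanded position by the predicted error (um -> mm) so the stage lands on target."""
        return x_target_mm - splev(x_target_mm, tck) * 1e-3

    print("target 137.5 mm -> command", compensated_command(137.5), "mm")
    ```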

  17. Errors and conflict at the task level and the response level.

    PubMed

    Desmet, Charlotte; Fias, Wim; Hartstra, Egbert; Brass, Marcel

    2011-01-26

    In the last decade, research on error and conflict processing has become one of the most influential research areas in the domain of cognitive control. There is now converging evidence that a specific part of the posterior frontomedian cortex (pFMC), the rostral cingulate zone (RCZ), is crucially involved in the processing of errors and conflict. However, error-related research has focused primarily on a specific error type, namely, response errors. The aim of the present study was to investigate whether errors on the task level rely on the same neural and functional mechanisms. Here we report a dissociation of both error types in the pFMC: whereas response errors activate the RCZ, task errors activate the dorsal frontomedian cortex. Although this last region shows an overlap in activation for task and response errors on the group level, a closer inspection of the single-subject data is more in accordance with a functional-anatomical dissociation. When investigating brain areas related to conflict on the task and response levels, a clear dissociation was observed between areas associated with response conflict and with task conflict. Overall, our data support a dissociation between response and task levels of processing in the pFMC. In addition, we provide additional evidence for a dissociation between conflict and errors both at the response level and at the task level.

  18. [Not Available].

    PubMed

    Bernard, A M; Burgot, J L

    1981-12-01

    The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.

  19. Design of a microbial contamination detector and analysis of error sources in its optical path.

    PubMed

    Zhang, Chao; Yu, Xiang; Liu, Xingju; Zhang, Lei

    2014-05-01

    Microbial contamination is a growing concern in food safety today. To effectively control the types and degree of microbial contamination during food production, this paper introduces a design for a microbial contamination detector that can be used for quick in-situ examination. The designed detector can identify the category of microbial contamination by locating its characteristic absorption peak and can then calculate the concentration of the microbial contamination by fitting the absorbance vs. concentration lines of standard samples with gradient concentrations. Based on a traditional scanning-grating detection system, this design improves the light splitting unit to expand the scanning range and enhance the accuracy of the output wavelength. The motor rotation angle φ is designed to have a linear relationship with the output wavelength λ, which simplifies the conversion of output spectral curves into wavelength vs. light intensity curves. In this study, we also derive the relationship between the device's major sources of error and the cumulative error of the output wavelengths, and suggest a simple correction for these errors. The proposed design was applied to test pigments and volatile basic nitrogen (VBN), which evaluate the degree of microbial contamination of meats, and the deviations between the measured values and the pre-set values were only in the range of 1.15%-1.27%.
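
    The concentration step described above amounts to a linear calibration of absorbance against standards of known concentration. A minimal sketch, with invented numbers, is given below.

```python
import numpy as np

# Hypothetical standard samples with gradient concentrations (mg/L) and their
# measured absorbance at the characteristic absorption peak.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absorbance = np.array([0.01, 0.21, 0.40, 0.62, 0.79, 1.01])

# Least-squares calibration line A = slope * c + intercept (Beer-Lambert behaviour).
slope, intercept = np.polyfit(conc, absorbance, 1)

# Invert the calibration to estimate the concentration of an unknown sample.
a_unknown = 0.52
c_unknown = (a_unknown - intercept) / slope
print(f"estimated concentration: {c_unknown:.2f} mg/L")
```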

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Deyu

    A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.

  1. Evaluation of modulation transfer function of optical lens system by support vector regression methodologies - A comparative study

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, the polynomial and radial basis function (RBF) are applied as the kernel function of Support Vector Regression (SVR) to estimate and predict the MTF value of the actual optical system based on experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach compared to the SVR_poly soft-computing methodology.
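
    A minimal sketch of the SVR_poly/SVR_rbf comparison is given below using scikit-learn; the MTF data are synthetic stand-ins, and the kernel hyperparameters are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Synthetic training data: spatial frequency (cycles/mm) vs. measured MTF.
freq = np.linspace(0, 100, 40).reshape(-1, 1)
mtf = np.exp(-freq.ravel() / 45.0) + 0.02 * np.random.default_rng(1).standard_normal(40)

svr_rbf = SVR(kernel="rbf", C=10.0, gamma=0.001, epsilon=0.01).fit(freq, mtf)
svr_poly = SVR(kernel="poly", degree=3, C=10.0, epsilon=0.01).fit(freq, mtf)

for name, model in [("SVR_rbf", svr_rbf), ("SVR_poly", svr_poly)]:
    pred = model.predict(freq)
    print(name, "RMSE:", mean_squared_error(mtf, pred) ** 0.5)
```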

  2. Development of property-transfer models for estimating the hydraulic properties of deep sediments at the Idaho National Engineering and Environmental Laboratory, Idaho

    USGS Publications Warehouse

    Winfield, Kari A.

    2005-01-01

    Because characterizing the unsaturated hydraulic properties of sediments over large areas or depths is costly and time consuming, development of models that predict these properties from more easily measured bulk-physical properties is desirable. At the Idaho National Engineering and Environmental Laboratory, the unsaturated zone is composed of thick basalt flow sequences interbedded with thinner sedimentary layers. Determining the unsaturated hydraulic properties of sedimentary layers is one step in understanding water flow and solute transport processes through this complex unsaturated system. Multiple linear regression was used to construct simple property-transfer models for estimating the water-retention curve and saturated hydraulic conductivity of deep sediments at the Idaho National Engineering and Environmental Laboratory. The regression models were developed from 109 core sample subsets with laboratory measurements of hydraulic and bulk-physical properties. The core samples were collected at depths of 9 to 175 meters at two facilities within the southwestern portion of the Idaho National Engineering and Environmental Laboratory: the Radioactive Waste Management Complex and the Vadose Zone Research Park southwest of the Idaho Nuclear Technology and Engineering Center. Four regression models were developed using bulk-physical property measurements (bulk density, particle density, and particle size) as the potential explanatory variables. Three representations of the particle-size distribution were compared: (1) textural-class percentages (gravel, sand, silt, and clay), (2) geometric statistics (mean and standard deviation), and (3) graphical statistics (median and uniformity coefficient). The four response variables, estimated from linear combinations of the bulk-physical properties, included saturated hydraulic conductivity and three parameters that define the water-retention curve. For each core sample, values of each water-retention parameter were estimated from the appropriate regression equation and used to calculate an estimated water-retention curve. The degree to which the estimated curve approximated the measured curve was quantified using a goodness-of-fit indicator, the root-mean-square error. Comparison of the root-mean-square-error distributions for each alternative particle-size model showed that the estimated water-retention curves were insensitive to the way the particle-size distribution was represented. Bulk density, the median particle diameter, and the uniformity coefficient were chosen as input parameters for the final models. The property-transfer models developed in this study allow easy determination of hydraulic properties without the need for their direct measurement. Additionally, the models provide the basis for development of theoretical models that rely on physical relationships between the pore-size distribution and the bulk-physical properties of the media. With this adaptation, the property-transfer models should have greater application throughout the Idaho National Engineering and Environmental Laboratory and other geographic locations.
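
    The sketch below illustrates the final model form described above (a multiple linear regression on bulk density, median particle diameter, and uniformity coefficient) using synthetic core-sample data; it is not the report's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic core-sample data; predictor names follow the final model inputs.
rng = np.random.default_rng(2)
n = 109
bulk_density = rng.normal(1.5, 0.15, n)          # g/cm^3
median_diam = rng.lognormal(-1.0, 0.8, n)        # mm
uniformity = rng.lognormal(1.5, 0.5, n)          # Cu = d60/d10

X = np.column_stack([bulk_density, np.log10(median_diam), np.log10(uniformity)])
# Response: log10 of saturated hydraulic conductivity (synthetic for the sketch).
y = -5.0 - 2.0 * (bulk_density - 1.5) + 1.5 * np.log10(median_diam) + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```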

  3. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
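
    A toy Monte Carlo in the spirit of the simulation study is sketched below: it compares the rejection rate of a standard two-sample t-test with and without additive response measurement error. The effect sizes, variances, and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def rejection_rate(effect, sd_process, sd_meas, n_per_level=8, reps=2000, alpha=0.05):
    """Empirical power of a two-sample t-test when additive response
    measurement error (sd_meas) is present but ignored in the analysis."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, sd_process, n_per_level) + rng.normal(0.0, sd_meas, n_per_level)
        b = rng.normal(effect, sd_process, n_per_level) + rng.normal(0.0, sd_meas, n_per_level)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print("no measurement error  :", rejection_rate(1.0, 1.0, 0.0))
print("with measurement error:", rejection_rate(1.0, 1.0, 1.0))
```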

  4. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Assessing the learning curve for the acquisition of laparoscopic skills on a virtual reality simulator.

    PubMed

    Sherman, V; Feldman, L S; Stanbridge, D; Kazmi, R; Fried, G M

    2005-05-01

    The aim of this study was to develop summary metrics and assess the construct validity for a virtual reality laparoscopic simulator (LapSim) by comparing the learning curves of three groups with different levels of laparoscopic expertise. Three groups of subjects ('expert', 'junior', and 'naïve') underwent repeated trials on three LapSim tasks. Formulas were developed to calculate scores for efficiency ('time-error') and economy of motion ('motion') using metrics generated by the software after each drill. Data (mean +/- SD) were evaluated by analysis of variance (ANOVA). Significance was set at p < 0.05. All three groups improved significantly from baseline to final for both 'time-error' and 'motion' scores. There were significant differences between groups in 'time-error' performance at baseline and final, due to higher scores in the 'expert' group. A significant difference in 'motion' scores was seen only at baseline. We have developed summary metrics for the LapSim that differentiate among levels of laparoscopic experience. This study also provides evidence of construct validity for the LapSim.

  6. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
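
    The recommended integration approach amounts to integrating the measured cross-section curve against the differential LET flux rather than using a single threshold/asymptote pair. A schematic sketch with invented tabulated values follows.

```python
import numpy as np

# Schematic illustration only: integrate the SEU cross-section curve sigma(LET)
# against the differential LET flux instead of using a single threshold-LET /
# asymptotic-sigma pair.  All numbers below are placeholders, not CRRES data.
let = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])        # MeV*cm^2/mg
sigma = np.array([0.0, 1e-9, 5e-8, 2e-7, 4e-7, 5e-7, 5e-7])    # cm^2/bit
flux = np.array([1e-3, 4e-4, 6e-5, 8e-6, 9e-7, 8e-8, 5e-9])    # particles/(cm^2*s) per unit LET

integrand = sigma * flux
rate_per_bit = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(let))  # trapezoidal rule
print(f"upset rate ~ {rate_per_bit:.3e} per bit-second")
```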

  7. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  8. Experimental Modeling of a Formula Student Carbon Composite Nose Cone

    PubMed Central

    Fellows, Neil A.

    2017-01-01

    A numerical impact study is presented on a Formula Student (FS) racing car carbon composite nose cone. The effect of material model and model parameter selection on the numerical deceleration curves is discussed in light of the experimental deceleration data. The models show reasonable correlation in terms of the shape of the deceleration-displacement curves but do not match the peak deceleration values, with errors greater than 30%. PMID:28772982

  9. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Blerman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
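
    A minimal sketch of fitting a saturating-exponential (first-order Gauss-Markov style) error model with a Levenberg-Marquardt least-squares routine is shown below; the data points and the exact functional form are illustrative assumptions, not the thesis's.

```python
import numpy as np
from scipy.optimize import curve_fit

def markov_mse(t, variance, tau):
    """Saturating-exponential mean-square error, one common way to express a
    first-order (Gauss-Markov) error model as a function of time delay t."""
    return variance * (1.0 - np.exp(-t / tau))

# Hypothetical averaged mean-square DGPS error (m^2) vs. correction age (s).
t = np.array([0.0, 300.0, 900.0, 1800.0, 3600.0, 7200.0, 14400.0])
mse = np.array([0.0, 0.3, 0.8, 1.4, 2.3, 3.1, 3.6])

# curve_fit defaults to the Levenberg-Marquardt algorithm for unbounded problems.
popt, _ = curve_fit(markov_mse, t, mse, p0=[3.5, 3000.0])
print(f"process variance ~ {popt[0]:.2f} m^2, time constant ~ {popt[1]:.0f} s")
```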

  10. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

    To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve. Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, ratio of absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CVc), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVc = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction of absorption vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs-time profile. However, only estimation error of k can lead to the Wagner-Nelson estimate of fraction of drug absorbed greater than unity.
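
    For reference, the sketch below implements the standard Wagner-Nelson fraction-absorbed calculation for a one-compartment model with a trapezoidal AUC, and shows how a mis-specified k perturbs the profile; all numbers are synthetic.

```python
import numpy as np

def wagner_nelson(times, conc, k):
    """Wagner-Nelson fraction of drug absorbed for a one-compartment model:
    F(t) = (C(t) + k * AUC_0..t) / (k * AUC_0..inf),
    with AUC_0..inf approximated by AUC_0..tlast + C(tlast)/k."""
    auc_t = np.concatenate(([0.0], np.cumsum(np.diff(times) * (conc[1:] + conc[:-1]) / 2.0)))
    auc_inf = auc_t[-1] + conc[-1] / k          # truncation correction
    return (conc + k * auc_t) / (k * auc_inf)

# Synthetic plasma concentrations (one-compartment, first-order absorption).
t = np.linspace(0, 30, 31)
ka, k_true, dose_factor = 0.6, 0.2, 10.0
c = dose_factor * ka / (ka - k_true) * (np.exp(-k_true * t) - np.exp(-ka * t))

frac = wagner_nelson(t, c, k_true * 1.2)   # a mis-specified k illustrates the bias
print("max estimated fraction absorbed:", frac.max())
```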

  11. Hierarchical Models for Type Ia Supernova Light Curves in the Optical and Near Infrared

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey; Narayan, G.; Kirshner, R. P.

    2011-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova optical and near infrared light curves. Since the near infrared light curves are excellent standard candles and are less sensitive to dust extinction and reddening, the combination of near infrared and optical data better constrains the host galaxy extinction and improves the precision of distance predictions to SN Ia. A hierarchical probabilistic model coherently accounts for multiple random and uncertain effects, including photometric error, intrinsic supernova light curve variations and correlations across phase and wavelength, dust extinction and reddening, peculiar velocity dispersion and distances. An improved BayeSN MCMC code is implemented for computing probabilistic inferences for individual supernovae and the SN Ia and host galaxy dust populations. I use this hierarchical model to analyze nearby Type Ia supernovae with optical and near infrared data from the PAIRITEL, CfA3, and CSP samples and the literature. Using cross-validation to test the robustness of the model predictions, I find that the rms Hubble diagram scatter of predicted distance moduli is 0.11 mag for SN with optical and near infrared data versus 0.15 mag for SN with only optical data. Accounting for the dispersion expected from random peculiar velocities, the rms intrinsic prediction error is 0.08-0.10 mag for SN with both optical and near infrared light curves. I discuss results for the inferred intrinsic correlation structures of the optical-NIR SN Ia light curves and the host galaxy dust distribution captured by the hierarchical model. The continued observation and analysis of Type Ia SN in the optical and near infrared is important for improving their utility as precise and accurate cosmological distance indicators.

  12. A method for optical ground station reduce alignment error in satellite-ground quantum experiments

    NASA Astrophysics Data System (ADS)

    He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei

    2018-03-01

    A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in satellite-ground quantum experiments. Each OGS corrects its pointing direction for satellite trajectory error using its coarse tracking system and the uplink beacon sight; the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis therefore determines whether the beacon can cover the quantum satellite throughout each pass over the OGSs. Unfortunately, when the specifications of the OGSs were tested, because the coarse tracking optical system was a commercial telescope, the position of the target on the coarse CCD shifted by up to 600 μrad as the elevation angle changed. In this paper, a method to reduce the alignment error between the beacon beam and the fine tracking CCD is proposed. First, the OGS fits a curve of target position on the coarse CCD versus elevation angle. Second, the OGS fits a curve of hexapod secondary mirror position versus elevation angle. Third, while tracking the satellite, the fine tracking error is unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, and the positions of the hexapod secondary mirror are simultaneously adjusted using the second calibration curve. Finally, experimental results are presented, showing that the alignment error is less than 50 μrad.

  13. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
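
    Fitting the derived data points to an isotherm model is the second step discussed above. The sketch below fits synthetic data to the single-component Langmuir model as an illustration; the bi-Langmuir and Moreau models would be handled analogously.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    """Single-component Langmuir isotherm: q = qs * b * c / (1 + b * c)."""
    return qs * b * c / (1.0 + b * c)

# Synthetic isotherm data points, as might be obtained from FA, FACP, or PM.
c = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])     # mobile-phase concentration
q = np.array([0.9, 4.0, 7.2, 12.0, 19.5, 25.0, 29.0])   # stationary-phase concentration

(qs, b), _ = curve_fit(langmuir, c, q, p0=[30.0, 0.3])
print(f"fitted qs = {qs:.2f}, b = {b:.3f}")
# Comparing the fitted parameters with those used to generate the breakthrough
# curves isolates the error introduced by the data-derivation step itself.
```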

  14. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimum.

  15. Relations between Response Trajectories on the Continuous Performance Test and Teacher-Rated Problem Behaviors in Preschoolers

    PubMed Central

    Allan, Darcey M.; Lonigan, Christopher J.

    2014-01-01

    Although both the Continuous Performance Test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (Mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An ADHD-rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across four temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to one type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. PMID:25419645

  16. Relations between response trajectories on the continuous performance test and teacher-rated problem behaviors in preschoolers.

    PubMed

    Allan, Darcey M; Lonigan, Christopher J

    2015-06-01

    Although both the continuous performance test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An attention deficit/hyperactivity disorder (ADHD) rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across 4 temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to 1 type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. (c) 2015 APA, all rights reserved.

  17. Stage-Discharge Relations for the Colorado River in Glen, Marble, and Grand Canyons, Arizona, 1990-2005

    USGS Publications Warehouse

    Hazel, Joseph E.; Kaplinski, Matt; Parnell, Rod; Kohl, Keith; Topping, David J.

    2007-01-01

    This report presents stage-discharge relations for 47 discrete locations along the Colorado River, downstream from Glen Canyon Dam. Predicting the river stage that results from changes in flow regime is important for many studies investigating the effects of dam operations on resources in and along the Colorado River. The empirically based stage-discharge relations were developed from water-surface elevation data surveyed at known discharges at all 47 locations. The rating curves accurately predict stage at each location for discharges between 141 cubic meters per second and 1,274 cubic meters per second. The coefficient of determination (R2) of the fit to the data ranged from 0.993 to 1.00. Given the various contributing errors to the method, a conservative error estimate of ±0.05 m was assigned to the rating curves.
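
    A rating curve of this kind can be fit by nonlinear least squares; the sketch below uses a generic power-law stage-discharge form with invented survey data, not the report's actual relations.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(q, h0, a, b):
    """Generic power-law stage-discharge relation: stage = h0 + a * Q**b."""
    return h0 + a * q ** b

# Hypothetical surveyed water-surface elevations (m) at known discharges (m^3/s).
q_obs = np.array([141.0, 250.0, 400.0, 600.0, 850.0, 1100.0, 1274.0])
stage_obs = np.array([100.2, 100.9, 101.7, 102.6, 103.5, 104.2, 104.6])

popt, _ = curve_fit(rating, q_obs, stage_obs, p0=[99.0, 0.1, 0.6])
residuals = stage_obs - rating(q_obs, *popt)
print("fitted parameters:", popt, " max |residual| (m):", np.abs(residuals).max())
```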

  18. Fitting Photometry of Blended Microlensing Events

    NASA Astrophysics Data System (ADS)

    Thomas, Christian L.; Griest, Kim

    2006-03-01

    We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.
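
    The blend-fraction degeneracy can be seen from the standard blended point-lens model, sketched below with illustrative parameters: only a fraction of the baseline flux is magnified, and the blended and unblended curves differ mainly near the peak and in the wings.

```python
import numpy as np

def paczynski_magnification(t, t0, tE, u0):
    """Point-source, point-lens magnification A(u(t))."""
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

def blended_flux(t, t0, tE, u0, f_blend):
    """Observed flux in baseline units when only a fraction f_blend of the
    baseline light is lensed; the remaining (1 - f_blend) is unmagnified blend."""
    return f_blend * paczynski_magnification(t, t0, tE, u0) + (1.0 - f_blend)

t = np.linspace(-50, 50, 201)                      # days, illustrative event
unblended = blended_flux(t, 0.0, 20.0, 0.1, 1.0)
blended = blended_flux(t, 0.0, 20.0, 0.1, 0.6)
print("peak flux, unblended vs blended:", unblended.max(), blended.max())
# The two curves differ mostly near the peak and in the wings, which is why
# follow-up observations at those points help break the blend degeneracy.
```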

  19. Corner smoothing of 2D milling toolpath using b-spline curve by optimizing the contour error and the feedrate

    NASA Astrophysics Data System (ADS)

    Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François

    2018-05-01

    In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality on the part. In order to optimize it, the machining time needs to be as low as possible and the quality needs to meet the requirements. For a 2D milling toolpath defined by sharp corners, the programmed feedrate differs from the reachable feedrate due to the kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath reduces the machining time significantly, but the dimensional accuracy should not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On one hand, maximizing the feedrate will minimize the manufacturing time; on the other hand, the maximum contour error needs to be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp corner smoothing using b-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and the difference between the programmed feedrate and an estimate of the reachable feedrate. The estimate of the reachable feedrate is based on geometrical information. Simulation results are presented in the paper and the machining times are compared in each case.

  20. Spectral Characterizations of the Clouds and the Earth's Radiant Energy System (CERES) Thermistor Bolometers using Fourier Transform Spectrometer (FTS) Techniques

    NASA Technical Reports Server (NTRS)

    Thornhill, K. Lee; Bitting, Herbert; Lee, Robert B., III; Paden, Jack; Pandey, Dhirendra K.; Priestley, Kory J.; Thomas, Susan; Wilson, Robert S.

    1998-01-01

    Fourier Transform Spectrometer (FTS) techniques are being used to characterize the relative spectral response, or sensitivity, of scanning thermistor bolometers in the infrared (IR) region (2 to >=100 micrometers). The bolometers are being used in the Clouds and the Earth's Radiant Energy System (CERES) program. The CERES measurements are designed to provide precise, long term monitoring of the Earth's atmospheric radiation energy budget. The CERES instrument houses three bolometric radiometers: a total wavelength (0.3 to >=150 micrometers) sensor, a shortwave (0.3-5 micrometers) sensor, and an atmospheric window (8-12 micrometers) sensor. Accurate spectral characterization is necessary for determining filtered radiances for longwave radiometric calibrations. The CERES bolometers' spectral responses are measured in the TRW FTS Vacuum Chamber Facility (FTS-VCF), which uses an FTS as the source and a cavity pyroelectric trap detector as the reference. The CERES bolometers and the cavity detector are contained in a vacuum chamber, while the FTS source is housed in a GN2 purged chamber. Due to the thermal time constant of the CERES bolometers, the FTS must be operated in a step mode. Data are acquired in 6 IR spectral bands covering the entire longwave IR region. In this paper, the TRW spectral calibration facility design and data measurement techniques are described. Two approaches are presented which convert the total channel FTS data into the final CERES spectral characterizations, producing the same calibration coefficients (within 0.1 percent). The resulting spectral response curves are shown, along with error sources in the two procedures. Finally, the impact of each spectral response curve on CERES data validation will be examined through analysis of filtered radiance values from various typical scene types.

  1. Measurement properties of the Dizziness Handicap Inventory by cross-sectional and longitudinal designs

    PubMed Central

    2009-01-01

    Background The impact of dizziness on quality of life is often assessed by the Dizziness Handicap Inventory (DHI), which is used as a discriminate and evaluative measure. The aim of the present study was to examine reliability and validity of a translated Norwegian version (DHI-N), also examining responsiveness to important change in the construct being measured. Methods Two samples (n = 92 and n = 27) included participants with dizziness of mainly vestibular origin. A cross-sectional design was used to examine the factor structure (exploratory factor analysis), internal consistency (Cronbach's α), concurrent validity (Pearson's product moment correlation r), and discriminate ability (ROC curve analysis). Longitudinal designs were used to examine test-retest reliability (intraclass correlation coefficient (ICC) statistics, smallest detectable difference (SDD)), and responsiveness (Pearson's product moment correlation, ROC curve analysis; area under the ROC curve (AUC), and minimally important change (MIC)). The DHI scores range from 0 to 100. Results Factor analysis revealed a different factor structure than the original DHI, resulting in dismissal of subscale scores in the DHI-N. Acceptable internal consistency was found for the total scale (α = 0.95). Concurrent correlations between the DHI-N and other related measures were moderate to high, highest with Vertigo Symptom Scale-short form-Norwegian version (r = 0.69), and lowest with preferred gait (r = - 0.36). The DHI-N demonstrated excellent ability to discriminate between participants with and without 'disability', AUC being 0.89 and best cut-off point = 29 points. Satisfactory test-retest reliability was demonstrated, and the change for an individual should be ≥ 20 DHI-N points to exceed measurement error (SDD). Correlations between change scores of DHI-N and other self-report measures of functional health and symptoms were high (r = 0.50 - 0.57). Responsiveness of the DHI-N was excellent, AUC = 0.83, discriminating between self-perceived 'improved' versus 'unchanged' participants. The MIC was identified as 11 DHI-N points. Conclusions The DHI-N total scale demonstrated satisfactory measurement properties. This is the first study that has addressed and demonstrated responsiveness to important change of the DHI, and provided values of SDD and MIC to help interpret change scores. PMID:20025754

  2. USE OF MECHANISTIC DATA TO HELP DEFINE DOSE-RESPONSE CURVES

    EPA Science Inventory

    Use of Mechanistic Data to Help Define Dose-Response Curves

    The cancer risk assessment process described by the U.S. EPA necessitates a description of the dose-response curve for tumors in humans at low (environmental) exposures. This description can either be a default l...

  3. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
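
    As a rough stand-in for the monotone-spline standard curve (not the paper's PCLS implementation), the sketch below fits a shape-preserving monotone interpolant to hypothetical ELISA standards and inverts it to predict a concentration.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical ELISA standard curve: known concentrations and mean assay
# intensities.  PCHIP gives a monotone, shape-preserving interpolant, standing
# in for the penalized constrained least-squares spline described above.
conc = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # ng/mL
intensity = np.array([120, 180, 340, 900, 2100, 3900, 4700])   # arbitrary units

standard_curve = PchipInterpolator(conc, intensity)

# Invert the monotone curve on a dense grid to predict concentration.
grid = np.linspace(conc.min(), conc.max(), 2001)

def predict_concentration(obs_intensity):
    return np.interp(obs_intensity, standard_curve(grid), grid)

print("predicted concentration:", predict_concentration(1500.0))
```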

  4. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
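
    A generic greedy heuristic for Fisher-information D-optimal selection in logistic regression is sketched below; it conveys the idea of choosing validation cases by predictor values only, but it is not the authors' DSCVR algorithm, and the pilot coefficients and data are invented.

```python
import numpy as np

def greedy_d_optimal(X, beta, n_select):
    """Greedy selection of records to validate: at each step, add the record
    whose predictor vector most increases det(Fisher information) of a
    logistic regression evaluated at a preliminary estimate beta."""
    n, p = X.shape
    chosen, info = [], 1e-8 * np.eye(p)          # small ridge keeps det finite early on
    w = 1.0 / (1.0 + np.exp(-X @ beta))
    w = w * (1.0 - w)                            # per-record information weight p(1-p)
    for _ in range(n_select):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            cand = info + w[i] * np.outer(X[i], X[i])
            d = np.linalg.slogdet(cand)[1]       # log|det|, safe against overflow
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
        info += w[best] * np.outer(X[best], X[best])
    return chosen

# Toy example: 200 records, intercept plus 3 predictors, rough pilot beta.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 3))])
beta_pilot = np.array([-2.0, 1.0, 0.5, -0.5])
print(greedy_d_optimal(X, beta_pilot, 10))
```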

  5. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance system more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.

  6. Masking technique for coating thickness control on large and strongly curved aspherical optics.

    PubMed

    Sassolas, B; Flaminio, R; Franc, J; Michel, C; Montorio, J-L; Morgado, N; Pinard, L

    2009-07-01

    We discuss a method to control the coating thickness deposited onto large and strongly curved optics by ion beam sputtering. The technique uses an original design of the mask used to screen part of the sputtered materials. A first multielement mask is calculated from the measured two-dimensional coating thickness distribution. Then, by means of an iterative process, the final mask is designed. By using such a technique, it has been possible to deposit layers of tantalum pentoxide having a high thickness gradient onto a curved substrate 500 mm in diameter. Residual errors in the coating thickness profile are below 0.7%.

  7. Development of Physics-Based Hurricane Wave Response Functions: Application to Selected Sites on the U.S. Gulf Coast

    NASA Astrophysics Data System (ADS)

    McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.

    2013-12-01

    Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess risk of local socio-economic disruption. This study focuses on developing robust, physics based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted as Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints which correct for fully developed sea state were used to limit the wind wave growth. When applied to the region near Gulfport, MS, back prediction of maximum significant wave height yielded root mean square errors between 0.22-0.42 (m) at open coast stations and 0.07-0.30 (m) at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.

  8. Visually Evoked Potential Markers of Concussion History in Patients with Convergence Insufficiency

    PubMed Central

    Poltavski, Dmitri; Lederer, Paul; Cox, Laurie Kopko

    2017-01-01

    ABSTRACT Purpose We investigated whether differences in the pattern visual evoked potentials exist between patients with convergence insufficiency and those with convergence insufficiency and a history of concussion using stimuli designed to differentiate between magnocellular (transient) and parvocellular (sustained) neural pathways. Methods Sustained stimuli included 2-rev/s, 85% contrast checkerboard patterns of 1- and 2-degree check sizes, whereas transient stimuli comprised 4-rev/s, 10% contrast vertical sinusoidal gratings with column width of 0.25 and 0.50 cycles/degree. We tested two models: an a priori clinical model based on an assumption of at least a minimal (beyond instrumentation’s margin of error) 2-millisecond lag of transient response latencies behind sustained response latencies in concussed patients and a statistical model derived from the sample data. Results Both models discriminated between concussed and nonconcussed groups significantly above chance (with 76% and 86% accuracy, respectively). In the statistical model, patients with mean vertical sinusoidal grating response latencies greater than 119 milliseconds to 0.25-cycle/degree stimuli (or mean vertical sinusoidal latencies >113 milliseconds to 0.50-cycle/degree stimuli) and mean vertical sinusoidal grating amplitudes of less than 14.75 mV to 0.50-cycle/degree stimuli were classified as having had a history of concussion. The resultant receiver operating characteristic curve for this model had excellent discrimination between the concussed and nonconcussed (area under the curve = 0.857; P < .01) groups with sensitivity of 0.92 and specificity of 0.80. Conclusions The results suggest a promising electrophysiological approach to identifying individuals with convergence insufficiency and a history of concussion. PMID:28609417

  9. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework could achieve tens of times acceleration in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method could provide fast integral curve retrieval for more complex data, such as unstructured mesh data.

  10. Electromagnetic Induction Spectroscopy for the Detection of Subsurface Targets

    DTIC Science & Technology

    2012-12-01

    ... ROC curves of the proposed method and that of Fails et al. are compared; for the kNN ROC curve, k = 7. ... et al. [6] and Ramachandran et al. [7] both demonstrated success in detecting mines using the k-nearest-neighbor (kNN) algorithm based on the EMI ... error is also included in the feature vector. The kNN labels an unknown target based on the closest targets in a training set. Collins et al. [2] and ...
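
    The kNN labeling step described in the fragment can be illustrated with scikit-learn; the feature vectors below are random stand-ins for EMI-derived features, and k = 7 follows the fragment.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_curve, roc_auc_score

# Toy stand-in for EMI feature vectors (e.g., fitted model parameters plus a
# fitting error term) with binary target/clutter labels.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(1.0, 1.0, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

knn = KNeighborsClassifier(n_neighbors=7).fit(X[::2], y[::2])   # train on half
scores = knn.predict_proba(X[1::2])[:, 1]                       # score the rest
fpr, tpr, _ = roc_curve(y[1::2], scores)
print("AUC:", roc_auc_score(y[1::2], scores))
```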

  11. The insertion torque-depth curve integral as a measure of implant primary stability: An in vitro study on polyurethane foam blocks.

    PubMed

    Di Stefano, Danilo Alessio; Arosio, Paolo; Gastaldi, Giorgio; Gherlone, Enrico

    2017-07-08

    Recent research has shown that dynamic parameters related to insertion energy, that is, the total work needed to place an implant into its site, might convey more reliable information concerning immediate implant primary stability at insertion than the commonly used insertion torque (IT), the reverse torque (RT), or the implant stability quotient (ISQ). Yet knowledge of these dynamic parameters is still limited. The purpose of this in vitro study was to evaluate whether an energy-related parameter, the torque-depth curve integral (I), could be a reliable measure of primary stability. This was done by assessing whether the (I) measurement is operator independent, by investigating its correlation with other known primary stability parameters (IT, RT, and ISQ), by quantifying the (I) average error, and by correlating (I), IT, RT, and ISQ variations with bone density. Five operators placed 200 implants in polyurethane foam blocks of different densities using a micromotor that calculated the (I) during implant placement. Primary implant stability was assessed by measuring the ISQ, IT, and RT. ANOVA tests were used to evaluate whether measurements were operator independent (P>.05 in all cases). A correlation analysis was performed between (I) and IT, ISQ, and RT. The (I) average error was calculated and compared with that of the other parameters by ANOVA. (I)-density, IT-density, ISQ-density, and RT-density plots were drawn, and their slopes were compared by ANCOVA. The (I) measurements were operator independent and correlated with IT, ISQ, and RT. The average error of these parameters was not significantly different (P>.05 in all cases). The (I)-density, IT-density, ISQ-density, and RT-density curves were linear in the 0.16 to 0.49 g/cm³ range, with the (I)-density curves having a significantly greater slope than those of the other parameters (P≤.001 in all cases). The torque-depth curve integral (I) provides a reliable assessment of primary stability and shows a greater sensitivity to density variations than other known primary stability parameters. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
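
    The torque-depth curve integral is a numerical integration of recorded torque over insertion depth. A minimal sketch with a synthetic insertion record follows.

```python
import numpy as np

# Hypothetical insertion record: torque (N*cm) sampled at successive insertion
# depths (mm); the values below are synthetic.
depth = np.linspace(0.0, 10.0, 41)
torque = 5.0 + 3.0 * depth + 0.2 * depth ** 2

# Torque-depth curve integral (trapezoidal rule), proportional to insertion energy.
integral_I = np.sum(0.5 * (torque[1:] + torque[:-1]) * np.diff(depth))
peak_it = torque.max()     # the conventional peak insertion torque, for comparison
print(f"torque-depth integral I = {integral_I:.1f} (N*cm*mm), peak IT = {peak_it:.1f} N*cm")
```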

  12. Measurement of Residual Flexibility for Substructures Having Prominent Flexible Interfaces

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Bookout, Paul S.

    1994-01-01

    Verification of a dynamic model of a constrained structure requires a modal survey test of the physical structure and subsequent modification of the model to obtain the best agreement possible with test data. Constrained-boundary or fixed-base testing has historically been the most common approach for verifying constrained mathematical models, since the boundary conditions of the test article are designed to match the actual constraints in service. However, there are difficulties involved with fixed-base testing, in some cases making the approach impractical. It is not possible to conduct a truly fixed-base test due to coupling between the test article and the fixture. In addition, it is often difficult to accurately simulate the actual boundary constraints, and the cost of designing and constructing the fixture may be prohibitive. For use when fixed-base testing proves impractical or undesirable, alternate free-boundary test methods have been investigated, including the residual flexibility technique. The residual flexibility approach has been treated analytically in considerable detail, but there has been limited experience with the frequency response measurements the method requires, and concern over the accuracy of such measurements is well-justified for a number of reasons. First, residual flexibilities are very small numbers, typically on the order of 1.0E-6 in/lb for translational diagonal terms, and orders of magnitude smaller for off-diagonal values. This poses difficulty in obtaining accurate and noise-free measurements, especially for points removed from the excitation source. A second difficulty encountered in residual measurements lies in obtaining a clean residual function in the process of subtracting synthesized modal data from a measured response function. Inaccuracies occur since modes are not subtracted exactly, but only to the accuracy of the curve fits for each mode; these errors are compounded with increasing distance from the excitation point. In this paper, the residual flexibility method is applied to a simple structure in both test and analysis. Measured and predicted residual functions are compared, and regions of poor data in the measured curves are described. It is found that for accurate residual measurements, frequency response functions having prominent stiffness lines in the acceleration/force format are needed. The lack of such stiffness lines increases measurement errors. Interface drive point frequency response functions for shuttle orbiter payloads exhibit dominant stiffness lines, making the residual test approach a good candidate for payload modal tests when constrained tests are inappropriate. Difficulties in extracting a residual flexibility value from noisy test data are discussed. It is shown that use of a weighted second order least-squares curve fit of the measured residual function allows identification of residual flexibility that compares very well with predictions for the simple structure. This approach also provides an estimate of second order residual mass effects.

  13. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  14. A Simple Experiment Demonstrating the Relationship between Response Curves and Absorption Spectra.

    ERIC Educational Resources Information Center

    Li, Chia-yu

    1984-01-01

    Describes an experiment for recording two individual spectrophotometer response curves. The two curves are directly related to the power of transmitted beams that pass through a solvent and solution. An absorption spectrum of the solution can be constructed from the calculated ratios of the curves as a function of wavelength. (JN)

  15. Modeling and regression analysis of semiochemical dose-response curves of insect antennal reception and behavior

    USDA-ARS?s Scientific Manuscript database

    Dose-response curves with semiochemicals are reported in many articles in insect chemical ecology regarding neurophysiology and behavioral bioassays. Most such curves are shown in figures where the x-axis has order of magnitude increases in dosages versus responses on the y-axis represented by point...

  16. Nonmonotonic dose response curves (NMDRCs) are common after Estrogen or Androgen signaling pathway disruption. Fact or Falderal?

    EPA Science Inventory

    Nonmonotonic dose response curves (NMDRCs) are common after Estrogen or Androgen signaling pathway disruption. Fact or Falderal? Leon Earl Gray Jr, USEPA, ORD, NHEERL, TAD, RTB. RTP, NC, USA The shape of the dose response curve in the low dose region has been debated since th...

  17. Analysis of a range estimator which uses MLS angle measurements

    NASA Technical Reports Server (NTRS)

    Downing, David R.; Linse, Dennis

    1987-01-01

    A concept that uses the azimuth signal from a microwave landing system (MLS) combined with onboard airspeed and heading data to estimate the horizontal range to the runway threshold is investigated. The absolute range error is evaluated for trajectories typical of General Aviation (GA) and commercial airline operations (CAO). These include constant intercept angles for GA and CAO, and complex curved trajectories for CAO. It is found that range errors of 4000 to 6000 feet at the entry of MLS coverage which then reduce to 1000-foot errors at runway centerline intercept are possible for GA operations. For CAO, errors at entry into MLS coverage of 2000 feet which reduce to 300 feet at runway centerline interception are possible.

  18. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

    Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.

  19. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  20. 1996-2007 Interannual Spatio-Temporal Variability in Snowmelt in Two Montane Watersheds

    NASA Astrophysics Data System (ADS)

    Jepsen, S. M.; Molotch, N. P.; Rittger, K. E.

    2009-12-01

    Snowmelt is a primary water source for ecosystems within, and urban/agricultural centers near, mountain regions. Stream chemistry from montane catchments is controlled by the flowpaths of water from snowmelt and the timing and duration of snow coverage. A process-level understanding of the variability in these processes requires an understanding of the effect of changing climate and anthropogenic loading on spatio-temporal snowmelt patterns. With this as our objective, we are applying a snow reconstruction model to two well-studied montane watersheds, Tokopah Basin (TOK), California and Green Lakes Valley (GLV), Colorado, to examine interannual variability in the timing and location of snowmelt in response to variable climate conditions during the period from 1996 to 2007. The reconstruction model back solves for snowmelt by combining surface energy fluxes, inferred from meteorological data, with sequences of melt season snow images derived from satellite data (i.e., snowmelt depletion curves). Preliminary model results for 2002 were tested against measured snow water equivalent (SWE) and hydrograph data for the two watersheds. The computed maximum SWE values averaged over TOK and GLV were 94 cm (~+17% error) and 50.2 cm (~+1% error), respectively. We present an analysis of interannual variability in these errors, in addition to reconstructed snowmelt maps over different land cover types under changing climate conditions between 1996-2007, focusing on how these errors vary with interannual variations in climate.

  1. Non-rigid point set registration of curves: registration of the superficial vessel centerlines of the brain

    NASA Astrophysics Data System (ADS)

    Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan

    2016-03-01

    In this study we present a non-rigid point set registration for 3D curves (composed of sets of 3D points). The method was evaluated in the task of registration of 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of the Coherent Point Drift (CPD) and the Thin-Plate Spline (TPS) semilandmarks. The CPD is used to perform the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation where a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space, by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
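
    The error measurement itself reduces to comparing the two transformations point by point. The sketch below, with hypothetical transformations standing in for T1 and T2, computes the root mean square and mean errors in the deformed space.

      import numpy as np

      def registration_errors(points, T1, T2):
          # Distances between the applied (T1) and recovered (T2) deformations,
          # evaluated at the original centerline points (units follow the input).
          d = np.linalg.norm(T1(points) - T2(points), axis=1)
          return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

      # Hypothetical check: a known translation vs. a slightly perturbed recovery.
      pts = np.random.default_rng(0).uniform(-30.0, 30.0, size=(500, 3))
      T1 = lambda p: p + np.array([2.0, -1.0, 0.5])
      T2 = lambda p: p + np.array([2.1, -0.9, 0.45])
      rmse, mean_err = registration_errors(pts, T1, T2)
      print(f"RMSE = {rmse:.3f}, mean error = {mean_err:.3f}")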

  2. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced by 50%.
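
    A minimal sketch of the final interpolation step is given below: single-point errors determined at a few positions are interpolated with a cubic spline to form an error compensation curve, which is then subtracted from subsequent readings. The positions and error values are hypothetical.

      import numpy as np
      from scipy.interpolate import CubicSpline

      # Hypothetical calibration results: axis position (mm) and measured error (um).
      positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
      errors_um = np.array([0.0, 0.8, 1.5, 1.1, 0.4, -0.3])

      compensation = CubicSpline(positions, errors_um)   # error compensation curve

      def compensate(reading_mm):
          # Subtract the interpolated axis error (converted from um to mm).
          return reading_mm - compensation(reading_mm) * 1e-3

      print(f"{compensate(250.0):.6f} mm")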

  3. Response surface models for effects of temperature and previous growth sodium chloride on growth kinetics of Salmonella typhimurium on cooked chicken breast.

    PubMed

    Oscar, T P

    1999-12-01

    Response surface models were developed and validated for effects of temperature (10 to 40 degrees C) and previous growth NaCl (0.5 to 4.5%) on lag time (lambda) and specific growth rate (mu) of Salmonella Typhimurium on cooked chicken breast. Growth curves for model development (n = 55) and model validation (n = 16) were fit to a two-phase linear growth model to obtain lambda and mu of Salmonella Typhimurium on cooked chicken breast. Response surface models for natural logarithm transformations of lambda and mu as a function of temperature and previous growth NaCl were obtained by regression analysis. Both lambda and mu of Salmonella Typhimurium were affected (P < 0.0001) by temperature but not by previous growth NaCl. Models were validated against data not used in their development. Mean absolute relative error of predictions (model accuracy) was 26.6% for lambda and 15.4% for mu. Median relative error of predictions (model bias) was 0.9% for lambda and 5.2% for mu. Results indicated that the models developed provided reliable predictions of lambda and mu of Salmonella Typhimurium on cooked chicken breast within the matrix of conditions modeled. In addition, results indicated that previous growth NaCl (0.5 to 4.5%) was not a major factor affecting subsequent growth kinetics of Salmonella Typhimurium on cooked chicken breast. Thus, inclusion of previous growth NaCl in predictive models may not significantly improve our ability to predict growth of Salmonella spp. on food subjected to temperature abuse.
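
    For reference, the two validation statistics quoted above can be computed as in the short sketch below; the observed and predicted values are hypothetical placeholders, and the exact definition of relative error used in the original study may differ slightly.

      import numpy as np

      def accuracy_and_bias(observed, predicted):
          # Mean absolute relative error (accuracy) and median relative error (bias),
          # both expressed in percent.
          rel = (np.asarray(predicted) - np.asarray(observed)) / np.asarray(observed)
          return 100.0 * np.mean(np.abs(rel)), 100.0 * np.median(rel)

      # Hypothetical validation set of lag times (h): observed vs. model-predicted.
      obs = np.array([3.2, 5.1, 8.0, 12.5, 20.0])
      pred = np.array([3.6, 4.7, 9.1, 11.8, 23.5])
      acc, bias = accuracy_and_bias(obs, pred)
      print(f"accuracy {acc:.1f}%, bias {bias:.1f}%")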

  4. The shape of the glucose response curve during an oral glucose tolerance test heralds biomarkers of type 2 diabetes risk in obese youth

    USDA-ARS?s Scientific Manuscript database

    The shape of the glucose response curve during an oral glucose tolerance test (OGTT), monophasic versus biphasic, identifies physiologically distinct groups of individuals with differences in insulin secretion and sensitivity. We aimed to verify the value of the OGTT-glucose response curve against m...

  5. An efficient RFID authentication protocol to enhance patient medication safety using elliptic curve cryptography.

    PubMed

    Zhang, Zezhong; Qi, Qingqing

    2014-05-01

    Medication errors are dangerous because they can cause serious, even fatal, harm to patients. In order to reduce medication errors, automated patient medication systems using Radio Frequency Identification (RFID) technology have been used in many hospitals. The data transmitted in those medication systems is very important and sensitive. In the past decade, many security protocols have been proposed to ensure its secure transmission, and these have attracted wide attention. Because it provides mutual authentication between the medication server and the tag, the RFID authentication protocol is considered one of the most important security protocols in those systems. In this paper, we propose an RFID authentication protocol to enhance patient medication safety using elliptic curve cryptography (ECC). The analysis shows the proposed protocol can overcome security weaknesses in previous protocols and has better performance. Therefore, the proposed protocol is very suitable for automated patient medication systems.

  6. Absolute Parameters for the F-type Eclipsing Binary BW Aquarii

    NASA Astrophysics Data System (ADS)

    Maxted, P. F. L.

    2018-05-01

    BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.

  7. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.

  8. Determination of effective complex refractive index of a turbid liquid with surface plasmon resonance phase detection.

    PubMed

    Yingying, Zhang; Jiancheng, Lai; Cheng, Yin; Zhenhua, Li

    2009-03-01

    The dependence of the surface plasmon resonance (SPR) phase difference curve on the complex refractive index of a sample in Kretschmann configuration is discussed comprehensively, based on which a new method is proposed to measure the complex refractive index of turbid liquid. A corresponding experimental setup was constructed to measure the SPR phase difference curve, and the complex refractive index of turbid liquid was determined. By using the setup, the complex refractive indices of Intralipid solutions with concentrations of 5%, 10%, 15%, and 20% are obtained to be 1.3377+0.0005i, 1.3427+0.0028i, 1.3476+0.0034i, and 1.3496+0.0038i, respectively. Furthermore, the error analysis indicates that the root-mean-square errors of both the real and the imaginary parts of the measured complex refractive index are less than 5×10^-5.

  9. A mathematical function for the description of nutrient-response curve

    PubMed Central

    Ahmadi, Hamed

    2017-01-01

    Several mathematical equations have been proposed for modeling the nutrient-response curve in animals and humans, justified on goodness of fit and/or on the biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for describing nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way to describe nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach for modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271

  10. SU-G-BRB-11: On the Sensitivity of An EPID-Based 3D Dose Verification System to Detect Delivery Errors in VMAT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B

    2016-06-15

    Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1mm, ±2mm and ±5mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operator characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e., D50(EPID) - D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC=1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 – 0.93] and [0.48 – 0.92] respectively using γmean, [0.57 – 0.79] and [0.46 – 0.85] using γ1%, and [0.61 – 0.77] and [0.48 – 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
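
    The AUC computation itself is straightforward once a discriminating statistic has been chosen. The sketch below scores hypothetical γmean values for unmodified versus error-simulated deliveries; the numbers are invented for illustration and scikit-learn is assumed to be available.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      # Hypothetical gamma-mean values: label 0 = unmodified plan, 1 = simulated error.
      gamma_ok  = np.array([0.35, 0.42, 0.38, 0.40, 0.45, 0.37])
      gamma_err = np.array([0.48, 0.55, 0.43, 0.60, 0.52, 0.47])

      scores = np.concatenate([gamma_ok, gamma_err])
      labels = np.concatenate([np.zeros_like(gamma_ok), np.ones_like(gamma_err)])

      auc = roc_auc_score(labels, scores)   # area under the ROC curve; 1.0 is ideal
      print(f"AUC = {auc:.2f}")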

  11. Learning curves and impact of previous operative experience on performance on a virtual reality simulator to test laparoscopic surgical skills.

    PubMed

    Grantcharov, Teodor P; Bardram, Linda; Funch-Jensen, Peter; Rosenberg, Jacob

    2003-02-01

    The study was carried out to analyze the learning rate for laparoscopic skills on a virtual reality training system and to establish whether the simulator was able to differentiate between surgeons with different laparoscopic experience. Forty-one surgeons were divided into three groups according to their experience in laparoscopic surgery: masters (group 1, performed more than 100 cholecystectomies), intermediates (group 2, between 15 and 80 cholecystectomies), and beginners (group 3, fewer than 10 cholecystectomies) were included in the study. The participants were tested on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) 10 consecutive times within a 1-month period. Assessment of laparoscopic skills included time, errors, and economy of hand movement, measured by the simulator. The learning curves regarding time reached plateau after the second repetition for group 1, the fifth repetition for group 2, and the seventh repetition for group 3 (Friedman's tests P <0.05). Experienced surgeons did not improve their error or economy of movement scores (Friedman's tests, P >0.2) indicating the absence of a learning curve for these parameters. Group 2 error scores reached plateau after the first repetition, and group 3 after the fifth repetition. Group 2 improved their economy of movement score up to the third repetition and group 3 up to the sixth repetition (Friedman's tests, P <0.05). Experienced surgeons (group 1) demonstrated best performance parameters, followed by group 2 and group 3 (Mann-Whitney test P <0.05). Different learning curves existed for surgeons with different laparoscopic background. The familiarization rate on the simulator was proportional to the operative experience of the surgeons. Experienced surgeons demonstrated best laparoscopic performance on the simulator, followed by those with intermediate experience and the beginners. These differences indicate that the scoring system of MIST-VR is sensitive and specific to measuring skills relevant for laparoscopic surgery.

  12. Preliminary calibration of the ACP safeguards neutron counter

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.

    2007-10-01

    The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform a material control and accounting (MC&A) for its ACP materials for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code and the results for the ft8 cap multiplicity tally option with the values of ɛ, fd, and ft measured with a strong source most closely match the measurement results to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point model equation relationship between 244Cm and 252Cf and the calibration coefficient for the non-multiplying sample is 2.78×10^5 (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards for a known 244Cm mass will be performed in the near future.

  13. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
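
    The convolution-based NLLS fitting can be sketched with a deliberately simplified one-compartment (Tofts-type) model in place of the full 2CXM; the time axis, arterial input function and noise level below are arbitrary illustrative choices, not values from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0.0, 5.0, 151)          # time (min)
      dt = t[1] - t[0]
      aif = 5.0 * t * np.exp(-2.0 * t)        # placeholder arterial input function (mM)

      def tissue_curve(t, ktrans, ve):
          # Tissue concentration as discrete convolution of the AIF with an
          # exponential impulse response (a stand-in for the 2CXM solution).
          irf = ktrans * np.exp(-(ktrans / ve) * t)
          return np.convolve(aif, irf)[: t.size] * dt

      truth = tissue_curve(t, 0.25, 0.30)
      noisy = truth + np.random.default_rng(1).normal(0.0, 0.01, t.size)
      popt, pcov = curve_fit(tissue_curve, t, noisy, p0=[0.1, 0.2],
                             bounds=(1e-6, [5.0, 1.0]))
      print("Ktrans, ve =", np.round(popt, 3),
            " 1-sigma =", np.round(np.sqrt(np.diag(pcov)), 3))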

  14. Post-error response inhibition in high math-anxious individuals: Evidence from a multi-digit addition task.

    PubMed

    Núñez-Peña, M Isabel; Tubau, Elisabet; Suárez-Pellicioni, Macarena

    2017-06-01

    The aim of the study was to investigate how high math-anxious (HMA) individuals react to errors in an arithmetic task. Twenty HMA and 19 low math-anxious (LMA) individuals were presented with a multi-digit addition verification task and were given response feedback. Post-error adjustment measures (response time and accuracy) were analyzed in order to study differences between groups when faced with errors in an arithmetical task. Results showed that both HMA and LMA individuals were slower to respond following an error than following a correct answer. However, post-error accuracy effects emerged only for the HMA group, showing that they were also less accurate after having committed an error than after giving the right answer. Importantly, these differences were observed only when individuals needed to repeat the same response given in the previous trial. These results suggest that, for HMA individuals, errors caused reactive inhibition of the erroneous response, facilitating performance if the next problem required the alternative response but hampering it if the response was the same. This stronger reaction to errors could be a factor contributing to the difficulties that HMA individuals experience in learning math and doing math tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
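
    A compact sketch of the propagation-of-error idea follows, using a straight-line standard curve in log-log space as a simplification of the usual four-parameter logistic fit; the calibration data, signal value and its standard deviation are hypothetical.

      import numpy as np

      # Hypothetical standard curve: known concentrations (pg/mL) and mean signals.
      conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
      signal = np.array([120.0, 340.0, 1050.0, 3020.0, 9800.0])

      # Straight-line fit in log-log space; cov holds the (slope, intercept) covariance.
      coef, cov = np.polyfit(np.log(conc), np.log(signal), deg=1, cov=True)
      m, b = coef

      def predict_conc(y, y_sd):
          # Invert the calibration and propagate error (first-order delta method):
          # var(lx) = (dlx/dy)^2 var(y) + J Cov(m, b) J^T, with lx = (ln y - b) / m.
          ly = np.log(y)
          lx = (ly - b) / m
          dlx_dy = 1.0 / (m * y)
          J = np.array([-(ly - b) / m ** 2, -1.0 / m])
          var_lx = dlx_dy ** 2 * y_sd ** 2 + J @ cov @ J
          x = np.exp(lx)
          return x, x * np.sqrt(var_lx)        # concentration and its propagated SD

      print(predict_conc(2000.0, 60.0))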

  16. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a kind of high-precision angle-position sensor. It has important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterized by its error, and measuring this error is an important problem in both the production and the application of inductosyns. At present, the error is mainly obtained by manual measurement, an approach whose disadvantages cannot be ignored: high labour intensity for the operator, errors that are easily introduced, poor repeatability, and so on. In order to solve these problems, a new automatic measurement method is put forward in this paper based on a high-precision optical dividing head. An error signal can be obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. Measuring and calculating errors caused by human factors are overcome by this method, and it makes the measuring process quicker, more exact and more reliable. Experiment proves that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak value).

  17. Your Health Care May Kill You: Medical Errors.

    PubMed

    Anderson, James G; Abrahamson, Kathleen

    2017-01-01

    Recent studies of medical errors have estimated errors may account for as many as 251,000 deaths annually in the United States (U.S.), making medical errors the third leading cause of death. Error rates are significantly higher in the U.S. than in other developed countries such as Canada, Australia, New Zealand, Germany and the United Kingdom (U.K.). At the same time, less than 10 percent of medical errors are reported. This study describes the results of an investigation of the effectiveness of the implementation of the MEDMARX Medication Error Reporting system in 25 hospitals in Pennsylvania. Data were collected on 17,000 errors reported by participating hospitals over a 12-month period. Latent growth curve analysis revealed that reporting of errors by health care providers increased significantly over the four quarters. At the same time, the proportion of corrective actions taken by the hospitals remained relatively constant over the 12 months. A simulation model was constructed to examine the effect of potential organizational changes resulting from error reporting. Four interventions were simulated. The results suggest that improving patient safety requires more than voluntary reporting. Organizational changes need to be implemented and institutionalized as well.

  18. The Systematics of Strong Lens Modeling Quantified: The Effects of Constraint Selection and Redshift Information on Magnification, Mass, and Multiple Image Predictability

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren

    2016-11-01

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  19. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Energy dependence of lithium fluoride dosemeter for high energy electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoku, S.; Sunayashiki, T.; Takeoka, S.

    1973-11-01

    A lithium fluoride and a Fricke dosemeter have been exposed simultaneously to 60Co gamma-rays and 10, 20, and 30 MeV electrons to study the energy dependence of the lithium fluoride dosemeter for high-energy electrons, with particular reference to possible significant reductions in the sensitivity of LiF phosphors for electrons as compared with 60Co gamma-rays. In the present study, the direct comparison excluded errors resulting from uncertainties about ion recombination and conversion factors from roentgens to rads for ionization chambers. The dosemeters were exposed to approximately 5000 rads of each radiation at the appropriate peak depth in a water phantom. Corrections for the supra-linear response for LiF were made using a dose response curve for 60Co gamma-rays. The three types of LiF phosphor examined did not exhibit any energy dependence for electrons compared with 60Co gamma-rays, within the statistical uncertainty (~3%) of the experiment. (UK)

  1. The Neural Basis of Error Detection: Conflict Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Yeung, Nick; Botvinick, Matthew M.; Cohen, Jonathan D.

    2004-01-01

    According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an…

  2. A Mechanism for Error Detection in Speeded Response Time Tasks

    ERIC Educational Resources Information Center

    Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.

    2005-01-01

    The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…

  3. Nonlinear optical imaging for sensitive detection of crystals in bulk amorphous powders.

    PubMed

    Kestur, Umesh S; Wanapun, Duangporn; Toth, Scott J; Wegiel, Lindsay A; Simpson, Garth J; Taylor, Lynne S

    2012-11-01

    The primary aim of this study was to evaluate the utility of second-order nonlinear imaging of chiral crystals (SONICC) to quantify crystallinity in drug-polymer blends, including solid dispersions. Second harmonic generation (SHG) can potentially exhibit scaling with crystallinity between linear and quadratic depending on the nature of the source, and thus, it is important to determine the response of pharmaceutical powders. Physical mixtures containing different proportions of crystalline naproxen and hydroxyl propyl methyl cellulose acetate succinate (HPMCAS) were prepared by blending and a dispersion was produced by solvent evaporation. A custom-built SONICC instrument was used to characterize the SHG intensity as a function of the crystalline drug fraction in the various samples. Powder X-ray diffraction (PXRD) and Raman spectroscopy were used as complementary methods known to exhibit linear scaling. SONICC was able to detect crystalline drug even in the presence of 99.9 wt % HPMCAS in the binary mixtures. The calibration curve revealed a linear dynamic range with an R^2 value of 0.99 spanning the range from 0.1 to 100 wt % naproxen with a root mean square error of prediction of 2.7%. Using the calibration curve, the errors in the validation samples were in the range of 5%-10%. Analysis of a 75 wt % HPMCAS-naproxen solid dispersion with SONICC revealed the presence of crystallites at an earlier time point than could be detected with PXRD and Raman spectroscopy. In addition, results from the crystallization kinetics experiment using SONICC were in good agreement with Raman spectroscopy and PXRD. In conclusion, SONICC has been found to be a sensitive technique for detecting low levels (0.1% or lower) of crystallinity, even in the presence of large quantities of a polymer. Copyright © 2012 Wiley-Liss, Inc.

  4. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.

  5. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.

  6. Rate Constants for Fine-Structure Excitations in O - H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman

    2017-04-01

    Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
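
    The sensitivity analysis rests on a standard Gaussian Process regression from scaled potential curves to rate constants. The sketch below assumes scikit-learn and uses an invented two-dimensional toy input (two potential scaling factors) purely to show the mechanics of fitting and extracting predictive error bars; the training values are not real data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Hypothetical training set: scaling factors applied to two adiabatic
      # potentials, mapped to the resulting (normalized) rate constant.
      X = np.array([[1.00, 1.00], [0.90, 1.00], [1.10, 1.00],
                    [1.00, 0.90], [1.00, 1.10], [0.95, 1.05]])
      y = np.array([1.00, 0.82, 1.21, 0.95, 1.06, 0.93])

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.1),
                                    normalize_y=True)
      gp.fit(X, y)

      # Predictive mean and standard deviation over +/-20% variation of potential 1;
      # the std serves as an error bar, and the fitted length scales indicate sensitivity.
      grid = np.column_stack([np.linspace(0.8, 1.2, 5), np.ones(5)])
      mean, std = gp.predict(grid, return_std=True)
      print(np.round(mean, 3), np.round(std, 3))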

  7. Exploring the link between environmental pollution and economic growth in EU-28 countries: Is there an environmental Kuznets curve?

    PubMed Central

    Armeanu, Daniel; Vintilă, Georgeta; Gherghina, Ştefan Cristian; Drăgoi, Mihaela Cristina; Teodor, Cristian

    2018-01-01

    This study examines the Environmental Kuznets Curve hypothesis (EKC), considering the primary energy consumption among other country-specific variables, for a panel of the EU-28 countries during the period 1990–2014. By estimating pooled OLS regressions with Driscoll-Kraay standard errors in order to account for cross-sectional dependence, the results confirm the EKC hypothesis in the case of emissions of sulfur oxides and emissions of non-methane volatile organic compounds. In addition to pooled estimations, the output of fixed-effects regressions with Driscoll-Kraay standard errors support the EKC hypothesis for greenhouse gas emissions, greenhouse gas emissions intensity of energy consumption, emissions of nitrogen oxides, emissions of non-methane volatile organic compounds and emissions of ammonia. Additionally, the empirical findings from panel vector error correction model reveal a short-run unidirectional causality from GDP per capita growth to greenhouse gas emissions, as well as a bidirectional causal link between primary energy consumption and greenhouse gas emissions. Furthermore, since there occurred no causal link between economic growth and primary energy consumption, the neo-classical view was confirmed, namely the neutrality hypothesis. PMID:29742169

  8. Exploring the link between environmental pollution and economic growth in EU-28 countries: Is there an environmental Kuznets curve?

    PubMed

    Armeanu, Daniel; Vintilă, Georgeta; Andrei, Jean Vasile; Gherghina, Ştefan Cristian; Drăgoi, Mihaela Cristina; Teodor, Cristian

    2018-01-01

    This study examines the Environmental Kuznets Curve hypothesis (EKC), considering the primary energy consumption among other country-specific variables, for a panel of the EU-28 countries during the period 1990-2014. By estimating pooled OLS regressions with Driscoll-Kraay standard errors in order to account for cross-sectional dependence, the results confirm the EKC hypothesis in the case of emissions of sulfur oxides and emissions of non-methane volatile organic compounds. In addition to pooled estimations, the output of fixed-effects regressions with Driscoll-Kraay standard errors support the EKC hypothesis for greenhouse gas emissions, greenhouse gas emissions intensity of energy consumption, emissions of nitrogen oxides, emissions of non-methane volatile organic compounds and emissions of ammonia. Additionally, the empirical findings from panel vector error correction model reveal a short-run unidirectional causality from GDP per capita growth to greenhouse gas emissions, as well as a bidirectional causal link between primary energy consumption and greenhouse gas emissions. Furthermore, since there occurred no causal link between economic growth and primary energy consumption, the neo-classical view was confirmed, namely the neutrality hypothesis.

  9. Measurement error of Young’s modulus considering the gravity and thermal expansion of thin specimens for in situ tensile testing

    NASA Astrophysics Data System (ADS)

    Ma, Zhichao; Zhao, Hongwei; Ren, Luquan

    2016-06-01

    Most miniature in situ tensile devices compatible with scanning/transmission electron microscopes or optical microscopes adopt a horizontal layout. In order to analyze and calculate the measurement error of the tensile Young’s modulus, the effects of gravity and temperature changes, which would respectively lead to and intensify the bending deformation of thin specimens, are considered as influencing factors. On the basis of a decomposition method of static indeterminacy, equations of simplified deflection curves are obtained and, accordingly, the actual gage length is confirmed. By comparing the effects of uniaxial tensile load on the change of the deflection curve with gravity, the relation between the actual and directly measured tensile Young’s modulus is obtained. Furthermore, the quantitative effects of ideal gage length l_o, temperature change ΔT and the density ρ of the specimen on the modulus difference and modulus ratio are calculated. Specimens with larger l_o and ρ present more obvious measurement errors for Young’s modulus, but the effect of ΔT is not significant. The calculation method of Young’s modulus is particularly suitable for thin specimens.

  10. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

    A model of the average capacity of an optical wireless communication link with pointing errors for ground-to-train communication along a curved track is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. The strength of atmospheric turbulence, the variance of pointing errors, and the covered track length need to be reduced to obtain a larger average link capacity, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. We can increase the transmit aperture to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander accordingly. When the system adopts automatic beam tracking at a receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches a maximum value. The impact of variation in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.

  11. A New Model Based on Adaptation of the External Loop to Compensate the Hysteresis of Tactile Sensors

    PubMed Central

    Sánchez-Durán, José A.; Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Castellanos-Ramos, Julián; Hidalgo-López, José A.

    2015-01-01

    This paper presents a novel method to compensate for hysteresis nonlinearities observed in the response of a tactile sensor. The External Loop Adaptation Method (ELAM) performs a piecewise linear mapping of the experimentally measured external curves of the hysteresis loop to obtain all possible internal cycles. The optimal division of the input interval where the curve is approximated is provided by the error minimization algorithm. This process is carried out off line and provides parameters to compute the split point in real time. A different linear transformation is then performed at the left and right of this point and a more precise fitting is achieved. The models obtained with the ELAM method are compared with those obtained from three other approaches. The results show that the ELAM method achieves a more accurate fitting. Moreover, the involved mathematical operations are simpler and therefore easier to implement in devices such as Field Programmable Gate Array (FPGAs) for real time applications. Furthermore, the method needs to identify fewer parameters and requires no previous selection process of operators or functions. Finally, the method can be applied to other sensors or actuators with complex hysteresis loop shapes. PMID:26501279
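
    As a much reduced sketch of the piecewise-linear mapping idea (not the ELAM algorithm itself), the code below inverts one external branch of a hypothetical hysteresis loop with a different linear map on each side of a split point; the branch shapes and the split location are invented for illustration.

      import numpy as np

      # Hypothetical external (major-loop) branches: applied force (N) vs. raw output.
      force = np.linspace(0.0, 10.0, 11)
      out_load = force ** 1.2
      out_unload = force ** 1.2 + 1.5 * np.sin(np.pi * force / 10.0)

      def estimate_force(raw, loading, split=5.0):
          # Piecewise-linear inversion of one external branch, using a separate
          # linear mapping to the left and right of the split point (here 5 N).
          branch = out_load if loading else out_unload
          seg = force <= split if raw <= np.interp(split, force, branch) else force >= split
          return float(np.interp(raw, branch[seg], force[seg]))

      print(estimate_force(6.0, loading=True), estimate_force(6.0, loading=False))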

  12. An Accurately Controlled Antagonistic Shape Memory Alloy Actuator with Self-Sensing

    PubMed Central

    Wang, Tian-Miao; Shi, Zhen-Yun; Liu, Da; Ma, Chen; Zhang, Zhen-Hua

    2012-01-01

    With the progress of miniaturization, shape memory alloy (SMA) actuators exhibit high energy density, self-sensing ability and ease of fabrication, which make them well suited for practical applications. This paper presents a self-sensing controlled actuator drive that was designed using antagonistic pairs of SMA wires. Under a certain pre-strain and duty cycle, the stress between the two wires becomes constant. Meanwhile, the strain-to-resistance curve can minimize the hysteresis gap between the heating and the cooling paths. The curves of both wires are then modeled by fitting polynomials such that the measured resistance can be used directly to determine the difference between the testing values and the target strain. The hysteresis model of strain to duty-cycle difference has been used as compensation. Accurate control is demonstrated through step response and sinusoidal tracking. The experimental results show that, under a combination control program, the root-mean-square error can be reduced to 1.093%. The limited frequency bandwidth is estimated to be 0.15 Hz. Two sets of instruments with three degrees of freedom are illustrated to show how this type of actuator could potentially be implemented. PMID:22969368

  13. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

    A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits and smoothly varied therebetween as the error signal approaches the preselected limits.

  14. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    PubMed

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.

  15. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    PubMed Central

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2015-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058

  16. A climate index indicative of cloudiness derived from satellite infrared sounder data

    NASA Technical Reports Server (NTRS)

    Abel, M. D.; Cox, S. K.

    1981-01-01

    In many current studies conducted to enhance the usefulness of meteorological satellite radiance data, one common objective is to infer conventional weather variables. The present investigation, on the other hand, is mainly concerned with the efficient retrieval (minimization of errors) of a nonstandard atmospheric descriptor. The atmosphere's Vertical Infrared Radiative Emitting Structure (VIRES) is retrieved. VIRES is described by the broadband infrared weighting function curve. The shapes of these weighting curves are primarily a function of the three-dimensional cloud structure. The weighting curves are retrieved by a method which uses satellite spectral radiance data. The basic theory involved in the VIRES retrieval procedure parallels the technique used to retrieve temperature soundings.

  17. Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2011-01-01

    Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…

  18. Systematic analysis of the scatter environment in clinical intra-operative high dose rate (IOHDR) brachytherapy

    NASA Astrophysics Data System (ADS)

    Oh, Moonseong

    Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. In intra-operative high dose rate brachytherapy (IOHDR), where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in serious underdosage during treatment. Therefore, the physical processes, such as the photoelectric effect and Rayleigh and Compton scattering, that contribute to dosimetric errors have to be fully analyzed and documented so that treatment can be delivered more accurately to patients undergoing IOHDR procedures. Monte Carlo simulation results showed that the Compton scattering effect is about 40 times more probable than the photoelectric effect for the treated areas of single source, 4 x 4, and 2 x 4 cm2. Also, the dose variations with and without the photoelectric effect were 0.3-0.7%, which are within the uncertainty in Monte Carlo simulations. Also, Monte Carlo simulation studies were done to verify the following experimental results for quantification of dosimetric errors in clinical IOHDR brachytherapy. The first experimental study was performed to quantify the inaccuracy in clinical dose delivery due to the incomplete scatter conditions inherent in IOHDR brachytherapy. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm2), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. The study was extended to confirm the underdosage for various shapes of treated areas (especially irregular shapes), as encountered in clinical cases. Treatment plans of 10 patients previously treated at Roswell Park Cancer Institute in Buffalo, which had irregular shapes of treated areas, were used. In IOHDR brachytherapy, a 2-dimensional (2-D) planar geometry is typically used without considering the curved shape of target surfaces. In clinical cases, this assumption of planar geometry may cause serious dose delivery errors to target volumes. The second study was performed to investigate the dose errors to curved surfaces. Seven rectangular shaped plans (five for 1.0 cm and two for 0.5 cm prescription depth) and archived irregular shaped plans of 2 patients were analyzed. Cylindrical phantoms with six radii (ranging from 1.35 to 12.5 cm) were used to simulate the treatment planning geometries, which were calculated in 2-D plans. Actual doses delivered to prescription points were overestimated by up to 15% on the concave side of curved applicators for all cylindrical phantoms with 1.0 cm prescription depth. Also, delivered doses decreased by up to 10% on the convex side of curved applicators for small treated areas (≤ 5 catheters), but, interestingly, no such dependence was seen for large treated areas. Our measurements have shown inaccuracy in dose delivery when the original planar treatment plan was delivered in a curved applicator setting. Dose errors arising due to tumor curvature may be significant in a clinical set up and merit attention during planning.

  19. Development of a Precise Polarization Modulator for UV Spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Ishikawa, S.; Shimizu, T.; Kano, R.; Bando, T.; Ishikawa, R.; Giono, G.; Tsuneta, S.; Nakayama, S.; Tajima, T.

    2015-10-01

    We developed a polarization modulation unit (PMU) to rotate a waveplate continuously in order to observe solar magnetic fields by spectropolarimetry. The non-uniformity of the PMU rotation may cause errors in the measurement of the degree of linear polarization (scale error) and its angle (crosstalk between Stokes-Q and -U), although it does not cause an artificial linear polarization signal (spurious polarization). We rotated a waveplate with the PMU to obtain a polarization modulation curve and estimated the scale error and crosstalk caused by the rotation non-uniformity. The estimated scale error and crosstalk were < 0.01% for both. This PMU will be used as a waveplate motor for the Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP) rocket experiment. We confirm that the PMU performs and functions sufficiently well for CLASP.

  20. A comparative study of electric load curve changes in an urban low-voltage substation in Spain during the economic crisis (2008-2013).

    PubMed

    Lara-Santillán, Pedro M; Mendoza-Villena, Montserrat; Fernández-Jiménez, L Alfredo; Mañana-Canteli, Mario

    2014-01-01

    This paper presents a comparative study of the electricity consumption (EC) in an urban low-voltage substation before and during the economic crisis (2008-2013). This low-voltage substation supplies electric power to nearly 400 users. The EC was measured for an 11-year period (2002-2012) with a sampling time of 1 minute. The study described in the paper consists of detecting the changes produced in the load curves of this substation over time due to changes in consumer behaviour. The EC was compared using representative curves per time period (precrisis and crisis). These representative curves were obtained after a computational process, which was based on a search for days with curves similar to that of a given (base) date. This similarity was assessed by proximity on the calendar, day of the week, daylight time, and outdoor temperature. The last selection parameter was the error between the nearest neighbour curves and the base date curve. The obtained representative curves were linearized to determine changes in their structure (maximum and minimum consumption values, duration of the daily time slot, etc.). The results primarily indicate an increase in the EC in the night slot during the summer months in the crisis period.

  1. The Calibration of the Slotted Section for Precision Microwave Measurements

    DTIC Science & Technology

    1952-03-01

    Calibration Curve for Lossless Structures; B. The Correction Relations for Dissipative Structures; C. The Effect of an Error in the Variable Short... A discussion of probe effects and a method of correction for large insertion depths are given in the literature. This report is concerned solely with error source (c). The presence of the slot in the slotted section introduces effects: (a) the slot loads the waveguide

  2. Validity of mail survey data on bagged waterfowl

    USGS Publications Warehouse

    Atwood, E.L.

    1956-01-01

    Knowledge of the pattern of occurrence and characteristics of response errors obtained during an investigation of the validity of post-season surveys of hunters was used to advantage to devise a two-step method for removing the response-bias errors from the raw survey data. The method was tested on data with known errors and found to have a high efficiency in reducing the effect of response-bias errors. The development of this method for removing the effect of the response-bias errors, and its application to post-season hunter-take survey data, increased the reliability of the data from below the point of practical management significance up to the approximate reliability limits corresponding to the sampling errors.

  3. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  4. Cone photoreceptor sensitivities and unique hue chromatic responses: correlation and causation imply the physiological basis of unique hues.

    PubMed

    Pridmore, Ralph W

    2013-01-01

    This paper relates major functions at the start and end of the color vision process. The process starts with three cone photoreceptors transducing light into electrical responses. Cone sensitivities were once expected to be Red Green Blue color matching functions (to mix colors) but microspectrometry proved otherwise: they instead peak in yellowish, greenish, and blueish hues. These physiological functions are an enigma, unmatched with any set of psychophysical (behavioral) functions. The end-result of the visual process is color sensation, whose essential percepts are unique (or pure) hues red, yellow, green, blue. Unique hues cannot be described by other hues, but can describe all other hues, e.g., that hue is reddish-blue. They are carried by four opponent chromatic response curves but the literature does not specify whether each curve represents a range of hues or only one hue (a unique) over its wavelength range. Here the latter is demonstrated, confirming that opponent chromatic responses define, and may be termed, unique hue chromatic responses. These psychophysical functions also are an enigma, unmatched with any physiological functions or basis. Here both enigmas are solved by demonstrating the three cone sensitivity curves and the three spectral chromatic response curves are almost identical sets (Pearson correlation coefficients r from 0.95-1.0) in peak wavelengths, curve shapes, math functions, and curve crossover wavelengths, though previously unrecognized due to presentation of curves in different formats, e.g., log, linear. (Red chromatic response curve is largely nonspectral and thus derives from two cones.) Close correlation combined with deterministic causation implies cones are the physiological basis of unique hues. This match of three physiological and three psychophysical functions is unique in color vision.

  5. Cone Photoreceptor Sensitivities and Unique Hue Chromatic Responses: Correlation and Causation Imply the Physiological Basis of Unique Hues

    PubMed Central

    Pridmore, Ralph W.

    2013-01-01

    This paper relates major functions at the start and end of the color vision process. The process starts with three cone photoreceptors transducing light into electrical responses. Cone sensitivities were once expected to be Red Green Blue color matching functions (to mix colors) but microspectrometry proved otherwise: they instead peak in yellowish, greenish, and blueish hues. These physiological functions are an enigma, unmatched with any set of psychophysical (behavioral) functions. The end-result of the visual process is color sensation, whose essential percepts are unique (or pure) hues red, yellow, green, blue. Unique hues cannot be described by other hues, but can describe all other hues, e.g., that hue is reddish-blue. They are carried by four opponent chromatic response curves but the literature does not specify whether each curve represents a range of hues or only one hue (a unique) over its wavelength range. Here the latter is demonstrated, confirming that opponent chromatic responses define, and may be termed, unique hue chromatic responses. These psychophysical functions also are an enigma, unmatched with any physiological functions or basis. Here both enigmas are solved by demonstrating the three cone sensitivity curves and the three spectral chromatic response curves are almost identical sets (Pearson correlation coefficients r from 0.95–1.0) in peak wavelengths, curve shapes, math functions, and curve crossover wavelengths, though previously unrecognized due to presentation of curves in different formats, e.g., log, linear. (Red chromatic response curve is largely nonspectral and thus derives from two cones.) Close correlation combined with deterministic causation implies cones are the physiological basis of unique hues. This match of three physiological and three psychophysical functions is unique in color vision. PMID:24204755

  6. Determining decision thresholds and evaluating indicators when conservation status is measured as a continuum.

    PubMed

    Connors, B M; Cooper, A B

    2014-12-01

    Categorization of the status of populations, species, and ecosystems underpins most conservation activities. Status is often based on how a system's current indicator value (e.g., change in abundance) relates to some threshold of conservation concern. Receiver operating characteristic (ROC) curves can be used to quantify the statistical reliability of indicators of conservation status and evaluate trade-offs between correct (true positive) and incorrect (false positive) classifications across a range of decision thresholds. However, ROC curves assume a discrete, binary relationship between an indicator and the conservation status it is meant to track, which is a simplification of the more realistic continuum of conservation status, and may limit the applicability of ROC curves in conservation science. We describe a modified ROC curve that treats conservation status as a continuum rather than a discrete state. We explored the influence of this continuum and typical sources of variation in abundance that can lead to classification errors (i.e., random variation and measurement error) on the true and false positive rates corresponding to varying decision thresholds and the reliability of change in abundance as an indicator of conservation status, respectively. We applied our modified ROC approach to an indicator of endangerment in Pacific salmon (Oncorhynchus nerka) (i.e., percent decline in geometric mean abundance) and an indicator of marine ecosystem structure and function (i.e., detritivore biomass). Failure to treat conservation status as a continuum when choosing thresholds for indicators resulted in the misidentification of trade-offs between true and false positive rates and the overestimation of an indicator's reliability. We argue for treating conservation status as a continuum when ROC curves are used to evaluate decision thresholds in indicators for the assessment of conservation status. © 2014 Society for Conservation Biology.
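
    As a rough illustration of the threshold trade-off the authors quantify, a conventional (binary-status) ROC sweep for a percent-decline indicator can be sketched as follows; the simulated declines, the observation noise and the 30% "of concern" cut-off are invented for the example, and the paper's modification that treats status as a continuum is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical true percent declines and a noisy indicator measured from surveys.
        true_decline = rng.uniform(0, 80, size=2000)              # underlying conservation status
        indicator = true_decline + rng.normal(0, 15, size=2000)   # indicator with measurement error

        # Binary "of concern" definition used by a classical ROC analysis.
        of_concern = true_decline >= 30.0

        thresholds = np.linspace(0, 80, 81)
        tpr, fpr = [], []
        for thr in thresholds:
            flagged = indicator >= thr
            tpr.append(np.mean(flagged[of_concern]))    # true positive rate at this threshold
            fpr.append(np.mean(flagged[~of_concern]))   # false positive rate at this threshold

        # Choosing a decision threshold amounts to picking a point on this (fpr, tpr) curve.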

  7. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.

  8. Presenting new exoplanet candidates for the CoRoT chromatic light curves

    NASA Astrophysics Data System (ADS)

    Boufleur, Rodrigo; Emilio, Marcelo; Andrade, Laerte; Janot-Pacheco, Eduardo; De La Reza, Ramiro

    2015-08-01

    One of the most promising topics of modern Astronomy is the discovery and characterization of extrasolar planets, due to their importance for the comprehension of planetary formation and evolution. Missions like MOST (Microvariability and Oscillations of Stars Telescope) (Walker et al., 2003) and especially the satellites dedicated to the search for exoplanets, CoRoT (Convection, Rotation and planetary Transits) (Baglin et al., 1998) and Kepler (Borucki et al., 2003), produced a great amount of data and together account for hundreds of new discoveries. An important source of error in the search for planets with light curves obtained from space observatories is the displacements occurring in the data due to external causes. This artificial charge generation phenomenon associated with the data is mainly caused by the impact of high energy particles onto the CCD (Pinheiro da Silva et al. 2008), although other, less well known sources of error also need to be taken into account. Effective analysis of the light curves therefore depends largely on the mechanisms employed to deal with these phenomena. To perform our research, we developed and applied a different method to fix the light curves, the CDAM (Corot Detrend Algorithm Modified), inspired by the work of Mislis et al. (2012). The paradigms were obtained using the BLS method (Kovács et al., 2002). After a semiautomatic pre-analysis associated with a visual inspection of the planetary transit signatures, we obtained dozens of exoplanet candidates in very good agreement with the literature and also new unpublished cases. We present the study results and characterization of the new cases for the chromatic channel public light curves of the CoRoT satellite.

  9. The direct determination of dose-to-water using a water calorimeter.

    PubMed

    Schulz, R J; Wuu, C S; Weinhous, M S

    1987-01-01

    A flexible, temperature-regulated, water calorimeter has been constructed which consists of three nested cylinders. The innermost "core" is a 10 X 10 cm right cylinder made of glass, the contents of which are isolated from the environment. It has two Teflon-washered glass valves for filling, and two thermistors are supported at the center by glass capillary tubes. Surrounding the core is a "jacket" that provides approximately 2 cm of air insulation between the core and the "shield." The shield surrounds the jacket with a 2.5-cm layer of temperature-regulated water flowing at 5 l/min. The core is filled with highly purified water the gas content of which is established prior to filling. Convection currents, which may be induced by dose gradients or thermistor power dissipation, are eliminated by operating the calorimeter at 4 degrees C. Depending upon the power level of the thermistors, 15-200 microW, and the insulation provided by the glass capillary tubing, the temperature of the thermistors is higher than that of the surrounding water. To minimize potential errors caused by differences between calibration curves obtained at finite power levels, the zero-power-level calibration curve obtained by extrapolation is employed. Also the calorimeter response is corrected for the change in power level, and therefore thermistor temperature, that follows the resistance change caused by irradiation. The response of the calorimeter to 4-MV x rays has been compared to that of an ionization chamber irradiated in an identical geometry.(ABSTRACT TRUNCATED AT 250 WORDS)
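
    The zero-power calibration described here amounts to extrapolating calibration factors measured at several finite thermistor power levels back to zero power; a minimal linear-extrapolation sketch, with made-up numbers, is:

        import numpy as np

        # Hypothetical calibration factors measured at finite thermistor power levels (microwatts).
        power_uW = np.array([15.0, 50.0, 100.0, 200.0])
        cal_factor = np.array([1.012, 1.020, 1.033, 1.060])   # arbitrary units

        # Fit a straight line in power and take the intercept as the zero-power calibration.
        slope, intercept = np.polyfit(power_uW, cal_factor, deg=1)
        zero_power_cal = intercept
        print(zero_power_cal)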

  10. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. The shape of the glucose concentration curve during an oral glucose tolerance test predicts risk for type 1 diabetes.

    PubMed

    Ismail, Heba M; Xu, Ping; Libman, Ingrid M; Becker, Dorothy J; Marks, Jennifer B; Skyler, Jay S; Palmer, Jerry P; Sosenko, Jay M

    2018-01-01

    We aimed to examine: (1) whether specific glucose-response curve shapes during OGTTs are predictive of type 1 diabetes development; and (2) the extent to which the glucose-response curve is influenced by insulin secretion. Autoantibody-positive relatives of people with type 1 diabetes whose baseline OGTT met the definition of a monophasic or biphasic glucose-response curve were followed for the development of type 1 diabetes (n = 2627). A monophasic curve was defined as an increase in OGTT glucose between 30 and 90 min followed by a decline of ≥ 0.25 mmol/l between 90 and 120 min. A biphasic response curve was defined as a decrease in glucose after an initial increase, followed by a second increase of ≥ 0.25 mmol/l. Associations of type 1 diabetes risk with glucose curve shapes were examined using cumulative incidence curve comparisons and proportional hazards regression. C-peptide responses were compared with and without adjustments for potential confounders. The majority of participants had a monophasic curve at baseline (n = 1732 [66%] vs n = 895 [34%]). The biphasic group had a lower cumulative incidence of type 1 diabetes (p < 0.001), which persisted after adjustments for age, sex, BMI z score and number of autoantibodies (p < 0.001). Among the monophasic group, the risk of type 1 diabetes was greater for those with a glucose peak at 90 min than for those with a peak at 30 min; the difference persisted after adjustments (p < 0.001). Compared with the biphasic group, the monophasic group had a lower early C-peptide (30-0 min) response, a lower C-peptide index (30-0 min C-peptide/30-0 min glucose), as well as a greater 2 h C-peptide level (p < 0.001 for all). Those with biphasic glucose curves have a lower risk of progression to type 1 diabetes than those with monophasic curves, and the risk among the monophasic group is increased when the glucose peak occurs at 90 min rather than at 30 min. Differences in glucose curve shapes between the monophasic and biphasic groups appear to be related to C-peptide responses.
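
    A literal coding of the shape definitions given in the abstract (monophasic: a rise between 30 and 90 min followed by a decline of at least 0.25 mmol/l by 120 min; biphasic: a decline after an initial rise followed by a second rise of at least 0.25 mmol/l), applied to standard OGTT sampling times, might look like the sketch below; the handling of edge cases and the example values are assumptions, not the study's algorithm.

        def classify_ogtt_shape(glucose, delta=0.25):
            """Classify an OGTT glucose curve as 'monophasic', 'biphasic' or 'other'.

            glucose: mmol/l values keyed by minute, e.g. {0: .., 30: .., 60: .., 90: .., 120: ..}.
            """
            g30, g60, g90, g120 = glucose[30], glucose[60], glucose[90], glucose[120]

            # Monophasic: rise between 30 and 90 min, then a decline of >= delta by 120 min.
            if g90 > g30 and (g90 - g120) >= delta:
                return "monophasic"

            # Biphasic: an initial rise, then a decline, then a second rise of >= delta.
            values = [g30, g60, g90, g120]
            for i in range(1, len(values) - 1):
                if values[i] < values[i - 1] and (values[-1] - values[i]) >= delta:
                    return "biphasic"
            return "other"

        # Invented example: peak at 60 min, dip at 90 min, second rise by 120 min.
        example = {0: 5.1, 30: 7.8, 60: 9.2, 90: 8.0, 120: 8.4}
        print(classify_ogtt_shape(example))   # -> 'biphasic'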

  12. Probabilistic assessment method of the non-monotonic dose-responses-Part I: Methodological approach.

    PubMed

    Chevillotte, Grégoire; Bernard, Audrey; Varret, Clémence; Ballet, Pascal; Bodin, Laurent; Roudot, Alain-Claude

    2017-08-01

    More and more studies aim to characterize non-monotonic dose-response curves (NMDRCs). The greatest difficulty is to assess the statistical plausibility of NMDRCs from previously conducted dose-response studies. This difficulty is linked to the fact that these studies involve (i) few tested doses, (ii) a low sample size per dose, and (iii) the absence of any raw data. In this study, we propose a new methodological approach to probabilistically characterize NMDRCs. The methodology is composed of three main steps: (i) sampling from summary data to cover all the possibilities that may be presented by the responses measured by dose and to obtain a new raw database, (ii) statistical analysis of each sampled dose-response curve to characterize the slopes and their signs, and (iii) characterization of these dose-response curves according to the variation of the sign in the slope. This method can characterize all types of dose-response curves and can be applied both to continuous data and to discrete data. The aim of this study is to present the general principle of this probabilistic method for assessing non-monotonic dose-response curves, and to present some results. Copyright © 2017 Elsevier Ltd. All rights reserved.
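
    Without access to raw data, step (i) can only be illustrated schematically; the sketch below resamples per-dose responses from reported means and standard deviations and then inspects the sign pattern of adjacent-dose slopes, one crude way of flagging non-monotonicity. The normality assumption, the summary values and the slope test are illustrative and are not the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical summary data per dose: (dose, mean response, SD, n).
        summary = [(0.0, 10.0, 2.0, 8), (0.1, 14.0, 2.5, 8), (1.0, 9.0, 2.0, 8), (10.0, 12.0, 3.0, 8)]
        doses = np.array([d for d, *_ in summary])

        n_resamples = 5000
        non_monotonic = 0
        for _ in range(n_resamples):
            # (i) resample plausible group means from the summary statistics (normality assumed)
            means = [rng.normal(m, sd / np.sqrt(n)) for _, m, sd, n in summary]
            # (ii) slopes between consecutive doses and their signs
            signs = np.sign(np.diff(means) / np.diff(doses))
            # (iii) a sign change along the curve is taken as evidence of non-monotonicity
            if np.any(signs[:-1] * signs[1:] < 0):
                non_monotonic += 1

        print(non_monotonic / n_resamples)   # crude probability that the curve is non-monotonic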

  13. Physical Properties of the X-Ray-Luminous SN 1978K in NGC 1313 from Multiwavelength Observations

    NASA Astrophysics Data System (ADS)

    Schlegel, Eric M.; Ryder, Stuart; Staveley-Smith, L.; Petre, R.; Colbert, E.; Dopita, M.; Campbell-Wilson, D.

    1999-12-01

    We update the light curves from the X-ray, optical, and radio bandpasses which we have assembled over the past decade and present two observations in the ultraviolet using the Hubble Space Telescope Faint Object Spectrograph. The HRI X-ray light curve is constant within the errors over the entire observation period. This behavior is confirmed in the ASCA GIS data obtained in 1993 and 1995. In the ultraviolet, we detected Lyα, the [Ne IV] 2422/2424 Å doublet, the Mg II doublet at 2800 Å, and a line at approximately 3190 Å that we attribute to He I 3187. Only the Mg II and He I lines are detected at SN 1978K's position. The optical light curve is formally constant within the errors, although a slight upward trend may be present. The radio light curve continues its steep decline. The longer time span of our radio observations compared to previous studies shows that SN 1978K is in the same class of highly X-ray and radio-luminous supernovae as SN 1986J and SN 1988Z. The [Ne IV] emission is spatially distant from the location of SN 1978K and originates in the preshocked matter. The Mg II doublet flux ratio implies the quantity of line optical depth times density of approximately 10^14 cm^-3 for its emission region. The emission site must lie in the shocked gas.

  14. Insights into the spurious long-range nature of local r_s-dependent non-local exchange-correlation kernels

    DOE PAGES

    Lu, Deyu

    2016-08-05

    A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously, [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.

  15. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital Elevation Models were then produced using five different common interpolation algorithms. Each resultant DEM was differentiated from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy. Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
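
    The relationship reported between DEM error and local topographic variation uses the standard deviation of elevations in a small moving window; a sketch of computing such a roughness surface and correlating it with a DEM error grid is given below, with both surfaces invented and the window size chosen arbitrarily.

        import numpy as np
        from scipy.ndimage import generic_filter

        rng = np.random.default_rng(2)

        # Hypothetical grids on a regular raster: a TLS reference surface and an interpolated DEM.
        tls = np.cumsum(rng.normal(0, 0.01, size=(100, 100)), axis=0)   # stand-in topography
        dem = tls + rng.normal(0, 0.02, size=tls.shape)                 # stand-in interpolated DEM

        error = dem - tls                                               # vertical DEM error

        # Local roughness: standard deviation of elevations in a 4 x 4 cell moving window.
        roughness = generic_filter(tls, np.std, size=4)

        # Correlation between local roughness and the magnitude of the DEM error.
        r = np.corrcoef(roughness.ravel(), np.abs(error).ravel())[0, 1]
        print(r)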

  16. Posterior archaeomagnetic dating for the early Medieval site Thunau am Kamp, Austria

    NASA Astrophysics Data System (ADS)

    Schnepp, Elisabeth; Lanos, Philippe; Obenaus, Martin

    2014-05-01

    The early medieval site Thunau am Kamp consists of a hill fort and a settlement with a large burial ground at the bank of the river Kamp. All these features have been under archaeological investigation for many years. The settlement comprises many pit houses, some in stratigraphic order. Every pit house was equipped with at least one cupola oven and/or a hearth or fireplace. Sometimes the entire cupola was preserved. The site was occupied during the 9th and 10th centuries AD according to potsherds, which seem to indicate two phases: in the older phase, ovens were placed in the corner of the houses, while in the younger phase they are found in the middle of a wall. In order to enlarge the archaeomagnetic database, 14 ovens were sampled. They fill the temporal gap in the database for Austria around 900 AD. Laboratory treatment included alternating field and thermal demagnetisations as well as rock magnetic experiments. The baked clay, which was formed from loess sediment, has preserved stable directions. Apart from one exception, the mean characteristic remanent magnetization directions are concentrated around 900 AD on the early medieval part of the directional archaeomagnetic reference curve of Austria (Schnepp & Lanos, GJI, 2006). Using this curve, archaeomagnetic dating with RenDate provides ages between 800 and 1100 AD, which are in agreement with the archaeological dating. In one case archaeomagnetic dating is even more precise. Together with the archaeological age estimates and stratigraphic information, the new data have been included in the database of the Austrian curve. It has been recalculated using a new version of RenCurve. The new data confine the curve and its error band considerably in the time interval 800 to 1100 AD. The curve calibration process also provides a probability density distribution for each structure which allows for posterior dating. This refines temporal errors considerably. The usefulness of such an approach and its archaeological implications will be discussed.

  17. Renal Parenchymal Area Growth Curves for Children 0 to 10 Months Old.

    PubMed

    Fischer, Katherine; Li, Chunming; Wang, Huixuan; Song, Yihua; Furth, Susan; Tasian, Gregory E

    2016-04-01

    Low renal parenchymal area, which is the gross area of the kidney in maximal longitudinal length minus the area of the collecting system, has been associated with increased risk of end stage renal disease during childhood in boys with posterior urethral valves. To our knowledge normal values do not exist. We aimed to increase the clinical usefulness of this measure by defining normal renal parenchymal area during infancy. In a cross-sectional study of children with prenatally detected mild unilateral hydronephrosis who were evaluated between 2000 and 2012 we measured the renal parenchymal area of normal kidney(s) opposite the kidney with mild hydronephrosis. Measurement was done with ultrasound from birth to post-gestational age 10 months. We used the LMS method to construct unilateral, bilateral, side and gender stratified normalized centile curves. We determined the z-score and the centile of a total renal parenchymal area of 12.4 cm(2) at post-gestational age 1 to 2 weeks, which has been associated with an increased risk of kidney failure before age 18 years in boys with posterior urethral valves. A total of 975 normal kidneys of children 0 to 10 months old were used to create renal parenchymal area centile curves. At the 97th centile for unilateral and single stratified curves the estimated margin of error was 4.4% to 8.8%. For bilateral and double stratified curves the estimated margin of error at the 97th centile was 6.6% to 13.2%. Total renal parenchymal area less than 12.4 cm(2) at post-gestational age 1 to 2 weeks had a z-score of -1.96 and fell at the 3rd percentile. These normal renal parenchymal area curves may be used to track kidney growth in infants and identify those at risk for chronic kidney disease progression. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
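
    The LMS method cited in the abstract converts a measurement X into a z-score as z = ((X/M)^L - 1) / (L * S) when L is non-zero (and z = ln(X/M)/S when L = 0); a minimal sketch is shown below, with invented L, M and S values standing in for the fitted age-specific parameters.

        import math

        def lms_zscore(x, L, M, S):
            """Z-score of measurement x given LMS parameters (Cole's LMS method)."""
            if L == 0:
                return math.log(x / M) / S
            return ((x / M) ** L - 1.0) / (L * S)

        # Invented LMS parameters for total renal parenchymal area at 1-2 weeks of age;
        # in practice the published centile-curve parameters would be used instead.
        L, M, S = 1.0, 16.8, 0.13
        print(lms_zscore(12.4, L, M, S))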

  18. A curved surface micro-moiré method and its application in evaluating curved surface residual stress

    NASA Astrophysics Data System (ADS)

    Zhang, Hongye; Wu, Chenlong; Liu, Zhanwei; Xie, Huimin

    2014-09-01

    The moiré method is typically applied to the measurement of deformations of a flat surface while, for a curved surface, this method is rarely used other than for projection moiré or moiré interferometry. Here, a novel colour charge-coupled device (CCD) micro-moiré method has been developed, based on which a curved surface micro-moiré (CSMM) method is proposed with a colour CCD and optical microscope (OM). In the CSMM method, no additional reference grating is needed as a Bayer colour filter array (CFA) installed on the OM in front of the colour CCD image sensor performs this role. Micro-moiré fringes with high contrast are directly observed with the OM through the Bayer CFA under the special condition of observing a curved specimen grating. The principle of the CSMM method based on a colour CCD micro-moiré method and its application range and error analysis are all described in detail. In an experiment, the curved surface residual stress near a welded seam on a stainless steel tube was investigated using the CSMM method.

  19. Impacts of motivational valence on the error-related negativity elicited by full and partial errors.

    PubMed

    Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki

    2016-02-01

    Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Accommodation and age-dependent eye model based on in vivo measurements.

    PubMed

    Zapata-Díaz, Juan F; Radhakrishnan, Hema; Charman, W Neil; López-Gil, Norberto

    2018-03-21

    To develop a flexible model of the average eye that incorporates changes with age and accommodation in all optical parameters, including entrance pupil diameter, under photopic, natural, environmental conditions. We collated retrospective in vivo measurements of all optical parameters, including entrance pupil diameter. Ray-tracing was used to calculate the wavefront aberrations of the eye model as a function of age, stimulus vergence and pupil diameter. These aberrations were used to calculate objective refraction using paraxial curvature matching. This was also done for several stimulus positions to calculate the accommodation response/stimulus curve. The model predicts a hyperopic change in distance refraction as the eye ages (+0.22D every 10 years) between 20 and 65 years. The slope of the accommodation response/stimulus curve was 0.72 for a 25 years-old subject, with little change between 20 and 45 years. A trend to a more negative value of primary spherical aberration as the eye accommodates is predicted for all ages (20-50 years). When accommodation is relaxed, a slight increase in primary spherical aberration (0.008μm every 10 years) between 20 and 65 years is predicted, for an age-dependent entrance pupil diameter ranging between 3.58mm (20 years) and 3.05mm (65 years). Results match reasonably well with studies performed in real eyes, except that spherical aberration is systematically slightly negative as compared with the practical data. The proposed eye model is able to predict changes in objective refraction and accommodation response. It has the potential to be a useful design and testing tool for devices (e.g. intraocular lenses or contact lenses) designed to correct the eye's optical errors. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  1. Using statistical correlation to compare geomagnetic data sets

    NASA Astrophysics Data System (ADS)

    Stanton, T.

    2009-04-01

    The major features of data curves are often matched, to a first order, by bump and wiggle matching to arrive at an offset between data sets. This poster describes a simple statistical correlation program that has proved useful during this stage by determining the optimal correlation between geomagnetic curves using a variety of fixed and floating windows. Its utility is suggested by the fact that it is simple to run, yet generates meaningful data comparisons, often when data noise precludes the obvious matching of curve features. Data sets can be scaled, smoothed, normalised and standardised, before all possible correlations are carried out between selected overlapping portions of each curve. Best-fit offset curves can then be displayed graphically. The program was used to cross-correlate directional and palaeointensity data from Holocene lake sediments (Stanton et al., submitted) and Holocene lava flows. Some example curve matches are shown, including some that illustrate the potential of this technique when examining particularly sparse data sets. Stanton, T., Snowball, I., Zillén, L. and Wastegård, S., submitted. Detecting potential errors in varve chronology and 14C ages using palaeosecular variation curves, lead pollution history and statistical correlation. Quaternary Geochronology.
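
    A bare-bones version of the fixed-window lag search described here (slide one normalised curve against the other and keep the offset with the highest Pearson correlation) could be sketched as follows; the scaling, smoothing and floating-window options of the actual program are not reproduced.

        import numpy as np

        def best_offset(ref, target, max_lag):
            """Offset of `target` (in samples) that maximises Pearson correlation with `ref`."""
            best_r, best_lag = -np.inf, 0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    a, b = ref[lag:], target[:len(ref) - lag]
                else:
                    a, b = ref[:lag], target[-lag:]
                n = min(len(a), len(b))
                if n < 10:
                    continue   # skip overlaps too short to correlate meaningfully
                r = np.corrcoef(a[:n], b[:n])[0, 1]
                if r > best_r:
                    best_r, best_lag = r, lag
            return best_lag, best_r

        # Invented declination-like curves, the second one shifted by 30 samples.
        t = np.linspace(0, 20, 500)
        ref = np.sin(t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
        target = np.roll(ref, 30)
        print(best_offset(ref, target, max_lag=60))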

  2. U-SHAPED DOSE-RESPONSE CURVES: THEIR OCCURRENCE AND IMPLICATIONS FOR RISK ASSESSMENT

    EPA Science Inventory

    A class of curvilinear dose-response relationships in toxicological and epidemiological studies may be roughly described by "U-shaped" curves. Such curves reflect an apparent reversal or inversion in the effect of an otherwise toxic agent at a low or intermediate region of the dose...

  3. Energy dependent calibration of XR-QA2 radiochromic film with monochromatic and polychromatic x-ray beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Lillo, F.; Mettivier, G., E-mail: mettivier@na.infn.it; Sarno, A.

    2016-01-15

    Purpose: This work investigates the energy response and dose-response curve determinations for XR-QA2 radiochromic film dosimetry system used for synchrotron radiation work and for quality assurance in diagnostic radiology, in the range of effective energies 18–46.5 keV. Methods: Pieces of XR-QA2 films were irradiated, in a plane transverse to the beam axis, with a monochromatic beam of energy in the range 18–40 keV at the ELETTRA synchrotron radiation facility (Trieste, Italy) and with a polychromatic beam from a laboratory x-ray tube operated at 80, 100, and 120 kV. The film calibration curve was expressed as air kerma (measured free-in-air with an ionization chamber) versus the net optical reflectance change (netΔR) derived from the red channel of the RGB scanned film image. Four functional relationships (rational, linear exponential, power, and logarithm) were tested to evaluate the best curve for fitting the calibration data. The adequacy of the various fitting functions was tested by using the uncertainty analysis and by assessing the average of the absolute air kerma error calculated as the difference between calculated and delivered air kerma. The sensitivity of the film was evaluated as the ratio of the change in net reflectance to the corresponding air kerma. Results: The sensitivity of XR-QA2 films increased in the energy range 18–39 keV, with a maximum variation of about 170%, and decreased in the energy range 38–46.5 keV. The present results confirmed and extended previous findings by this and other groups, as regards the dose response of the radiochromic film XR-QA2 to monochromatic and polychromatic x-ray beams, respectively. Conclusions: The XR-QA2 radiochromic film response showed a strong dependence on beam energy for both monochromatic and polychromatic beams in the range of half value layer values from 0.55 to 6.1 mm Al and corresponding effective energies from 18 to 46.5 keV. In this range, the film response varied by 170%, from a minimum sensitivity of 0.0127 to a maximum sensitivity of 0.0219 at 10 mGy air kerma in air. The more suitable function for air kerma calibration of the XR-QA2 radiochromic film was the power function. A significant batch-to-batch variation, up to 55%, in film response at 120 kV (46.5 keV effective energy) was observed in comparison with published data.
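
    Of the four candidate relationships, the power function was found most suitable; a minimal curve-fit sketch of air kerma versus netΔR, with invented calibration points, is:

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(net_dR, a, b):
            """Calibration model: air kerma (mGy) = a * netDeltaR ** b."""
            return a * net_dR ** b

        # Invented calibration data: net reflectance change vs. delivered air kerma (mGy).
        net_dR = np.array([0.02, 0.05, 0.09, 0.13, 0.17, 0.21])
        kerma = np.array([1.0, 3.0, 7.0, 12.0, 18.0, 25.0])

        params, _ = curve_fit(power_law, net_dR, kerma, p0=(100.0, 1.2))
        residual = kerma - power_law(net_dR, *params)
        print(params, np.mean(np.abs(residual)))   # fitted (a, b) and mean absolute kerma error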

  4. Anthropometric data error detecting and correction with a computer

    NASA Technical Reports Server (NTRS)

    Chesak, D. D.

    1981-01-01

    Data obtained with automated anthropometric data acquisition equipment were examined for short-term errors. The least squares curve fitting technique was used to ascertain which data values were erroneous and to replace them, if possible, with corrected values. Errors were due to random reflections of light, masking of the light rays, and other types of optical and electrical interference. It was found that the signals were impossible to eliminate from the initial data produced by the television cameras, and that this was primarily a software problem requiring a digital computer to refine the data off-line. The specific data of interest were related to the arm reach envelope of a human being.
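
    The abstract describes least-squares curve fitting used to flag and replace erroneous samples; a simple residual-threshold version of that idea (fit a polynomial, flag points more than three standard deviations from the fit, substitute the fitted value) is sketched below with synthetic data, since the original data and fit order are not given.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic reach-envelope-like profile with a few spurious reflections (spikes).
        t = np.linspace(0, 1, 200)
        signal = 50 + 20 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
        signal[[40, 90, 150]] += [15.0, -20.0, 12.0]    # injected short-term errors

        # Least-squares polynomial fit and residual-based error detection.
        coeffs = np.polyfit(t, signal, deg=6)
        fit = np.polyval(coeffs, t)
        residual = signal - fit
        bad = np.abs(residual) > 3 * np.std(residual)

        # Replace detected errors with the fitted (corrected) values.
        corrected = np.where(bad, fit, signal)
        print(np.flatnonzero(bad))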

  5. Real space mapping of oxygen vacancy diffusion and electrochemical transformations by hysteretic current reversal curve measurements

    DOEpatents

    Kalinin, Sergei V.; Balke, Nina; Borisevich, Albina Y.; Jesse, Stephen; Maksymovych, Petro; Kim, Yunseok; Strelcov, Evgheni

    2014-06-10

    An excitation voltage biases an ionic conducting material sample over a nanoscale grid. The bias sweeps a modulated voltage with increasing maximal amplitudes. A current response is measured at grid locations. Current response reversal curves are mapped over maximal amplitudes of the bias cycles. Reversal curves are averaged over the grid for each bias cycle and mapped over maximal bias amplitudes for each bias cycle. Average reversal curve areas are mapped over maximal amplitudes of the bias cycles. Thresholds are determined for onset and ending of electrochemical activity. A predetermined number of bias sweeps may vary in frequency where each sweep has a constant number of cycles and reversal response curves may indicate ionic diffusion kinetics.

  6. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
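
    A stripped-down version of the two-stage idea (spline-based estimation of the slow baseline followed by Savitzky-Golay smoothing of the remaining spikes) can be sketched with SciPy as follows; the shift detector, channel handling and parameter choices of the published method are not reproduced, and the segment treated by the spline is crudely taken to be the whole record.

        import numpy as np
        from scipy.interpolate import UnivariateSpline
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(5)

        # Synthetic NIRS-like channel: slow hemodynamics + a baseline shift + sharp spikes.
        t = np.arange(0, 60, 0.1)
        signal = 0.5 * np.sin(0.2 * t) + rng.normal(0, 0.02, t.size)
        signal[t > 30] += 1.0                          # motion-induced baseline shift
        signal[[100, 300, 450]] += [0.8, -0.6, 0.7]    # motion-induced spikes

        # Stage 1: estimate the slow baseline with a smoothing spline and remove it,
        # keeping the mean level so the corrected trace stays on a comparable scale.
        baseline = UnivariateSpline(t, signal, s=len(t) * 0.05)(t)
        shift_corrected = signal - baseline + np.mean(baseline)

        # Stage 2: Savitzky-Golay filtering to suppress the remaining high-frequency spikes.
        clean = savgol_filter(shift_corrected, window_length=11, polyorder=3)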

  7. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    PubMed

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  8. Correcting intensity loss errors in the absence of texture-free reference samples during pole figure measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, Ahmed A., E-mail: asaleh@uow.edu.au

    Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can be typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: •Sample tilting during X-ray pole figure measurement leads to intensity loss errors. •Texture-free reference samples are typically used to correct the pole figures. •An empirical correction procedure is proposed in the absence of reference samples. •The procedure relies on reference samples that pre-exist in any texture laboratory. •Experimentally and empirically corrected textures are in very good agreement.

  9. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  10. Generation of a pseudo-2D shear-wave velocity section by inversion of a series of 1D dispersion curves

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.

    2008-01-01

    Multichannel Analysis of Surface Waves utilizes a multichannel recording system to estimate near-surface shear (S)-wave velocities from high-frequency Rayleigh waves. A pseudo-2D S-wave velocity (vS) section is constructed by aligning 1D models at the midpoint of each receiver spread and using a spatial interpolation scheme. The horizontal resolution of the section is therefore most influenced by the receiver spread length and the source interval. The receiver spread length sets the theoretical lower limit and any vS structure with its lateral dimension smaller than this length will not be properly resolved in the final vS section. A source interval smaller than the spread length will not improve the horizontal resolution because spatial smearing has already been introduced by the receiver spread. In this paper, we first analyze the horizontal resolution of a pair of synthetic traces. Resolution analysis shows that (1) a pair of traces with a smaller receiver spacing achieves higher horizontal resolution of inverted S-wave velocities but results in a larger relative error; (2) the relative error of the phase velocity at a high frequency is smaller than at a low frequency; and (3) a relative error of the inverted S-wave velocity is affected by the signal-to-noise ratio of data. These results provide us with a guideline to balance the trade-off between receiver spacing (horizontal resolution) and accuracy of the inverted S-wave velocity. We then present a scheme to generate a pseudo-2D S-wave velocity section with high horizontal resolution using multichannel records by inverting high-frequency surface-wave dispersion curves calculated through cross-correlation combined with a phase-shift scanning method. This method chooses only a pair of consecutive traces within a shot gather to calculate a dispersion curve. We finally invert surface-wave dispersion curves of synthetic and real-world data. Inversion results of both synthetic and real-world data demonstrate that inverting high-frequency surface-wave dispersion curves - by a pair of traces through cross-correlation with phase-shift scanning method and with the damped least-square method and the singular-value decomposition technique - can feasibly achieve a reliable pseudo-2D S-wave velocity section with relatively high horizontal resolution. © 2008 Elsevier B.V. All rights reserved.

  11. Sustained attention to response task (SART) shows impaired vigilance in a spectrum of disorders of excessive daytime sleepiness.

    PubMed

    Van Schie, Mojca K M; Thijs, Roland D; Fronczek, Rolf; Middelkoop, Huub A M; Lammers, Gert Jan; Van Dijk, J Gert

    2012-08-01

    The sustained attention to response task comprises withholding key presses to one in nine of 225 target stimuli; it proved to be a sensitive measure of vigilance in a small group of narcoleptics. We studied sustained attention to response task results in 96 patients from a tertiary narcolepsy referral centre. Diagnoses according to ICSD-2 criteria were narcolepsy with (n=42) and without cataplexy (n=5), idiopathic hypersomnia without long sleep time (n=37), and obstructive sleep apnoea syndrome (n=12). The sustained attention to response task was administered prior to each of five multiple sleep latency test sessions. Analysis concerned error rates, mean reaction time, reaction time variability and post-error slowing, as well as the correlation of sustained attention to response task results with mean latency of the multiple sleep latency test and possible time of day influences. Median sustained attention to response task error scores ranged from 8.4 to 11.1, and mean reaction times from 332 to 366ms. Sustained attention to response task error score and mean reaction time did not differ significantly between patient groups. Sustained attention to response task error score did not correlate with multiple sleep latency test sleep latency. Reaction time was more variable as the error score was higher. Sustained attention to response task error score was highest for the first session. We conclude that a high sustained attention to response task error rate reflects vigilance impairment in excessive daytime sleepiness irrespective of its cause. The sustained attention to response task and the multiple sleep latency test reflect different aspects of sleep/wakefulness and are complementary. © 2011 European Sleep Research Society.

  12. Accurate photometric light curves of the lensed components of Q2237+0305 derived with an optimal image subtraction technique: Evidence for microlensing in image A

    NASA Astrophysics Data System (ADS)

    Moreau, O.; Libbrecht, C.; Lee, D.-W.; Surdej, J.

    2005-06-01

    Using an optimal image subtraction technique, we have derived the V and R light curves of the four lensed QSO components of Q2237+0305 from the monitoring CCD frames obtained by the GLITP collaboration with the 2.6 m NOT telescope in 1999/2000 (Alcalde et al. 2002). We give here a detailed account of the data reduction and analysis and of the error estimates. In agreement with Woźniak et al. (2000a,b), the good derived photometric accuracy of the GLITP data allows to discuss the possible interpretation of the light curve of component A as due to a microlensing event taking place in the deflecting galaxy. This interpretation is strengthened by the colour dependence of the early rise of the light curve of component A, as it probably corresponds to a caustics crossing by the QSO source.

  13. Application of the differential decay-curve method to γ-γ fast-timing lifetime measurements

    NASA Astrophysics Data System (ADS)

    Petkov, P.; Régis, J.-M.; Dewald, A.; Kisyov, S.

    2016-10-01

    A new procedure for the analysis of delayed-coincidence lifetime experiments focused on the Fast-timing case is proposed following the approach of the Differential decay-curve method. Examples of application of the procedure on experimental data reveal its reliability for lifetimes even in the sub-nanosecond range. The procedure is expected to improve both precision/reliability and treatment of systematic errors and scarce data as well as to provide an option for cross-check with the results obtained by means of other analyzing methods.

  14. Can binary early warning scores perform as well as standard early warning scores for discriminating a patient's risk of cardiac arrest, death or unanticipated intensive care unit admission?

    PubMed

    Jarvis, Stuart; Kovacs, Caroline; Briggs, Jim; Meredith, Paul; Schmidt, Paul E; Featherstone, Peter I; Prytherch, David R; Smith, Gary B

    2015-08-01

    Although the weightings to be summed in an early warning score (EWS) calculation are small, calculation and other errors occur frequently, potentially impacting on hospital efficiency and patient care. Use of a simpler EWS has the potential to reduce errors. We truncated 36 published 'standard' EWSs so that, for each component, only two scores were possible: 0 when the standard EWS scored 0 and 1 when the standard EWS scored greater than 0. Using 1,564,153 vital signs observation sets from 68,576 patient care episodes, we compared the discrimination (measured using the area under the receiver operator characteristic curve--AUROC) of each standard EWS and its truncated 'binary' equivalent. The binary EWSs had lower AUROCs than the standard EWSs in most cases, although for some the difference was not significant. One system, the binary form of the National Early Warning System (NEWS), had significantly better discrimination than all standard EWSs, except for NEWS. Overall, Binary NEWS at a trigger value of 3 would detect as many adverse outcomes as are detected by NEWS using a trigger of 5, but would require a 15% higher triggering rate. The performance of Binary NEWS is only exceeded by that of standard NEWS. It may be that Binary NEWS, as a simplified system, can be used with fewer errors. However, its introduction could lead to significant increases in workload for ward and rapid response team staff. The balance between fewer errors and a potentially greater workload needs further investigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
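
    A small sketch of the truncation and comparison, with invented component scores and outcome labels standing in for the vital-signs dataset; the trigger thresholds (3 for the binary form, 5 for standard NEWS) are the ones quoted above:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        # hypothetical per-observation component scores of a standard EWS (each 0-3)
        standard_components = rng.integers(0, 4, size=(1000, 6))
        outcome = rng.integers(0, 2, size=1000)      # 1 = adverse outcome (toy labels)

        standard_ews = standard_components.sum(axis=1)
        # binary form: a component contributes 1 wherever the standard component scored > 0
        binary_ews = (standard_components > 0).astype(int).sum(axis=1)

        print("standard AUROC:", roc_auc_score(outcome, standard_ews))
        print("binary AUROC:  ", roc_auc_score(outcome, binary_ews))

        # triggering-rate comparison at the thresholds discussed in the abstract
        print("binary trigger rate (>= 3):  ", np.mean(binary_ews >= 3))
        print("standard trigger rate (>= 5):", np.mean(standard_ews >= 5))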

  15. Intra and interrater reliability of spinal sagittal curves and mobility using pocket goniometer IncliMed® in healthy subjects.

    PubMed

    Alderighi, Marzia; Ferrari, Raffaello; Maghini, Irene; Del Felice, Alessandra; Masiero, Stefano

    2016-11-21

    Radiographic examination is the gold standard to evaluate spine curves, but ionising radiation limits routine use. Non-invasive methods, such as the skin-surface goniometer (IncliMed®), should be used instead. To evaluate intra- and interrater reliability in assessing sagittal curves and mobility of the spine with IncliMed®. A reliability study on competitive football players. Thoracic kyphosis, lumbar lordosis and mobility of the spine were assessed by IncliMed®. Measurements were repeated twice by each examiner during the same session with between-rater blinding. Intrarater and interrater reliability were measured by Intraclass Correlation Coefficient (ICC), 95% Confidence Interval (CI 95%) and Standard Error of Measurement (SEM). Thirty-four healthy female football players (19.17 ± 4.52 years) were enrolled. Statistical results showed high intrarater (0.805-0.923) and interrater (0.701-0.886) reliability (ICC > 0.8). The obtained intra- and interrater SEM were low, with overall absolute intrarater values between 1.39° and 2.76° and overall interrater values between 1.71° and 4.25°. IncliMed® provides high intra- and interrater reliability in healthy subjects, with limited Standard Error of Measurement. These results encourage its use in clinical practice and scientific research.

  16. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware.

    PubMed

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
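
    One common low-cost realization of the Gaussian approximation, offered only as an illustrative sketch (the exact algorithm in the paper may differ): because the logarithm of a Gaussian is a parabola, the peak wavelength follows from a single quadratic fit to the log-intensities of the few pixels around the maximum, with no iteration.

        import numpy as np

        def gaussian_peak_wavelength(wavelengths, intensities):
            """Peak position of a Gaussian-shaped spectrum sampled at a few pixels."""
            wl = np.asarray(wavelengths, dtype=float)
            x = wl - wl.mean()                      # centre for numerical stability
            a, b, _ = np.polyfit(x, np.log(np.asarray(intensities, dtype=float)), 2)
            return wl.mean() - b / (2.0 * a)        # vertex of the fitted parabola

        # usage with hypothetical pixel samples around a 1550.10 nm Bragg peak
        wl = np.array([1549.90, 1550.00, 1550.10, 1550.20, 1550.30])
        amp = np.exp(-((wl - 1550.10) ** 2) / (2 * 0.08 ** 2)) + 1e-6
        print(gaussian_peak_wavelength(wl, amp))    # approximately 1550.10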

  17. Efficacy of reciprocating and rotary NiTi instruments for retreatment of curved root canals assessed by micro-CT.

    PubMed

    Rödig, T; Reicherts, P; Konietschke, F; Dullin, C; Hahn, W; Hülsmann, M

    2014-10-01

    To compare the efficacy of reciprocating and rotary NiTi-instruments in removing filling material from curved root canals using micro-computed tomography. Sixty curved root canals were prepared and filled with gutta-percha and sealer. After determination of root canal curvatures and radii in two directions as well as volumes of filling material, the teeth were assigned to three comparable groups (n = 20). Retreatment was performed using Reciproc, ProTaper Universal Retreatment or Hedström files. Percentages of residual filling material and dentine removal were assessed using micro-CT imaging. Working time and procedural errors were recorded. Statistical analysis was performed by variance procedures. No significant differences amongst the three retreatment techniques concerning residual filling material were detected (P > 0.05). Hedström files removed significantly more dentine than ProTaper Universal Retreatment (P < 0.05), but the difference concerning dentine removal between both NiTi systems was not significant (P > 0.05). Reciproc and ProTaper Universal Retreatment were significantly faster than Hedström files (P = 0.0001). No procedural errors such as instrument fracture, blockage, ledging or perforation were detected for Hedström files. Three perforations were recorded for ProTaper Universal Retreatment, and in both NiTi groups, one instrument fracture occurred. Remnants of filling material were observed in all samples with no significant differences between the three techniques. Hedström files removed significantly more dentine than ProTaper Universal Retreatment, but no significant differences between both NiTi systems were detected. Procedural errors were observed with ProTaper Universal Retreatment and Reciproc. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  18. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
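
    The one-run standard-curve idea can be pictured with a toy calculation (all numbers invented): four iDiLeu channels carry known spiked amounts of the standard peptide, a linear curve is fitted to their peak areas, and the analyte amount in the remaining channel is read off that curve.

        import numpy as np

        spiked_fmol = np.array([10.0, 50.0, 100.0, 500.0])     # hypothetical 4-point curve
        peak_areas = np.array([1.1e5, 5.3e5, 1.02e6, 5.1e6])   # hypothetical channel peak areas

        slope, intercept = np.polyfit(spiked_fmol, peak_areas, 1)

        sample_area = 2.4e6                                    # peak area in the sample channel
        sample_fmol = (sample_area - intercept) / slope
        print(f"estimated analyte amount: {sample_fmol:.1f} fmol")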

  19. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware

    NASA Astrophysics Data System (ADS)

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.

  20. Comparison of gating methods for the real-time analysis of left ventricular function in nonimaging blood pool studies.

    PubMed

    Beard, B B; Stewart, J R; Shiavi, R G; Lorenz, C H

    1995-01-01

    Gating methods developed for electrocardiographic-triggered radionuclide ventriculography are being used with nonimaging detectors. These methods have not been compared on the basis of their real-time performance or suitability for determination of load-independent indexes of left ventricular function. This work evaluated the relative merits of different gating methods for nonimaging radionuclide ventriculographic studies, with particular emphasis on their suitability for real-time measurements and the determination of pressure-volume loops. A computer model was used to investigate the relative accuracy of forward gating, backward gating, and phase-mode gating. The durations of simulated left ventricular time-activity curves were randomly varied. Three acquisition parameters were considered: frame rate, acceptance window, and sample size. Twenty-five studies were performed for each combination of acquisition parameters. Hemodynamic and shape parameters from each study were compared with reference parameters derived directly from the random time-activity curves. Backward gating produced the largest errors under all conditions. For both forward gating and phase-mode gating, ejection fraction was underestimated and time to end systole and normalized peak ejection rate were overestimated. For the hemodynamic parameters, forward gating was marginally superior to phase-mode gating. The mean difference in errors between forward and phase-mode gating was 1.47% (SD 2.78%). However, for root mean square shape error, forward gating was several times worse in every case and seven times worse than phase-mode gating on average. Both forward and phase-mode gating are suitable for real-time hemodynamic measurements by nonimaging techniques. The small statistical difference between the methods is not clinically significant. The true shape of the time-activity curve is maintained most accurately by phase-mode gating.

  1. Comparison of gating methods for the real-time analysis of left ventricular function in nonimaging blood pool studies

    PubMed Central

    Beard, Brian B.; Stewart, James R.; Shiavi, Richard G.; Lorenz, Christine H.

    2018-01-01

    Background Gating methods developed for electrocardiographic-triggered radionuclide ventriculography are being used with nonimaging detectors. These methods have not been compared on the basis of their real-time performance or suitability for determination of load-independent indexes of left ventricular function. This work evaluated the relative merits of different gating methods for nonimaging radionuclide ventriculographic studies, with particular emphasis on their suitability for real-time measurements and the determination of pressure-volume loops. Methods and Results A computer model was used to investigate the relative accuracy of forward gating, backward gating, and phase-mode gating. The durations of simulated left ventricular time-activity curves were randomly varied. Three acquisition parameters were considered: frame rate, acceptance window, and sample size. Twenty-five studies were performed for each combination of acquisition parameters. Hemodynamic and shape parameters from each study were compared with reference parameters derived directly from the random time-activity curves. Backward gating produced the largest errors under all conditions. For both forward gating and phase-mode gating, ejection fraction was underestimated and time to end systole and normalized peak ejection rate were overestimated. For the hemodynamic parameters, forward gating was marginally superior to phase-mode gating. The mean difference in errors between forward and phase-mode gating was 1.47% (SD 2.78%). However, for root mean square shape error, forward gating was several times worse in every case and seven times worse than phase-mode gating on average. Conclusions Both forward and phase-mode gating are suitable for real-time hemodynamic measurements by nonimaging techniques. The small statistical difference between the methods is not clinically significant. The true shape of the time-activity curve is maintained most accurately by phase-mode gating. PMID:9420820

  2. Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.

    PubMed

    Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J

    2017-02-01

    Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc.

  3. A Comparative Study of Electric Load Curve Changes in an Urban Low-Voltage Substation in Spain during the Economic Crisis (2008–2013)

    PubMed Central

    Lara-Santillán, Pedro M.; Mendoza-Villena, Montserrat; Fernández-Jiménez, L. Alfredo; Mañana-Canteli, Mario

    2014-01-01

    This paper presents a comparative study of the electricity consumption (EC) in an urban low-voltage substation before and during the economic crisis (2008–2013). This low-voltage substation supplies electric power to nearly 400 users. The EC was measured for an 11-year period (2002–2012) with a sampling time of 1 minute. The study described in the paper consists of detecting the changes produced over time in the load curves of this substation due to changes in the behaviour of consumers. The EC was compared using representative curves per time period (precrisis and crisis). These representative curves were obtained after a computational process based on a search for days with curves similar to the curve of a given (base) date. This similarity was assessed by proximity on the calendar, day of the week, daylight time, and outdoor temperature. The last selection parameter was the error between the nearest neighbour curves and the base date curve. The obtained representative curves were linearized to determine changes in their structure (maximum and minimum consumption values, duration of the daily time slot, etc.). The results primarily indicate an increase in the EC in the night slot during the summer months in the crisis period. PMID:24895677

  4. Combining the test of memory malingering trial 1 with behavioral responses improves the detection of effort test failure.

    PubMed

    Denning, John Henry

    2014-01-01

    Validity measures derived from the Test of Memory Malingering Trial 1 (TOMM1) and errors across the first 10 items of TOMM1 (TOMMe10) may be further enhanced by combining these scores with "embedded" behavioral responses while patients complete these measures. In a sample of nondemented veterans (n = 151), five possible behavioral responses observed during completion of the first 10 items of the TOMM were combined with TOMM1 and TOMMe10 to assess any increased sensitivity in predicting Medical Symptom Validity Test (MSVT) performance. Both TOMM1 and TOMMe10 alone were highly accurate overall in predicting MSVT performance (TOMM1 [area under the curve (AUC)] = .95, TOMMe10 [AUC] = .92). The combination of TOMM measures and behavioral responses did not increase overall accuracy rates; however, when specificity was held at approximately 90%, there was a slight increase in sensitivity (+7%) for both TOMM measures when combined with the number of "point and name" responses. Examples are provided demonstrating that at a given TOMM score (TOMM1 or TOMMe10), with an increase in "point and name" responses, there is an incremental increase in the probability of failing the MSVT. Exploring the utility of combining freestanding or embedded validity measures with behavioral features during test administration should be encouraged.

  5. Measurement of large steel plates based on linear scan structured light scanning

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao

    2018-01-01

    A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. Firstly, using a calibration plate with round marks, an improved line-scanning calibration method is designed, from which the internal and external parameters of the camera are determined. Secondly, images of the steel plates are acquired by a line scan camera. The Canny edge detection method is then used to extract approximate contours of the steel plate images, and a Gaussian fitting algorithm is used to extract the sub-pixel edges of the contours. Thirdly, to address the inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distances between adjacent points in a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates, and, combined with the internal and external calibration parameters, the size of these contours is calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m over a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.

  6. The effect of constraints on the analytical figures of merit achieved by extended multivariate curve resolution-alternating least-squares.

    PubMed

    Pellegrino Vidal, Rocío B; Allegrini, Franco; Olivieri, Alejandro C

    2018-03-20

    Multivariate curve resolution-alternating least-squares (MCR-ALS) is the model of choice when dealing with some non-trilinear arrays, specifically when the data are of chromatographic origin. To drive the iterative procedure to chemically interpretable solutions, the use of constraints becomes essential. In this work, both simulated and experimental data have been analyzed by MCR-ALS, applying chemically reasonable constraints, and investigating the relationship between selectivity, analytical sensitivity (γ) and root mean square error of prediction (RMSEP). As the selectivity in the instrumental modes decreases, the estimated values for γ did not fully represent the predictive model capabilities, judged from the obtained RMSEP values. Since the available sensitivity expressions have been developed by error propagation theory in unconstrained systems, there is a need of developing new expressions or analytical indicators. They should not only consider the specific profiles retrieved by MCR-ALS, but also the constraints under which the latter ones have been obtained. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. RED NOISE VERSUS PLANETARY INTERPRETATIONS IN THE MICROLENSING EVENT OGLE-2013-BLG-446

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachelet, E.; Bramich, D. M.; AlSubai, K.

    2015-10-20

    For all exoplanet candidates, the reliability of a claimed detection needs to be assessed through a careful study of systematic errors in the data to minimize the false positives rate. We present a method to investigate such systematics in microlensing data sets using the microlensing event OGLE-2013-BLG-0446 as a case study. The event was observed from multiple sites around the world and its high magnification (A_max ∼ 3000) allowed us to investigate the effects of terrestrial and annual parallax. Real-time modeling of the event while it was still ongoing suggested the presence of an extremely low-mass companion (∼3 M⊕) to the lensing star, leading to substantial follow-up coverage of the light curve. We test and compare different models for the light curve and conclude that the data do not favor the planetary interpretation when systematic errors are taken into account.

  8. Neural evidence for enhanced error detection in major depressive disorder.

    PubMed

    Chiu, Pearl H; Deldin, Patricia J

    2007-04-01

    Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.

  9. On the use of the covariance matrix to fit correlated data

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
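
    The effect is easy to reproduce numerically. In the toy example below (numbers invented), two measurements of the same quantity share a 10% normalization uncertainty; building the covariance matrix by the usual linearized propagation and fitting a constant by minimizing the χ2 pulls the estimate below both data points.

        import numpy as np

        y = np.array([8.0, 8.5])       # two measurements of the same quantity
        sigma = np.array([0.1, 0.1])   # independent (statistical) errors
        f = 0.10                       # 10% common normalization uncertainty

        # linearized covariance: V_ij = sigma_i^2 * delta_ij + (f*y_i) * (f*y_j)
        V = np.diag(sigma ** 2) + f ** 2 * np.outer(y, y)
        Vinv = np.linalg.inv(V)

        # constant fit minimizing chi2 = (y - k)^T Vinv (y - k)
        ones = np.ones_like(y)
        k_hat = (ones @ Vinv @ y) / (ones @ Vinv @ ones)
        print(k_hat)                   # about 7.33, below both measurements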

  10. Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).

    PubMed

    Domené, E A; Martínez, O E

    2013-01-01

    An innovative focus error detection method is presented that is only sensitive to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which only depends on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).

  11. Linking Parameters Estimated with the Generalized Graded Unfolding Model: A Comparison of the Accuracy of Characteristic Curve Methods

    ERIC Educational Resources Information Center

    Anderson Koenig, Judith; Roberts, James S.

    2007-01-01

    Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…

  12. Using the area under the curve to reduce measurement error in predicting young adult blood pressure from childhood measures.

    PubMed

    Cook, Nancy R; Rosner, Bernard A; Chen, Wei; Srinivasan, Sathanur R; Berenson, Gerald S

    2004-11-30

    Tracking correlations of blood pressure, particularly childhood measures, may be attenuated by within-person variability. Combining multiple measurements can reduce this error substantially. The area under the curve (AUC) computed from longitudinal growth curve models can be used to improve the prediction of young adult blood pressure from childhood measures. Quadratic random-effects models over unequally spaced repeated measures were used to compute the area under the curve separately within the age periods 5-14 and 20-34 years in the Bogalusa Heart Study. This method adjusts for the uneven age distribution and captures the underlying or average blood pressure, leading to improved estimates of correlation and risk prediction. Tracking correlations were computed by race and gender, and were approximately 0.6 for systolic, 0.5-0.6 for K4 diastolic, and 0.4-0.6 for K5 diastolic blood pressure. The AUC can also be used to regress young adult blood pressure on childhood blood pressure and childhood and young adult body mass index (BMI). In these data, while childhood blood pressure and young adult BMI were generally directly predictive of young adult blood pressure, childhood BMI was negatively correlated with young adult blood pressure when childhood blood pressure was in the model. In addition, racial differences in young adult blood pressure were reduced, but not eliminated, after controlling for childhood blood pressure, childhood BMI, and young adult BMI, suggesting that other genetic or lifestyle factors contribute to this difference. 2004 John Wiley & Sons, Ltd.
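
    A stripped-down version of the idea (a single subject and an ordinary quadratic fit standing in for the quadratic random-effects model; ages and pressures invented): integrate the fitted growth curve over the age window and divide by its length, giving an average childhood level that down-weights visit-to-visit variability.

        import numpy as np

        ages = np.array([5.2, 7.1, 9.8, 11.4, 13.9])        # hypothetical exam ages
        sbp = np.array([98.0, 101.0, 104.0, 109.0, 112.0])  # hypothetical systolic BP (mm Hg)

        poly = np.poly1d(np.polyfit(ages, sbp, 2))          # quadratic growth curve

        a, b = 5.0, 14.0                                    # childhood age window
        auc = np.polyint(poly)(b) - np.polyint(poly)(a)     # area under the fitted curve
        mean_level = auc / (b - a)                          # age-standardized average BP
        print(mean_level)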

  13. The disappearing Environmental Kuznets Curve: a study of water quality in the Lower Mekong Basin (LMB).

    PubMed

    Wong, Yoon Loong Andrew; Lewis, Lynne

    2013-12-15

    The literature is flush with articles focused on estimating the Environmental Kuznets Curve (EKC) for various pollutants and various locations. Most studies have utilized air pollution variables; far fewer have utilized water quality variables, all with mixed results. We suspect that mixed evidence of the EKC stems from model and error specification. We analyze annual data for four water quality indicators, three of them previously unstudied - total phosphorus (TOTP), dissolved oxygen (DO), ammonium (NH4) and nitrites (NO2) - from the Lower Mekong Basin region to determine whether an Environmental Kuznets Curve (EKC) is evident for a transboundary river in a developing country and whether that curve is dependent on model specification and/or pollutant. We build upon previous studies by correcting for the problems of heteroskedasticity, serial correlation and cross-sectional dependence. Unlike multi-country EKC studies, we mitigate for potential distortion from pooling data across geographically heterogeneous locations by analyzing data drawn from proximate locations within a specific international river basin in Southeast Asia. We also attempt to identify vital socioeconomic determinants of water pollution by including a broad list of explanatory variables alongside the income term. Finally, we attempt to shed light on the pollution-income relationship as it pertains to trans-boundary water pollution by examining data from an international river system. We do not find consistent evidence of an EKC for any of the 4 pollutant indicators in this study, but find the results are entirely dependent on model and error specification as well as pollutant. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. [Proposal for the systematization of the elastographic study of mammary lesions through ultrasound scan].

    PubMed

    Fleury, Eduardo de Faria Castro; Fleury, Jose Carlos Vendramini; Oliveira, Vilmar Marques de; Rinaldi, Jose Francisco; Piato, Sebastiao; Roveda Junior, Decio

    2009-01-01

    To propose a systematization of the elastographic study within the routine ultrasound examination. A total of 308 patients referred to the breast intervention service of CTC-Genesis between May 1, 2007 and March 1, 2008 for percutaneous breast biopsy were evaluated. Prior to the percutaneous biopsy, an ultrasound study and an elastography were performed. Lesions were first analyzed and classified according to the BI-RADS lexicon criteria on conventional ultrasound (B mode). Elastography was then performed and analyzed in accordance with the systematization proposed by the authors, using images obtained during compression and after decompression of the area of interest. Lesions were classified following the system developed by the authors on a four-point scale, where scores (1) and (2) were considered benign, score (3) probably benign, and score (4) suspicious for malignancy. Results obtained by the two methods were compared with the histological results using the areas under the ROC (receiver operating characteristic) curves. The area under the curve was 0.952 for elastography, with a confidence interval from 0.910 to 0.966 and an error of 0.023, and 0.867 for ultrasound, with a confidence interval from 0.823 to 0.903 and an error of 0.0333. When the areas were compared, a difference of 0.026 between the curves was observed, which was statistically significant. This work presents a systematization of the elastographic study using information obtained during compression and after decompression in the ultrasound examination, showing that elastography may enhance the assessment of the risk of malignancy for lesions characterized by ultrasound.

  15. Complex, non-monotonic dose-response curves with multiple maxima: Do we (ever) sample densely enough?

    PubMed

    Cvrčková, Fatima; Luštinec, Jiří; Žárský, Viktor

    2015-01-01

    We usually expect the dose-response curves of biological responses to quantifiable stimuli to be simple, either monotonic or exhibiting a single maximum or minimum. Deviations are often viewed as experimental noise. However, detailed measurements in plant primary tissue cultures (stem pith explants of kale and tobacco) exposed to varying doses of sucrose, cytokinins (BA or kinetin) or auxins (IAA or NAA) revealed that growth and several biochemical parameters exhibit multiple reproducible, statistically significant maxima over a wide range of exogenous substance concentrations. This results in complex, non-monotonic dose-response curves, reminiscent of previous reports of analogous observations in both metazoan and plant systems responding to diverse pharmacological treatments. These findings suggest the existence of a hitherto neglected class of biological phenomena resulting in dose-response curves exhibiting periodic patterns of maxima and minima, whose causes remain so far uncharacterized, partly due to insufficient sampling frequency used in many studies.

  16. Using Bayesian Inference Framework towards Identifying Gas Species and Concentration from High Temperature Resistive Sensor Array Data

    DOE PAGES

    Liu, Yixin; Zhou, Kai; Lei, Yu

    2015-01-01

    High temperature gas sensors have been in high demand for combustion process optimization and toxic emission control, but they usually suffer from poor selectivity. In order to solve this selectivity issue and identify unknown reducing gas species (CO, CH4, and CH8) and concentrations, a high temperature resistive sensor array data set was built in this study based on 5 reported sensors. Each sensor showed specific responses towards different types of reducing gas at given concentrations, from which calibration curves were fitted to provide a benchmark sensor array response database. A Bayesian inference framework was then utilized to process the sensor array data and build a sample selection program to simultaneously identify gas species and concentration, by formulating a proper likelihood between the measured sensor array response pattern of an unknown gas and each sampled sensor array response pattern in the benchmark database. This algorithm shows good robustness and can accurately identify gas species and predict gas concentration with a small error of less than 10% based on a limited amount of experimental data. These features indicate that the Bayesian probabilistic approach is a simple and efficient way to process sensor array data, which can significantly reduce the required computational overhead and training data.
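
    A toy sketch of the inference step (the gas names, calibration shapes, noise level, and concentration grid are all assumptions for illustration): each candidate (gas, concentration) pair implies a benchmark five-sensor response pattern, and the pair with the highest likelihood given the observed pattern is reported.

        import numpy as np

        def calib(gas, conc):
            """Hypothetical benchmark response of 5 sensors to `gas` at `conc` ppm."""
            gains = {"CO": np.array([1.0, 0.4, 0.2, 0.8, 0.1]),
                     "CH4": np.array([0.3, 1.2, 0.5, 0.2, 0.6]),
                     "C3H8": np.array([0.5, 0.3, 1.1, 0.4, 0.9])}
            return gains[gas] * np.log1p(conc)          # toy log-shaped calibration curves

        def identify(observed, sigma=0.05):
            best = None
            for gas in ("CO", "CH4", "C3H8"):
                for conc in np.arange(10, 1001, 10):    # candidate concentrations (ppm)
                    # Gaussian log-likelihood of the observed pattern given (gas, conc)
                    ll = -0.5 * np.sum(((observed - calib(gas, conc)) / sigma) ** 2)
                    if best is None or ll > best[0]:
                        best = (ll, gas, conc)
            return best[1], best[2]

        # usage: simulate a noisy reading of the second gas at 250 ppm and recover it
        rng = np.random.default_rng(2)
        obs = calib("CH4", 250) + 0.02 * rng.standard_normal(5)
        print(identify(obs))                            # expected: ('CH4', near 250)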

  17. The effect of an inhaled neutral endopeptidase inhibitor, thiorphan, on airway responses to neurokinin A in normal humans in vivo.

    PubMed

    Cheung, D; Bel, E H; Den Hartigh, J; Dijkman, J H; Sterk, P J

    1992-06-01

    Neuropeptides such as neurokinin A (NKA) have been proposed as important mediators of bronchoconstriction and airway hyperresponsiveness in asthma. Inhaled NKA causes bronchoconstriction in patients with asthma, but not in normal subjects. This is possibly due to the activity of an endogenous neuropeptide-degrading enzyme: neutral endopeptidase (NEP). We investigated whether a NEP-inhibitor, thiorphan, reveals bronchoconstriction to NKA or NKA-induced changes in airway responsiveness to methacholine in normal humans in vivo. Eight normal male subjects participated in a double-blind crossover study, using thiorphan as pretreatment to NKA challenge. Dose-response curves to inhaled NKA (8 to 1,000 micrograms/ml, 0.5 ml/dose) were recorded on 2 randomized days 1 wk apart, and methacholine tests were performed 48 h before and 24 h after the NKA challenge. Ten minutes prior to NKA challenge the subjects inhaled either thiorphan (2.5 mg/ml, 0.5 ml) or placebo. To detect a possible nonspecific effect of thiorphan, we investigated the effect of the same pretreatment with thiorphan or placebo on the dose-response curve to methacholine in a separate set of experiments. The response was measured by the flow from standardized partial expiratory flow-volume curves (V40p), expressed in percent fall from baseline. NKA log dose-response curves were analyzed using the area under the curve (AUC) and the response to the highest dose of 1,000 micrograms/ml (V40p,1000). The methacholine dose-response curves were characterized by their position (PC40V40p) and the maximal-response plateau (MV40p). Baseline V40p was not affected by either pretreatment (p greater than 0.15).(ABSTRACT TRUNCATED AT 250 WORDS)

  18. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-07-17

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
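
    The half-wave-delay principle can be checked on a toy series (the sampling rate, drift, and oscillation amplitude are invented; only the Schuler period is physical): averaging the error series with a copy delayed by half the oscillation period cancels that periodic component while keeping the slowly varying part.

        import numpy as np

        fs = 1.0                                    # samples per second (assumed)
        t = np.arange(0, 12000, 1.0 / fs)           # about 3.3 h of data
        schuler_period = 84.4 * 60                  # Schuler period in seconds

        drift = 1e-4 * t                                           # slowly growing error (toy)
        oscillation = 0.5 * np.sin(2 * np.pi * t / schuler_period)
        series = drift + oscillation

        half = int(round(schuler_period / 2 * fs))  # half-period delay in samples
        restricted = 0.5 * (series[half:] + series[:-half])        # mean of signal and delayed copy

        # spread of the oscillation versus the residual spread after restriction
        print(np.std(oscillation[half:]), np.std(restricted - drift[half:]))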

  19. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  20. Association of Elevated Reward Prediction Error Response With Weight Gain in Adolescent Anorexia Nervosa.

    PubMed

    DeGuzman, Marisa; Shott, Megan E; Yang, Tony T; Riederer, Justin; Frank, Guido K W

    2017-06-01

    Anorexia nervosa is a psychiatric disorder of unknown etiology. Understanding associations between behavior and neurobiology is important in treatment development. Using a novel monetary reward task during functional magnetic resonance brain imaging, the authors tested how brain reward learning in adolescent anorexia nervosa changes with weight restoration. Female adolescents with anorexia nervosa (N=21; mean age, 16.4 years [SD=1.9]) underwent functional MRI (fMRI) before and after treatment; similarly, healthy female control adolescents (N=21; mean age, 15.2 years [SD=2.4]) underwent fMRI on two occasions. Brain function was tested using the reward prediction error construct, a computational model for reward receipt and omission related to motivation and neural dopamine responsiveness. Compared with the control group, the anorexia nervosa group exhibited greater brain response 1) for prediction error regression within the caudate, ventral caudate/nucleus accumbens, and anterior and posterior insula, 2) to unexpected reward receipt in the anterior and posterior insula, and 3) to unexpected reward omission in the caudate body. Prediction error and unexpected reward omission response tended to normalize with treatment, while unexpected reward receipt response remained significantly elevated. Greater caudate prediction error response when underweight was associated with lower weight gain during treatment. Punishment sensitivity correlated positively with ventral caudate prediction error response. Reward system responsiveness is elevated in adolescent anorexia nervosa when underweight and after weight restoration. Heightened prediction error activity in brain reward regions may represent a phenotype of adolescent anorexia nervosa that does not respond well to treatment. Prediction error response could be a neurobiological marker of illness severity that can indicate individual treatment needs.

  1. Association of Elevated Reward Prediction Error Response With Weight Gain in Adolescent Anorexia Nervosa

    PubMed Central

    DeGuzman, Marisa; Shott, Megan E.; Yang, Tony T.; Riederer, Justin; Frank, Guido K.W.

    2017-01-01

    Objective Anorexia nervosa is a psychiatric disorder of unknown etiology. Understanding associations between behavior and neurobiology is important in treatment development. Using a novel monetary reward task during functional magnetic resonance brain imaging, the authors tested how brain reward learning in adolescent anorexia nervosa changes with weight restoration. Method Female adolescents with anorexia nervosa (N=21; mean age, 15.2 years [SD=2.4]) underwent functional MRI (fMRI) before and after treatment; similarly, healthy female control adolescents (N=21; mean age, 16.4 years [SD=1.9]) underwent fMRI on two occasions. Brain function was tested using the reward prediction error construct, a computational model for reward receipt and omission related to motivation and neural dopamine responsiveness. Results Compared with the control group, the anorexia nervosa group exhibited greater brain response 1) for prediction error regression within the caudate, ventral caudate/nucleus accumbens, and anterior and posterior insula, 2) to unexpected reward receipt in the anterior and posterior insula, and 3) to unexpected reward omission in the caudate body. Prediction error and unexpected reward omission response tended to normalize with treatment, while unexpected reward receipt response remained significantly elevated. Greater caudate prediction error response when underweight was associated with lower weight gain during treatment. Punishment sensitivity correlated positively with ventral caudate prediction error response. Conclusions Reward system responsiveness is elevated in adolescent anorexia nervosa when underweight and after weight restoration. Heightened prediction error activity in brain reward regions may represent a phenotype of adolescent anorexia nervosa that does not respond well to treatment. Prediction error response could be a neurobiological marker of illness severity that can indicate individual treatment needs. PMID:28231717

  2. Flow interference in a variable porosity trisonic wind tunnel.

    NASA Technical Reports Server (NTRS)

    Davis, J. W.; Graham, R. F.

    1972-01-01

    Pressure data from a 20-degree cone-cylinder in a variable porosity wind tunnel for the Mach range 0.2 to 5.0 are compared to an interference free standard in order to determine wall interference effects. Four 20-degree cone-cylinder models representing an approximate range of percent blockage from one to six were compared to curve-fits of the interference free standard at each Mach number and errors determined at each pressure tap location. The average of the absolute values of the percent error over the length of the model was determined and used as the criterion for evaluating model blockage interference effects. The results are presented in the form of the percent error as a function of model blockage and Mach number.

  3. Target/error overlap in jargonaphasia: The case for a one-source model, lexical and non-lexical summation, and the special status of correct responses.

    PubMed

    Olson, Andrew; Halloran, Elizabeth; Romani, Cristina

    2015-12-01

    We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Dose-Response Calculator for ArcGIS

    USGS Publications Warehouse

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.

  5. Multiple Cognitive Control Effects of Error Likelihood and Conflict

    PubMed Central

    Brown, Joshua W.

    2010-01-01

    Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873

  6. Calibration of a stack of NaI scintillators at the Berkeley Bevalac

    NASA Technical Reports Server (NTRS)

    Schindler, S. M.; Buffington, A.; Lau, K.; Rasmussen, I. L.

    1983-01-01

    An analysis of the carbon and argon data reveals that essentially all of the charge-changing fragmentation reactions within the stack can be identified and removed by imposing the simple criteria relating the observed energy deposition profiles to the expected Bragg curve depositions. It is noted that these criteria are even capable of identifying approximately one-third of the expected neutron-stripping interactions, which in these cases have anomalous deposition profiles. The contribution of mass error from uncertainty in delta E has an upper limit of 0.25 percent for Mn; this produces an associated mass error for the experiment of about 0.14 amu. It is believed that this uncertainty will change little with changing gamma. Residual errors in the mapping produce even smaller mass errors for lighter isotopes, whereas photoelectron fluctuations and delta-ray effects are approximately the same independent of the charge and energy deposition.

  7. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.

    2016-12-01

    It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.

  8. Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface

    PubMed Central

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieve two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to suppress the effect of measurement defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the feasibility of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization. PMID:25551467

  9. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    PubMed

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieve two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to suppress the effect of measurement defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the feasibility of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization.

  10. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    NASA Astrophysics Data System (ADS)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used because of their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise in IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from a single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with maximum errors of less than 2.5%, which demonstrates the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
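
    As a rough illustration of the pipeline described above, the sketch below differentiates a synthetic charging curve to obtain dQ/dV, smooths it with a Gaussian filter, locates a feature of interest, and maps its position to capacity by linear regression. All data and coefficients are hypothetical, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)

# Hypothetical charging data: cell voltage and cumulative charged capacity.
voltage = np.linspace(3.0, 4.2, 500)                       # V
capacity = 2.5 / (1.0 + np.exp(-(voltage - 3.7) * 10.0))   # Ah, synthetic sigmoid curve

# Incremental capacity dQ/dV is noisy when obtained by numerical differentiation.
ic_raw = np.gradient(capacity, voltage)
ic_raw += rng.normal(0.0, 0.2, ic_raw.size)                # measurement noise

# Gaussian smoothing suppresses the noise so ageing signatures (peak positions)
# can be identified reliably.
ic_smooth = gaussian_filter1d(ic_raw, sigma=5)

# Feature of interest (FOI): voltage of the main IC peak.
v_peak = voltage[np.argmax(ic_smooth)]

# With FOI positions collected from reference cells of known capacity, SoH is
# estimated by a simple linear regression capacity ~ a * v_peak + b.
foi_positions = np.array([3.68, 3.70, 3.72, 3.74])         # hypothetical, V
capacities = np.array([2.50, 2.41, 2.30, 2.18])            # hypothetical, Ah
a, b = np.polyfit(foi_positions, capacities, 1)
print(f"FOI at {v_peak:.3f} V -> estimated capacity {a * v_peak + b:.2f} Ah")
```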

  11. Cognitive Control Functions of Anterior Cingulate Cortex in Macaque Monkeys Performing a Wisconsin Card Sorting Test Analog

    PubMed Central

    Kuwabara, Masaru; Mansouri, Farshad A.; Buckley, Mark J.

    2014-01-01

    Monkeys were trained to select one of three targets by matching in color or matching in shape to a sample. Because the matching rule frequently changed and there were no cues for the currently relevant rule, monkeys had to maintain the relevant rule in working memory to select the correct target. We found that monkeys' error commission was not limited to the period after the rule change and occasionally occurred even after several consecutive correct trials, indicating that the task was cognitively demanding. In trials immediately after such error trials, monkeys' speed of selecting targets was slower. Additionally, in trials following consecutive correct trials, the monkeys' target selections for erroneous responses were slower than those for correct responses. We further found evidence for the involvement of the cortex in the anterior cingulate sulcus (ACCs) in these error-related behavioral modulations. First, ACCs cell activity differed between after-error and after-correct trials. In another group of ACCs cells, the activity differed depending on whether the monkeys were making a correct or erroneous decision in target selection. Second, bilateral ACCs lesions significantly abolished the response slowing both in after-error trials and in error trials. The error likelihood in after-error trials could be inferred by the error feedback in the previous trial, whereas the likelihood of erroneous responses after consecutive correct trials could be monitored only internally. These results suggest that ACCs represent both context-dependent and internally detected error likelihoods and promote modes of response selections in situations that involve these two types of error likelihood. PMID:24872558

  12. No evidence for an open vessel effect in centrifuge-based vulnerability curves of a long-vesselled liana (Vitis vinifera).

    PubMed

    Jacobsen, Anna L; Pratt, R Brandon

    2012-06-01

    Vulnerability to cavitation curves are used to estimate xylem cavitation resistance and can be constructed using multiple techniques. It was recently suggested that a technique that relies on centrifugal force to generate negative xylem pressures may be susceptible to an open vessel artifact in long-vesselled species. Here, we used custom centrifuge rotors to measure different sample lengths of 1-yr-old stems of grapevine to examine the influence of open vessels on vulnerability curves, thus testing the hypothesized open vessel artifact. These curves were compared with a dehydration-based vulnerability curve. Although samples differed significantly in the number of open vessels, there was no difference in the vulnerability to cavitation measured on 0.14- and 0.271-m-long samples of Vitis vinifera. Dehydration and centrifuge-based curves showed a similar pattern of declining xylem-specific hydraulic conductivity (K(s)) with declining water potential. The percentage loss in hydraulic conductivity (PLC) differed between dehydration and centrifuge curves and it was determined that grapevine is susceptible to errors in estimating maximum K(s) during dehydration because of the development of vessel blockages. Our results from a long-vesselled liana do not support the open vessel artifact hypothesis. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.

  13. [Study on quantitative model for suspended sediment concentration in Taihu Lake].

    PubMed

    Chen, Jun; Zhou, Guan-hua; Wen, Zhen-he; Ma, Jin-Feng; Zhang, Xu; Peng, Dan-qing; Yang, Song-lin

    2010-01-01

    The complicated compositions of Case II waters result in complex spectral curve properties. This paper analysed in situ spectral measurements to relate spectral curve properties to suspended sediment concentration. The study found that the main peak of the spectral curves shifted toward shorter wavelengths as suspended sediment concentration increased (a blue shift of wavelength), and that the area enclosed by the spectral curve and the coordinate axis within the sensitive bands had a nearly linear relationship with suspended sediment concentration (curve area model). A trapezoidal area model, which approximates the curve area model, could also capture this relationship well and is well suited to retrieval from multi-spectral satellite imagery such as Landsat/TM and MODIS. Inversion of the trapezoidal area model for a Landsat/TM image of Taihu Lake acquired on October 27, 2003 showed suspended sediment concentrations ranging from 30 to 80 mg x L(-1), with a distribution pattern that was higher in the western, southern and central lake and lower in the eastern lake; compared with the in situ measurements in these regions, the relative error of the retrieval model was 6.035%.
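
    A minimal sketch of the trapezoidal area model described above: integrate the reflectance spectrum over the sensitive band with the trapezoidal rule and regress concentration on that area. Wavelengths, reflectances, and regression coefficients here are hypothetical.

```python
import numpy as np

# Hypothetical reflectance spectrum over the sensitive band (wavelength in nm).
wavelength = np.array([700.0, 720.0, 740.0, 760.0, 780.0])
reflectance = np.array([0.032, 0.041, 0.046, 0.043, 0.037])

# Trapezoidal approximation of the area enclosed by the spectral curve over the band.
area = np.trapz(reflectance, wavelength)

# With in situ samples, suspended sediment concentration (SSC) is regressed
# linearly on this area: SSC ~ a * area + b (coefficients below are illustrative).
a, b = 65.0, 5.0
ssc = a * area + b
print(f"band area = {area:.3f}, estimated SSC = {ssc:.1f} mg/L")
```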

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
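
    For reference, the conventional single-exponential RB analysis that this record builds on fits survival probability versus sequence length as P(m) = A·p^m + B and converts the decay parameter to an error rate. The sketch below illustrates that standard fit on synthetic data (it does not implement the revised theories the record describes); r = (d−1)(1−p)/d is the usual conversion for Hilbert-space dimension d.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    """Standard randomized-benchmarking decay model P(m) = A * p**m + B."""
    return A * p**m + B

# Hypothetical survival probabilities vs. sequence length for a single qubit (d = 2).
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
survival = 0.5 * 0.995**lengths + 0.5
survival += np.random.default_rng(1).normal(0, 0.005, survival.size)  # sampling noise

(A, B, p), _ = curve_fit(rb_decay, lengths, survival, p0=(0.5, 0.5, 0.99))

d = 2
r = (d - 1) * (1 - p) / d   # conventional RB error rate from the decay parameter
print(f"decay parameter p = {p:.4f}, RB error rate r = {r:.2e}")
```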

  15. Fieldable computer system for determining gamma-ray pulse-height distributions, flux spectra, and dose rates from Little Boy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, C.E.; Lucas, M.C.; Tisinger, E.W.

    1984-01-01

    Our system consists of a LeCroy 3500 data acquisition system with a built-in CAMAC crate and eight bismuth-germanate detectors 7.62 cm in diameter and 7.62 cm long. Gamma-ray pulse-height distributions are acquired simultaneously for up to eight positions. The system was very carefully calibrated and characterized from 0.1 to 8.3 MeV using gamma-ray spectra from a variety of radioactive sources. By fitting the pulse-height distributions from the sources with a function containing 17 parameters, we determined theoretical response functions. We use these response functions to unfold the distributions to obtain flux spectra. A flux-to-dose-rate conversion curve based on the work of Dimbylow and Francis is then used to obtain dose rates. Direct use of measured spectra and flux-to-dose-rate curves to obtain dose rates avoids the errors that can arise from spectrum dependence in simple gamma-ray dosimeter instruments. We present some gamma-ray doses for the Little Boy assembly operated at low power. These results can be used to determine the exposures of the Hiroshima survivors and thus aid in the establishment of radiation exposure limits for the nuclear industry.

  16. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    USGS Publications Warehouse

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  17. Job Strain and the Cortisol Diurnal Cycle in MESA: Accounting for Between- and Within-Day Variability

    PubMed Central

    Rudolph, Kara E.; Sánchez, Brisa N.; Stuart, Elizabeth A.; Greenberg, Benjamin; Fujishiro, Kaori; Wand, Gary S.; Shrager, Sandi; Seeman, Teresa; Diez Roux, Ana V.; Golden, Sherita H.

    2016-01-01

    Evidence of the link between job strain and cortisol levels has been inconsistent. This could be due to failure to account for cortisol variability leading to underestimated standard errors. Our objective was to model the relationship between job strain and the whole cortisol curve, accounting for sources of cortisol variability. Our functional mixed-model approach incorporated all available data—18 samples over 3 days—and uncertainty in estimated relationships. We used employed participants from the Multi-Ethnic Study of Atherosclerosis Stress I Study and data collected between 2002 and 2006. We used propensity score matching on an extensive set of variables to control for sources of confounding. We found that job strain was associated with lower salivary cortisol levels and lower total area under the curve. We found no relationship between job strain and the cortisol awakening response. Our findings differed from those of several previous studies. It is plausible that our results were unique to middle- to older-aged racially, ethnically, and occupationally diverse adults and were therefore not inconsistent with previous research among younger, mostly white samples. However, it is also plausible that previous findings were influenced by residual confounding and failure to propagate uncertainty (i.e., account for the multiple sources of variability) in estimating cortisol features. PMID:26905339

  18. Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.

    PubMed

    VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T

    2017-06-01

    The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface, to compare the radius of curvature in the horizontal and vertical meridians, and to test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine the foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. The radius of curvature was compared across the different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature differed significantly between the horizontal and vertical meridians only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.
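
    Modeling a segmented cross-sectional contour as a circular arc and reporting its radius of curvature and RMSE can be done with an algebraic least-squares circle fit. The sketch below is a generic illustration of that approach on synthetic contour points, not the authors' custom software.

```python
import numpy as np

def fit_circle(x, z):
    """Algebraic (Kasa) least-squares circle fit; returns centre (a, b) and radius R."""
    # Circle (x-a)^2 + (z-b)^2 = R^2 rewritten as x^2 + z^2 = 2a*x + 2b*z + (R^2 - a^2 - b^2).
    A = np.column_stack([x, z, np.ones_like(x)])
    rhs = x**2 + z**2
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b = c[0] / 2.0, c[1] / 2.0
    R = np.sqrt(c[2] + a**2 + b**2)
    return a, b, R

# Hypothetical segmented foveal contour points (microns) over a 500-um fitting distance.
x = np.linspace(-250.0, 250.0, 51)
true_R = 970.0
z = true_R - np.sqrt(true_R**2 - x**2)                  # circular pit of radius ~970 um
z += np.random.default_rng(2).normal(0.0, 2.0, z.size)  # segmentation noise

a, b, R = fit_circle(x, z)
rmse = np.sqrt(np.mean((np.hypot(x - a, z - b) - R) ** 2))
print(f"radius of curvature = {R:.0f} um, RMSE = {rmse:.2f} um")
```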

  19. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information between the calculated dose image and the film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to a film relative dose image. The dose agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to obtain a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning that some dose errors in the film would be falsely corrected to keep the film dose consistent with the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which exposes the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method was improved by analyzing the monotonicity of the derivative of the calibration curve.
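
    A rough sketch of the plan-based calibration idea as described above: pair film pixel values with the calculated dose at the same positions, fit a smooth dose-versus-pixel-value model, apply it to the film image, and check the monotonicity of the calibration curve's derivative. The polynomial form, data, and noise level are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical aligned pixel samples: calculated dose from the treatment plan and
# the net film pixel value at the same positions (flattened 2D images).
dose_calc = rng.uniform(0.0, 2.0, 5000)                          # Gy
pixel_val = 3.0e4 - 8.0e3 * dose_calc + 1.2e3 * dose_calc**2     # darker film at higher dose
pixel_val += rng.normal(0.0, 150.0, pixel_val.size)              # scanner noise

# Plan-based calibration idea: fit dose as a smooth function of pixel value using
# the plan's own dose/pixel pairs (a cubic polynomial is assumed here).
calibration = np.poly1d(np.polyfit(pixel_val, dose_calc, deg=3))

# Apply the calibration to convert the film grayscale image to a relative dose image.
film_dose = calibration(pixel_val)
print(f"mean calibrated film dose = {film_dose.mean():.2f} Gy")

# Robustness check suggested above: a non-monotonic derivative of the calibration
# curve flags a dose abnormality (possible over-calibration).
pv_grid = np.linspace(pixel_val.min(), pixel_val.max(), 200)
deriv = calibration.deriv()(pv_grid)
monotonic = bool(np.all(np.diff(deriv) <= 0) or np.all(np.diff(deriv) >= 0))
print("derivative of calibration curve monotonic:", monotonic)
```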

  20. Photoelectric photometry of the RS CVn binary EI Eridani = HD 26337

    NASA Technical Reports Server (NTRS)

    Hooten, J. T.; Strassmeier, K. G.; Hall, D. S.; Barksdale, W. S., Jr.; Bertoglio, A.

    1989-01-01

    Differential UBV(RI)sub KC and UBVRI photometry of the RS CVn binary EI Eridani obtained during December 1987 and January 1988 at fourteen different observatories is presented. A combined visual bandpass light curve, corrected for systematic errors of the different observatories, utilizes the photometric period of 1.945 days to produce useful results. The analysis shows the visual light curve to have twin maxima, separated by about 0.4 in phase, and a full amplitude of approximately 0.06 mag for the period of observation, a smaller amplitude than reported in the past. The decrease in amplitude may be due to a decrease or homogenization of spot coverage. To fit the asymmetrical light curve, a starspot model would have to employ at least two spotted regions separated in longitude.

  1. Blazhko Effect

    NASA Technical Reports Server (NTRS)

    Teays, Terry

    1996-01-01

    The cause of the Blazhko effect, the long-term modulation of the light and radial velocity curves of some RR Lyr stars, is still not understood. The observational characteristics of the Blazhko effect are discussed. Some preliminary results are presented from two recent campaigns to observe RR Lyr, using the International Ultraviolet Explorer along with ground-based spectroscopy and photometry, throughout a pulsation cycle, at a variety of Blazhko phases. A set of ultraviolet light curves have been generated from low dispersion IUE spectra. In addition, the (visual) light curves from IUE's Fine Error Sensor are analyzed using the Fourier decomposition technique. The values of the parameters Psi(sub 21) and R(sub 21) at different Blazhko phases of RR Lyr span the range of values found for non-Blazhko variables of similar period.

  2. Construction of dose response calibration curves for dicentrics and micronuclei for X radiation in a Serbian population.

    PubMed

    Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A

    2014-10-01

    Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, a biological dosimetry service has to have its own dose response calibration curve. This paper reports the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers in order to establish biodosimetry in Serbia and produce dose response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is: Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D^2; and for micronuclei: Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D^2. Following establishment of the dose response curves, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
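
    Once such a linear-quadratic curve Y = C + αD + βD² is established, dose is estimated from an observed aberration yield by solving the quadratic for the positive root. A small sketch using the dicentric coefficients quoted above (the scoring example is hypothetical):

```python
import math

# Coefficients of the fitted dicentric curve Y = C + alpha*D + beta*D^2 (from the abstract).
C, alpha, beta = 0.0009, 0.0421, 0.0602

def dose_from_yield(y, c=C, a=alpha, b=beta):
    """Solve b*D^2 + a*D + (c - y) = 0 for the positive root (dose in Gy)."""
    disc = a**2 - 4.0 * b * (c - y)
    if disc < 0:
        raise ValueError("observed yield below the fitted background")
    return (-a + math.sqrt(disc)) / (2.0 * b)

# Hypothetical example: 25 dicentrics scored in 500 cells -> yield 0.05 dicentrics/cell.
print(f"estimated dose = {dose_from_yield(25 / 500):.2f} Gy")
```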

  3. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.

  4. First Job Search of Residents in the United States: A Survey of Anesthesiology Trainees' Interest in Academic Positions in Cities Distant from Previous Residences.

    PubMed

    Dexter, Franklin; De Oliveira, Gildasio S; McCarthy, Robert J

    2016-01-15

    We surveyed anesthesiology residents to evaluate the predictive effect of prior residence on desired location for future practice opportunities. One thousand five hundred United States anesthesiology residents were invited to participate. One question asked whether they intend to enter academic practice when they graduate from their residency/fellowship training. The analysis categorized the responses into "surely yes" and "probably" versus "even," "probably not," and "surely no." "After finishing your residency/fellowship training, are you planning to look seriously (e.g., interview) at jobs located more than a 2-hour drive from a location where you or your family (e.g., spouse or partner/significant other) have lived previously?" Responses were categorized into "very probably" and "somewhat probably" versus "somewhat improbably" and "not probable." Other questions explored predictors of the relationships quantified using the area under the receiver operating characteristic curve (area under the curve) ± its standard error. Among the 696 respondents, 36.9% (N = 256) would "probably" consider an academic practice. Fewer than half of those (P < 0.0001) would "very probably" consider a distant location (31.6%, 99% CI 24.4%-39.6%). Respondents with prior formal research training (e.g., PhD or Master's) had greater interest in academic practice at a distant location (AUC 0.63 ± 0.03, P = 0.0002). Except among respondents with formal research training, a good question to ask a job applicant is whether the applicant or the applicant's family has previously lived in the area.

  5. Nicotine-induced activation of caudate and anterior cingulate cortex in response to errors in schizophrenia.

    PubMed

    Moran, Lauren V; Stoeckel, Luke E; Wang, Kristina; Caine, Carolyn E; Villafuerte, Rosemond; Calderon, Vanessa; Baker, Justin T; Ongur, Dost; Janes, Amy C; Evins, A Eden; Pizzagalli, Diego A

    2018-03-01

    Nicotine improves attention and processing speed in individuals with schizophrenia. Few studies have investigated the effects of nicotine on cognitive control. Prior functional magnetic resonance imaging (fMRI) research demonstrates blunted activation of dorsal anterior cingulate cortex (dACC) and rostral anterior cingulate cortex (rACC) in response to error and decreased post-error slowing in schizophrenia. Participants with schizophrenia (n = 13) and healthy controls (n = 12) participated in a randomized, placebo-controlled, crossover study of the effects of transdermal nicotine on cognitive control. For each drug condition, participants underwent fMRI while performing the stop signal task where participants attempt to inhibit prepotent responses to "go (motor activation)" signals when an occasional "stop (motor inhibition)" signal appears. Error processing was evaluated by comparing "stop error" trials (failed response inhibition) to "go" trials. Resting-state fMRI data were collected prior to the task. Participants with schizophrenia had increased nicotine-induced activation of right caudate in response to errors compared to controls (DRUG × GROUP effect: p corrected  < 0.05). Both groups had significant nicotine-induced activation of dACC and rACC in response to errors. Using right caudate activation to errors as a seed for resting-state functional connectivity analysis, relative to controls, participants with schizophrenia had significantly decreased connectivity between the right caudate and dACC/bilateral dorsolateral prefrontal cortices. In sum, we replicated prior findings of decreased post-error slowing in schizophrenia and found that nicotine was associated with more adaptive (i.e., increased) post-error reaction time (RT). This proof-of-concept pilot study suggests a role for nicotinic agents in targeting cognitive control deficits in schizophrenia.

  6. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and object containing curved features.

  7. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  8. Intranasal Pharmacokinetic Data for Triptans Such as Sumatriptan and Zolmitriptan Can Render Area Under the Curve (AUC) Predictions for the Oral Route: Strategy Development and Application.

    PubMed

    Srinivas, Nuggehally R; Syed, Muzeeb

    2016-01-01

    A limited pharmacokinetic sampling strategy may be useful for predicting the area under the curve (AUC) for triptans and may have clinical utility as a prospective prediction tool. Using appropriate intranasal pharmacokinetic data, a Cmax vs. AUC relationship was established by linear regression models for sumatriptan and zolmitriptan. Predictions of the AUC values were performed using published mean/median Cmax data and the appropriate regression lines. The quotient of observed and predicted values gave the fold difference. The mean absolute error (MAE), mean positive error (MPE), mean negative error (MNE), root mean square error (RMSE), correlation coefficient (r), and the goodness of the AUC fold prediction were used to evaluate the two triptans. In addition, data from the mean concentration profiles at time points of 1 hour (sumatriptan) and 3 hours (zolmitriptan) were used for the AUC prediction. The Cmax vs. AUC models displayed excellent correlation for both sumatriptan (r = .9997; P < .001) and zolmitriptan (r = .9999; P < .001). For both triptans, the majority of the predicted AUCs (83%-85%) were within a 0.76-1.25-fold difference using the regression model. The predictions of AUC values for sumatriptan or zolmitriptan using the concentration data that reflected the Tmax occurrence were in the proximity of the reported values. In summary, the Cmax vs. AUC models exhibited strong correlations for sumatriptan and zolmitriptan. The usefulness of the AUC predictions was established by a rigorous statistical approach.
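
    The evaluation statistics named above (MAE, RMSE, and the fraction of predictions within a 0.76-1.25 fold difference) are straightforward to compute from paired observed and predicted AUC values. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical observed and regression-predicted AUC values; numbers are illustrative only.
auc_observed  = np.array([78.0, 95.0, 110.0, 62.0, 88.0])
auc_predicted = np.array([82.0, 90.0, 118.0, 60.0, 84.0])

errors = auc_predicted - auc_observed
mae = np.mean(np.abs(errors))
rmse = np.sqrt(np.mean(errors**2))

# Fold difference: quotient of observed and predicted; 0.76-1.25 taken as acceptable.
fold = auc_observed / auc_predicted
within = np.mean((fold >= 0.76) & (fold <= 1.25)) * 100

print(f"MAE = {mae:.1f}, RMSE = {rmse:.1f}, within 0.76-1.25 fold: {within:.0f}%")
```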

  9. Endodontic complications of root canal therapy performed by dental students with stainless-steel K-files and nickel-titanium hand files.

    PubMed

    Pettiette, M T; Metzger, Z; Phillips, C; Trope, M

    1999-04-01

    Straightening of curved canals is one of the most common procedural errors in endodontic instrumentation. This problem is commonly encountered when dental students perform molar endodontics. The purpose of this study was to compare the effect of the type of instrument used by these students on the extent of straightening and on the incidence of other endodontic procedural errors. Nickel-titanium 0.02 taper hand files were compared with traditional stainless-steel 0.02 taper K-files. Sixty molar teeth comprising maxillary and mandibular first and second molars were treated by senior dental students. Instrumentation was with either nickel-titanium hand files or stainless-steel K-files. Preoperative and postoperative radiographs of each tooth were taken using an XCP precision instrument with a customized bite block to ensure accurate reproduction of radiographic angulation. The radiographs were scanned and the images stored as TIFF files. By superimposing tracings from the preoperative over the postoperative radiographs, the degree of deviation of the apical third of the root canal filling from the original canal was measured. The presence of other errors, such as strip perforation and instrument breakage, was established by examining the radiographs. In curved canals instrumented by stainless-steel K-files, the average deviation of the apical third of the canals was 14.44 degrees (+/- 10.33 degrees). The deviation was significantly reduced, to an average of 4.39 degrees (+/- 4.53 degrees), when nickel-titanium hand files were used. The incidence of other procedural errors was also significantly reduced by the use of nickel-titanium hand files.

  10. Directional control-response compatibility relationships assessed by physical simulation of an underground bolting machine.

    PubMed

    Steiner, Lisa; Burgess-Limerick, Robin; Porter, William

    2014-03-01

    The authors examine the pattern of direction errors made during the manipulation of a physical simulation of an underground coal mine bolting machine to assess the directional control-response compatibility relationships associated with the device and to compare these results with data obtained from a virtual simulation of a generic device. Directional errors during the manual control of underground coal roof bolting equipment are associated with serious injuries. Directional control-response relationships have previously been examined using a virtual simulation of a generic device; however, the applicability of these results to a specific physical device may be questioned. Forty-eight participants randomly assigned to different directional control-response relationships manipulated horizontal or vertical control levers to move a simulated bolter arm in three directions (elevation, slew, and sump) as well as to cause a light to become illuminated and to raise or lower a stabilizing jack. Directional errors were recorded during the completion of 240 trials by each participant. Directional error rates increased when the control and response were in opposite directions or when the directions of the control and response were perpendicular. The pattern of direction error rates was consistent with results obtained from a generic device in a virtual environment. Error rates are increased by incompatible directional control-response relationships. Ensuring that the design of equipment controls maintains compatible directional control-response relationships has the potential to reduce errors made in high-risk situations, such as underground coal mining.

  11. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  12. Determination of injection molding process windows for optical lenses using response surface methodology.

    PubMed

    Tsai, Kuo-Ming; Wang, He-Yi

    2014-08-20

    This study focuses on determining injection molding process windows for obtaining optimal imaging optical properties (astigmatism, coma, and spherical aberration) of plastic lenses. The Taguchi experimental method was first used to identify the optimized combination of parameters and the significant factors affecting the imaging optical properties of the lens. Full factorial experiments were then implemented based on the significant factors to build the response surface models. The injection molding process windows for lenses with optimized optical properties were determined based on the surface models, and confirmation experiments were performed to verify their validity. The results indicated that the significant factors affecting the optical properties of the lenses are mold temperature, melt temperature, and cooling time. According to the experimental data for the significant factors, the oblique ovals for the different optical properties on the injection molding process windows based on melt temperature and cooling time can be obtained using a curve fitting approach. The confirmation experiments revealed that the average errors for astigmatism, coma, and spherical aberration are 3.44%, 5.62%, and 5.69%, respectively. The results indicated that the proposed process windows are highly reliable.
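
    A common way to realize the response-surface step described above is to fit a full second-order polynomial in the significant factors and mark the factor region where the predicted response stays within tolerance. The sketch below is a generic illustration with hypothetical data for one response (astigmatism) and two factors (melt temperature and cooling time), not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical experimental runs: melt temperature (deg C), cooling time (s), astigmatism.
melt_T = rng.uniform(230.0, 270.0, 30)
cool_t = rng.uniform(10.0, 30.0, 30)
astig = 0.02 + 1e-5 * (melt_T - 250.0)**2 + 4e-5 * (cool_t - 20.0)**2
astig += rng.normal(0.0, 0.001, astig.size)

# Full second-order response surface: intercept, linear, quadratic and interaction terms.
X = np.column_stack([np.ones_like(melt_T), melt_T, cool_t,
                     melt_T**2, cool_t**2, melt_T * cool_t])
coeffs, *_ = np.linalg.lstsq(X, astig, rcond=None)

# Process window: region of (melt_T, cool_t) where the predicted response stays
# below a chosen tolerance.
grid_T, grid_t = np.meshgrid(np.linspace(230.0, 270.0, 81), np.linspace(10.0, 30.0, 41))
Xg = np.column_stack([np.ones(grid_T.size), grid_T.ravel(), grid_t.ravel(),
                      grid_T.ravel()**2, grid_t.ravel()**2, grid_T.ravel() * grid_t.ravel()])
predicted = (Xg @ coeffs).reshape(grid_T.shape)
window = predicted < 0.025
print(f"{window.mean() * 100:.0f}% of the tested grid lies inside the process window")
```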

  13. Automation of the Beyer & Schweiger (1969) method for determining hydraulic conductivity and porosity from grain-size distribution curves

    NASA Astrophysics Data System (ADS)

    Houben, Georg J.; Blümel, Martin

    2017-11-01

    Porosity is a fundamental parameter in hydrogeology. The empirical method of Beyer and Schweiger (1969) allows the calculation of hydraulic conductivity and both the total and effective porosity from granulometric data. However, due to its graphical nature with type curves, it is tedious to apply and prone to reading errors. In this work, the type curves were digitized and emulated by mathematical functions. The latter were implemented into a spreadsheet and a visual basic program, allowing the fast automated application of the method for any number of samples.

  14. Quantitative assessment of responses of the eyeball based on data from the Corvis tonometer.

    PubMed

    Koprowski, Robert; Wilczyński, Sławomir; Nowinska, Anna; Lyssek-Boron, Anita; Teper, Sławomir; Wylegala, Edward; Wróbel, Zygmunt

    2015-03-01

    "Air-puff" tonometers, including the Corvis, are devices for measuring intraocular pressure and biomechanical parameters. This paper analyses the response of the eyeball and its relationship with other parameters measured by the Corvis tonometer. A total of 13,400 2D images acquired with the Corvis device were analysed (32 healthy and 16 ill subjects). A new method is proposed for the analysis of responses of the eyeball based on morphological transformations and contextual operations. The proposed algorithm determines the response of the eyeball to an air puff from the Corvis tonometer. Additionally, responses of the eyeball were linked to selected features of corneal deformation. The results include, among others: (1) distinguishability between the left and right eye with an error of 7%; (2) a correlation of -0.26 between the area under the corneal deformation curve and the response of the eyeball; (3) a correlation of 0.4 between the highest concavity time and the maximum deformation amplitude. All these features are obtained fully automatically and reproducibly in 3.8 s per patient (Core i7, 10 GB RAM). It is possible to measure additional parameters of eye deformation that are not available in the original software of the Corvis tonometer. The use of the proposed methods of image analysis and processing provides results directly from the eye response measurement while intraocular pressure is being measured. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Monetary Incentives in Speeded Perceptual Decision: Effects of Penalizing Errors Versus Slow Responses

    PubMed Central

    Dambacher, Michael; Hübner, Ronald; Schlösser, Jan

    2011-01-01

    The influence of monetary incentives on performance has been widely investigated among various disciplines. While the results reveal positive incentive effects only under specific conditions, the exact nature, and the contribution of mediating factors are largely unexplored. The present study examined influences of payoff schemes as one of these factors. In particular, we manipulated penalties for errors and slow responses in a speeded categorization task. The data show improved performance for monetary over symbolic incentives when (a) penalties are higher for slow responses than for errors, and (b) neither slow responses nor errors are punished. Conversely, payoff schemes with stronger punishment for errors than for slow responses resulted in worse performance under monetary incentives. The findings suggest that an emphasis of speed is favorable for positive influences of monetary incentives, whereas an emphasis of accuracy under time pressure has the opposite effect. PMID:21980316

  16. Response analysis of curved bridge with unseating failure control system under near-fault ground motions

    NASA Astrophysics Data System (ADS)

    Zuo, Ye; Sun, Guangjun; Li, Hongjing

    2018-01-01

    Under the action of near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established by considering the elastic-plastic behavior of the piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with an unseating failure control system under near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The results indicate that under near-fault ground motion the seismic response of the curved bridge is strong. The unseating failure control system performs effectively in reducing the pounding force between adjacent girders and the probability of deck unseating.

  17. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and the net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model, as there is no readily available summary measure for evaluating predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need for additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in the range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power than the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of decision curve analysis to compare risk prediction models in a clinical scenario.
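
    The net benefit at a threshold probability pt is TP/n − (FP/n)·pt/(1−pt); the proposed summary weights this curve by an estimated distribution of patient threshold probabilities instead of assuming a uniform one. The sketch below illustrates the computation with synthetic outcomes and an assumed Beta-shaped weight; it is not the authors' procedure for estimating the threshold distribution.

```python
import numpy as np
from scipy.stats import beta

def net_benefit(y_true, risk, pt):
    """Net benefit of 'treat if risk >= pt': TP/n - (FP/n) * pt/(1 - pt)."""
    n = len(y_true)
    treat = risk >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * pt / (1.0 - pt)

rng = np.random.default_rng(4)
y = rng.binomial(1, 0.2, 2000)                              # synthetic outcomes
risk = np.clip(0.2 + 0.3 * (y - 0.2) + rng.normal(0, 0.15, y.size), 0.01, 0.99)

thresholds = np.linspace(0.05, 0.5, 46)                     # range of interest
nb = np.array([net_benefit(y, risk, pt) for pt in thresholds])

# Weighted area: weight each threshold by an assumed (Beta-shaped) density of
# patient threshold probabilities rather than treating them as uniform.
w = beta.pdf(thresholds, 2, 8)
w /= np.trapz(w, thresholds)                                # normalise over the range
weighted_area = np.trapz(nb * w, thresholds)
mean_nb = np.trapz(nb, thresholds) / (thresholds[-1] - thresholds[0])
print(f"weighted area = {weighted_area:.4f}, unweighted mean net benefit = {mean_nb:.4f}")
```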

  18. Dissociating response conflict and error likelihood in anterior cingulate cortex.

    PubMed

    Yeung, Nick; Nieuwenhuis, Sander

    2009-11-18

    Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.

  19. A Graphical Approach to Item Analysis. Research Report. ETS RR-04-10

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Dorans, Neil J.

    2004-01-01

    This paper describes an approach to item analysis that is based on the estimation of a set of response curves for each item. The response curves show, at a glance, the difficulty and the discriminating power of the item and the popularity of each distractor, at any level of the criterion variable (e.g., total score). The curves are estimated by…

  20. Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah

    2017-04-01

    Inversion of surface wave dispersion curves, with its highly nonlinear nature, presents difficulties for traditional linearized inverse methods owing to their strong dependence on the initial model, the possibility of trapping in local minima, and the need to evaluate partial derivatives. Modern global optimization methods such as the genetic algorithm (GA) and particle swarm optimization (PSO) can overcome these difficulties in surface wave analysis. GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error and reasonable computation cost, which are important for modelling studies. Even though the PSO and GA processes appear similar, the cross-over operation of GA is not used in PSO, and in GA the mutation operation is a stochastic process that changes genes within chromosomes. Unlike in GA, the particles in the PSO algorithm change their positions with velocities determined by the particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate the S-wave velocities and thicknesses of a layered earth model from a Rayleigh wave dispersion curve, compared the results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies considering its rapid convergence, low misfit error and low computation cost.
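
    A minimal PSO loop of the kind described above is sketched below: particles explore layer S-wave velocities and are pulled toward their personal and swarm best positions to minimize dispersion-curve misfit. The forward model here is a simplified stand-in, not a real Rayleigh-wave dispersion solver, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(model, freqs):
    """Placeholder dispersion model: phase velocity as a depth-weighted blend of layer Vs."""
    depth_weight = np.exp(-np.outer(freqs, np.arange(1, model.size + 1)) / 5.0)
    depth_weight /= depth_weight.sum(axis=1, keepdims=True)
    return depth_weight @ model

freqs = np.linspace(2.0, 20.0, 20)
true_model = np.array([250.0, 400.0, 650.0])          # hypothetical layer Vs (m/s)
observed = forward(true_model, freqs)                 # "observed" dispersion curve

def misfit(model):
    return np.sqrt(np.mean((forward(model, freqs) - observed) ** 2))

# PSO: particle velocities are pulled toward personal-best and swarm-best positions.
n_particles, n_iter, dim = 30, 200, 3
lo, hi = 100.0, 1000.0
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_fit)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration constants
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([misfit(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()

print("recovered layer Vs:", np.round(gbest, 1), "misfit:", round(misfit(gbest), 3))
```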

  1. Metronidazole and 5-aminosalicylic acid enhance the contractile activity of histaminergic agonists on the guinea-pig isolated ileum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winbery, S.L.; Barker, L.A.

    1986-03-01

    The effects of metronidazole and 5-aminosalicylic acid (5-ASA) on histamine receptor-effector systems in the small intestine and right atrium of the guinea pig were studied. In an apparently all-or-none manner, both caused a sinistral shift in dose-response curves for the phasic component of the contractile response to histamine at H1 receptors on the ileum. In the presence of either, the EC50 value for histamine was reduced from 0.07 to about 0.03 microM. Similarly, in an apparently all-or-none fashion, both produced an elevation in the dose-response curve for the actions of dimaprit at H2-receptors in the ileum; the response to all doses was increased by about 30% with no significant change in the EC50 value. Metronidazole and 5-ASA did not alter dose-response curves for the tonic contractile response to histamine or curves generated by the cumulative addition of histamine. Also, neither altered the positive chronotropic response on isolated right atria or the phasic contractile response on isolated segments of jejunum and duodenum to histamine or dimaprit. Likewise, neither altered dose-response curves for the direct action of carbamylcholine at muscarinic receptors or for the indirect actions of dimethylphenylpiperazinium on the ileum. The effects of 5-ASA or metronidazole on the response to histamine could be prevented as well as reversed by scopolamine or tetrodotoxin. The results suggest that metronidazole and 5-ASA enhance the actions of histamine and dimaprit on the ileum by an action on myenteric plexus neurons.

  2. Segmentation of neuronal structures using SARSA (λ)-based boundary amendment with reinforced gradient-descent curve shape fitting.

    PubMed

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation, so conventional approaches to identifying neuron structures in EM images are often unsuccessful. We therefore present a multi-scale, fused structure-boundary detection algorithm in this study. The algorithm first generates a Gaussian pyramid of the EM image; at each level of the pyramid, a Laplacian of Gaussian (LoG) filter is used to detect structure boundaries; finally, the detected boundaries are assembled by a fusion algorithm into a combined neuron-structure image. Because the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model and the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements in structure segmentation, were reduced to very low values. Comparison with the ISBI 2012 benchmark method and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases.
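
    The update rule behind the SARSA(λ) step described above can be sketched generically as follows. The environment, state/action encoding, and reward (the "connection cost") in this snippet are hypothetical stand-ins for illustration, not the authors' model.

    ```python
    # Generic tabular SARSA(lambda) loop with eligibility traces (toy environment).
    import numpy as np

    n_states, n_actions = 100, 4          # e.g., pixel positions and 4 move directions (toy sizes)
    alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)

    def epsilon_greedy(state):
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[state]))

    def step(state, action):
        # Hypothetical environment: each move costs -1 (the "connection cost"),
        # reaching the terminal state (closing the gap) is rewarded.
        next_state = (state + [1, -1, 10, -10][action]) % n_states
        done = next_state == n_states - 1
        reward = 10.0 if done else -1.0
        return next_state, reward, done

    for episode in range(200):
        E = np.zeros_like(Q)              # eligibility traces
        s, a, done = 0, epsilon_greedy(0), False
        while not done:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(s2)
            delta = r + gamma * Q[s2, a2] * (not done) - Q[s, a]
            E[s, a] += 1.0                # accumulate trace for the visited pair
            Q += alpha * delta * E        # update all state-action values
            E *= gamma * lam              # decay traces
            s, a = s2, a2
    ```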

  3. Segmentation of Neuronal Structures Using SARSA (λ)-Based Boundary Amendment with Reinforced Gradient-Descent Curve Shape Fitting

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation, so conventional approaches to identifying neuron structures in EM images are often unsuccessful. We therefore present a multi-scale, fused structure-boundary detection algorithm in this study. The algorithm first generates a Gaussian pyramid of the EM image; at each level of the pyramid, a Laplacian of Gaussian (LoG) filter is used to detect structure boundaries; finally, the detected boundaries are assembled by a fusion algorithm into a combined neuron-structure image. Because the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model and the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements in structure segmentation, were reduced to very low values. Comparison with the ISBI 2012 benchmark method and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases. PMID:24625699

  4. Identifying Blocks Formed by Curved Fractures Using Exact Arithmetic

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Xia, L.; Yu, Q.; Zhang, X.

    2015-12-01

    Identifying blocks formed by fractures is important in rock engineering. Most studies assume the fractures to be perfectly planar, and curved fractures are rarely considered. However, large fractures observed in the field are often curved. This paper presents a new method for identifying rock blocks formed by both curved and planar fractures based on the element-block-assembling approach. The curved and planar fractures are represented as triangle meshes and planar discs, respectively. At the beginning of the identification method, the intersection segments between different triangle meshes are calculated and the intersected triangles are re-meshed to construct a piecewise linear complex (PLC). Then, the modeling domain is divided into tetrahedral subdomains under the constraint of the PLC and these subdomains are further decomposed into element blocks by extended planar fractures. Finally, the element blocks are combined and the subdomains are assembled to form complex blocks. The combination of two subdomains is skipped if and only if the common facet lies on a curved fracture. In this study, exact arithmetic is used to handle the computational errors that may threaten the robustness of the block identification program when degenerate cases are encountered. Specifically, a real number is represented as the ratio between two integers, and the basic arithmetic operations of addition, subtraction, multiplication, and division between different real numbers can be performed exactly if an arbitrary-precision integer package is used. In this way, the exact construction of blocks can be achieved without introducing computational errors. Several analytical examples are given in this paper and the results show the effectiveness of this method in handling arbitrarily shaped blocks. Moreover, there is no limitation on the number of blocks in a block system. The results also suggest that degenerate cases can be handled without affecting the robustness of the identification program.
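
    The exact-arithmetic idea (a real number represented as a ratio of integers so that geometric predicates never suffer rounding error) can be illustrated with Python's fractions.Fraction standing in for an arbitrary-precision rational package. The orientation predicate below is a standard example, shown as a minimal sketch rather than the authors' implementation.

    ```python
    from fractions import Fraction

    def orientation(p, q, r):
        """Sign of the cross product (q - p) x (r - p): >0 left turn, <0 right turn, 0 collinear."""
        det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (det > 0) - (det < 0)

    # Nearly collinear points: floating point may misclassify, exact rationals cannot.
    p_float = (0.1, 0.1)
    q_float = (0.3, 0.3)
    r_float = (0.5, 0.5000000000000001)

    p_exact = (Fraction(1, 10), Fraction(1, 10))
    q_exact = (Fraction(3, 10), Fraction(3, 10))
    r_exact = (Fraction(1, 2), Fraction(1, 2))   # exactly collinear with p and q

    print(orientation(p_float, q_float, r_float))  # may be nonzero due to rounding
    print(orientation(p_exact, q_exact, r_exact))  # exactly 0: degenerate case handled robustly
    ```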

  5. Identifying Children at Risk of High Myopia Using Population Centile Curves of Refraction.

    PubMed

    Chen, Yanxian; Zhang, Jian; Morgan, Ian G; He, Mingguang

    2016-01-01

    The aim was to construct reference centile curves of refraction from population-based data as an age-specific severity scale, and to evaluate their efficacy as a tool for identifying children at risk of developing high myopia in a longitudinal study. Data on 4218 children aged 5-15 years from the Guangzhou Refractive Error Study in Children (RESC) and 354 first-born twins with annual visits from the Guangzhou Twin Eye Study (GTES) were included in the analysis. Reference centile curves for refraction were constructed using a quantile regression model based on the cycloplegic refraction data from the RESC. The risk of developing high myopia (spherical equivalent ≤ -6 diopters [D]) was evaluated as a diagnostic test using the twin follow-up data. The centile curves showed that the 3rd, 5th, and 10th percentiles decreased from -0.25 D, 0.00 D, and 0.25 D in 5-year-olds to -6.00 D, -5.65 D, and -4.63 D in 15-year-olds in the population-based data from the RESC. In the GTES cohort, the 5th centile showed the most effective diagnostic value, with a sensitivity of 92.9%, a specificity of 97.9%, and a positive predictive value (PPV) of 65.0% in predicting high myopia onset (≤ -6.00 D) before the age of 15 years. The PPV was highest (87.5%) at the 3rd centile, but with only 50.0% sensitivity. The Matthews correlation coefficients of the 5th centile in predicting myopia of -6.0 D, -5.0 D, and -4.0 D by age 15 were 0.77, 0.51, and 0.30, respectively. Reference centile curves provide an age-specific estimate on a severity scale of refractive error in school-aged children. Children falling below the lower percentiles at a young age were more likely to have high myopia at 15 years and probably in adulthood.
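
    A centile curve of the kind described above can be sketched with quantile regression. The synthetic data, column names, and the simple linear term in age below are illustrative assumptions, not the study's actual model.

    ```python
    # Hedged sketch: refraction centile "curves" via quantile regression on toy data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 2000
    age = rng.uniform(5, 15, n)
    # Toy data: refraction drifts myopic with age, with widening spread.
    refraction = 1.0 - 0.3 * (age - 5) + rng.normal(0, 0.3 + 0.15 * (age - 5), n)
    df = pd.DataFrame({"age": age, "refraction": refraction})

    centiles = {}
    for q in (0.03, 0.05, 0.10, 0.50):
        res = smf.quantreg("refraction ~ age", df).fit(q=q)
        centiles[q] = res.params          # intercept and slope of the q-th centile line

    # E.g., predicted 5th-centile refraction at age 10:
    b = centiles[0.05]
    print(b["Intercept"] + b["age"] * 10.0)
    ```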

  6. SU-E-J-164: Estimation of DVH Variation for PTV Due to Interfraction Organ Motion in Prostate VMAT Using Gaussian Error Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, C; Jiang, R; Chow, J

    2015-06-15

    Purpose: We developed a method to predict the change of the DVH for the PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of the PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs up to 1 cm in 10 increments in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of the PTV in each replan was then fitted by the GEF to determine parameters describing the shape of the curve. Information on how these parameters vary with the DVH change due to prostate motion for different prostate sizes was analyzed and stored in a database by a program written in MATLAB. Results: To predict a new DVH for the PTV due to prostate interfraction motion, the prostate size and the shift distance and direction are input to the program. Parameters modelling the DVH for the PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of the DVH for the PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is fast because a CT rescan and replan are not required. This quick DVH estimation can help radiation staff determine whether the changed PTV coverage due to a prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
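
    One way to fit a cumulative PTV DVH with a Gaussian error function, in the spirit of the method above, is sketched below. The two-parameter form (midpoint dose and width) and the synthetic DVH points are illustrative assumptions, not the authors' parameterization.

    ```python
    # Minimal sketch: fit a cumulative DVH with a complementary-erf step.
    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    def dvh_gef(dose, d50, sigma):
        # Fractional volume receiving at least `dose`.
        return 0.5 * (1.0 - erf((dose - d50) / (np.sqrt(2.0) * sigma)))

    # Synthetic "measured" DVH points (dose in Gy, volume fraction).
    dose = np.linspace(0, 90, 46)
    volume = dvh_gef(dose, d50=78.0, sigma=2.5) \
             + np.random.default_rng(2).normal(0, 0.005, dose.size)

    popt, pcov = curve_fit(dvh_gef, dose, volume, p0=[70.0, 5.0])
    d50_fit, sigma_fit = popt   # parameters describing the shape of the DVH curve
    ```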

  7. Disclosure of Medical Errors: What Factors Influence How Patients Respond?

    PubMed Central

    Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H

    2006-01-01

    BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770

  8. Light curves of 213 Type Ia supernovae from the Essence survey

    DOE PAGES

    Narayan, G.; Rest, A.; Tucker, B. E.; ...

    2016-05-06

    The ESSENCE survey discovered 213 Type Ia supernovae at redshifts 0.1 < z < 0.81 between 2002 and 2008. We present their R- and I-band photometry, measured from images obtained using the MOSAIC II camera at the CTIO Blanco, along with rapid-response spectroscopy for each object. We use our spectroscopic follow-up observations to determine an accurate, quantitative classification, and precise redshift. Through an extensive calibration program we have improved the precision of the CTIO Blanco natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. Here, we assess the effect of various potential sources of systematic bias on our measured fluxes, and estimate the dominant term in the systematic error budget from the photometric calibration on our absolute fluxes is ~1%.

  9. Light curves of 213 Type Ia supernovae from the Essence survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, G.; Rest, A.; Tucker, B. E.

    The ESSENCE survey discovered 213 Type Ia supernovae at redshifts 0.1 < z < 0.81 between 2002 and 2008. We present their R- and I-band photometry, measured from images obtained using the MOSAIC II camera at the CTIO Blanco, along with rapid-response spectroscopy for each object. We use our spectroscopic follow-up observations to determine an accurate, quantitative classification, and precise redshift. Through an extensive calibration program we have improved the precision of the CTIO Blanco natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. Here, we assess the effect of various potential sources of systematic bias on our measured fluxes, and estimate the dominant term in the systematic error budget from the photometric calibration on our absolute fluxes is ~1%.

  10. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of simultaneously minimizing the integrated absolute error for both the set-point and the load-disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem in which a first-order-plus-dead-time process model, subject to a robustness constraint based on the maximum sensitivity, is considered. A set of Pareto-optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then obtained by choosing the Nash solution among the Pareto-optimal ones. A curve-fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
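
    The balancing step, picking a Nash solution from a Pareto front, can be sketched as below. The candidate objective values and the choice of the disagreement point (worst value of each objective over the Pareto set) are illustrative assumptions rather than the paper's formulation.

    ```python
    # Sketch: select a Nash bargaining solution from a Pareto-optimal set of tunings.
    import numpy as np

    # Hypothetical Pareto front: each row is (IAE for set-point, IAE for load disturbance).
    pareto = np.array([
        [1.0, 4.0],
        [1.3, 3.0],
        [1.8, 2.2],
        [2.5, 1.7],
        [3.5, 1.4],
    ])

    disagreement = pareto.max(axis=0)          # worst case of each objective over the front
    gains = disagreement - pareto              # improvement over the disagreement point
    nash_index = int(np.argmax(np.prod(gains, axis=1)))   # maximize the product of improvements
    nash_solution = pareto[nash_index]
    ```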

  11. Development of a directivity-controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2008-04-01

    Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.

  12. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  13. Performance Evaluation of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.

    2011-01-01

    An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708

  14. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime response of 57% over a 12-month period. Lost interfaces and interface malfunctions were reported for centralized and decentralized ADSs, with average downtime responses of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reducing the frequency and length of downtime in order to minimize medication errors during such downtime.

  15. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
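
    The core idea, representing each light curve as a mean curve plus a linear combination of a few principal component functions with per-object scores, can be illustrated on a toy example. Real FPCA for sparse, irregularly sampled data is more involved; here densely sampled synthetic curves and ordinary PCA stand in for it, and all values are assumptions.

    ```python
    # Toy illustration of light curves as linear combinations of principal components.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    phase = np.linspace(-10, 40, 100)          # days relative to peak
    n_sne = 200

    # Synthetic light curves: a common template with random stretch/amplitude variations.
    stretch = rng.normal(1.0, 0.1, n_sne)
    amp = rng.normal(1.0, 0.05, n_sne)
    curves = amp[:, None] * np.exp(-0.5 * (phase[None, :] / (12.0 * stretch[:, None])) ** 2)
    curves += rng.normal(0, 0.01, curves.shape)

    pca = PCA(n_components=3)
    scores = pca.fit_transform(curves)         # a few "principal component scores" per object
    mean_curve = pca.mean_
    components = pca.components_               # principal component functions

    # Reconstruct one light curve from its scores:
    recon = mean_curve + scores[0] @ components
    ```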

  16. batman: BAsic Transit Model cAlculatioN in Python

    NASA Astrophysics Data System (ADS)

    Kreidberg, Laura

    2015-11-01

    I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
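
    The package's documented interface follows the pattern below; the specific parameter values are arbitrary placeholders for a hypothetical hot Jupiter, not values from the paper.

    ```python
    import numpy as np
    import batman

    params = batman.TransitParams()
    params.t0 = 0.0                      # time of inferior conjunction
    params.per = 1.0                     # orbital period [days]
    params.rp = 0.1                      # planet radius [stellar radii]
    params.a = 15.0                      # semi-major axis [stellar radii]
    params.inc = 87.0                    # orbital inclination [degrees]
    params.ecc = 0.0                     # eccentricity
    params.w = 90.0                      # longitude of periastron [degrees]
    params.u = [0.1, 0.3]                # limb-darkening coefficients
    params.limb_dark = "quadratic"       # limb-darkening law

    t = np.linspace(-0.05, 0.05, 100)    # times at which to calculate the light curve
    m = batman.TransitModel(params, t)   # initializes the model
    flux = m.light_curve(params)         # relative flux during transit
    ```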

  17. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.

  18. Learning from Mistakes

    PubMed Central

    Fischer, Melissa A; Mazor, Kathleen M; Baril, Joann; Alper, Eric; DeMarco, Deborah; Pugnaire, Michele

    2006-01-01

    CONTEXT Trainees are exposed to medical errors throughout medical school and residency. Little is known about what facilitates and limits learning from these experiences. OBJECTIVE To identify major factors and areas of tension in trainees' learning from medical errors. DESIGN, SETTING, AND PARTICIPANTS Structured telephone interviews with 59 trainees (medical students and residents) from 1 academic medical center. Five authors reviewed transcripts of audiotaped interviews using content analysis. RESULTS Trainees were aware that medical errors occur from early in medical school. Many had an intense emotional response to the idea of committing errors in patient care. Students and residents noted variation and conflict in institutional recommendations and individual actions. Many expressed role confusion regarding whether and how to initiate discussion after errors occurred. Some noted the conflict involved in reporting errors to seniors who were responsible for their evaluation. Learners requested more open discussion of actual errors and faculty disclosure. No students or residents felt that they learned better from near misses than from actual errors, and many believed that they learned the most when harm was caused. CONCLUSIONS Trainees are aware of medical errors, but remaining tensions may limit learning. Institutions can immediately address variability in faculty response and local culture by disseminating clear, accessible algorithms to guide behavior when errors occur. Educators should develop longitudinal curricula that integrate actual cases and faculty disclosure. Future multi-institutional work should focus on identified themes such as teaching and learning in emotionally charged situations, learning from errors and near misses, and the balance between individual and systems responsibility. PMID:16704381

  19. Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine

    NASA Astrophysics Data System (ADS)

    Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo

    2017-07-01

    Load sharing behavior is very important for power-split gearing systems, and the star gearing reducer, as a new and special type of transmission, can be used in many industrial fields. However, there is little literature on the key multiple-split load sharing issue in the main gearbox used in new geared turbofan engines. Further mechanism analyses are made of the load sharing behavior among star gears of the star gearing reducer for a geared turbofan engine. Comprehensive meshing error analyses are conducted on the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffnesses and supporting stiffnesses of the components. The curves of the load sharing coefficient under the influence of interactions, the single action, and the single variation of the various component errors are obtained. The sensitivity of the load sharing coefficient to the different errors is determined. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides theoretical evidence for optimal parameter design and proper tolerance distribution in the advanced development and manufacturing process, so as to achieve optimal effects in economy and technology.

  20. Analysis of fast and slow responses in AC conductance curves for p-type SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Karamoto, Yuki; Zhang, Xufang; Okamoto, Dai; Sometani, Mitsuru; Hatakeyama, Tetsuo; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    We used a conductance method to investigate the interface characteristics of a SiO2/p-type 4H-SiC MOS structure fabricated by dry oxidation. It was found that the measured equivalent parallel conductance–frequency (Gp/ω–f) curves were not symmetric, showing that there existed both high- and low-frequency signals. We attributed the high-frequency responses to fast interface states and the low-frequency responses to near-interface oxide traps. To analyze the fast interface states, Nicollian’s standard conductance method was applied in the high-frequency range. By extracting the high-frequency responses from the measured Gp/ω–f curves, the characteristics of the low-frequency responses were reproduced by Cooper’s model, which considers the effect of near-interface traps on the Gp/ω–f curves. The corresponding density distribution of slow traps as a function of energy level was estimated.
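
    For background, the conductance method relates the measured Gp/ω to the interface-state density Dit. The standard textbook single-level expression and the commonly used peak approximation are quoted below as general reference, not from this paper:

    ```latex
    \frac{G_p}{\omega} \;=\; \frac{q\,\omega\tau_{it}\,D_{it}}{1+(\omega\tau_{it})^{2}},
    \qquad
    D_{it} \;\approx\; \frac{2.5}{q}\left(\frac{G_p}{\omega}\right)_{\mathrm{max}}
    ```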

  1. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.

  2. Development of a Dynamic Biomechanical Model for Load Carriage: Phase 4, Part C2: Assessment of Pressure Measurement Systems on Curved Surfaces for the Dynamic Biomechanical Model of Human Load Carriage

    DTIC Science & Technology

    2005-08-01

    excellent accuracy compared with the F-Scan® during the trials on the hip model. Both systems showed a certain degree of variation...in appendix C. The experimental design consisted of three steps (see Figure 1). Two were undertaken using a physical model of the shoulder in order...increase in accuracy error compared to Table 1 suggests that the current software for the XSENSOR® system is not designed to compensate for errors

  3. Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem

    NASA Astrophysics Data System (ADS)

    Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad

    2013-12-01

    In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where the source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are flipped versions of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. Joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, 2nd-order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of 2nd-order maximum ratio combining (MRC) diversity if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.

  4. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. As a matter of fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve is written Q = a(Z - Z0)^b sqrt(S), with Z the water surface elevation and S its slope, both obtained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b, and Z0 to various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and rating curve parameters are then determined for each river section with an MCMC algorithm. The twin experiments show that a rating curve formulation including the water surface slope, i.e., closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross-section shapes. It is shown that the river bed elevation Z0 is systematically well identified, with relative errors on the order of a few percent. Eventually, these altimetry-based rating curves provide morphological parameters of river reaches that can be used as inputs to hydraulic models and a priori information that could be useful for SWOT inversion algorithms.
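
    A minimal sketch of fitting the rating curve Q = a(Z - Z0)^b sqrt(S) to stage/slope/discharge triples is given below. The synthetic data and the use of nonlinear least squares (rather than the MCMC algorithm mentioned above) are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(x, a, b, z0):
        z, s = x
        # Clip keeps the base positive so fractional exponents stay well-defined.
        return a * np.clip(z - z0, 1e-6, None) ** b * np.sqrt(s)

    rng = np.random.default_rng(4)
    z = rng.uniform(12.0, 20.0, 200)                 # water surface elevation [m]
    s = rng.uniform(2e-5, 8e-5, 200)                 # water surface slope [-]
    true = dict(a=30.0, b=1.7, z0=10.0)
    q = rating_curve((z, s), **true) * rng.normal(1.0, 0.05, z.size)   # noisy discharge

    popt, _ = curve_fit(rating_curve, (z, s), q, p0=[10.0, 1.5, np.min(z) - 2.0])
    a_fit, b_fit, z0_fit = popt    # z0_fit estimates the effective river bed elevation
    ```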

  5. Quantifying the safety effects of horizontal curves on two-way, two-lane rural roads.

    PubMed

    Gooch, Jeffrey P; Gayah, Vikash V; Donnell, Eric T

    2016-07-01

    The objective of this study is to quantify the safety performance of horizontal curves on two-way, two-lane rural roads relative to tangent segments. Past research is limited by small sample sizes, outdated statistical evaluation methods, and unreported standard errors. This study overcomes these drawbacks by using the propensity score–potential outcomes framework. The impact of adjacent curves on horizontal curve safety is also explored using a cross-sectional regression model of only horizontal curves. The models estimated in the present study used eight years of crash data (2005-2012) obtained from over 10,000 miles of state-owned two-lane rural roads in Pennsylvania. These data included information on roadway geometry (e.g., horizontal curvature, lane width, and shoulder width), traffic volume, roadside hazard rating, and the presence of various low-cost safety countermeasures (e.g., centerline and shoulder rumble strips, curve and intersection warning pavement markings, and aggressive driving pavement dots). Crash prediction is performed by means of mixed effects negative binomial regression using the explanatory variables noted previously, as well as attributes of adjacent horizontal curves. The results indicate that both the presence of a horizontal curve and its degree of curvature must be considered when predicting the frequency of total crashes on horizontal curves. Both are associated with an increase in crash frequency, which is consistent with previous findings in the literature. Mixed effects negative binomial regression models for total crash frequency on horizontal curves indicate that the distance to adjacent curves is not statistically significant. However, the degree of curvature of adjacent curves in close proximity (within 0.75 miles) was found to be statistically significant and negatively correlated with crash frequency on the subject curve. This is logical, as drivers exiting a sharp curve are likely to be driving slower and with more awareness as they approach the next horizontal curve. Copyright © 2016 Elsevier Ltd. All rights reserved.
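
    A stripped-down version of this type of crash-frequency model (without the mixed/random effects) is sketched below. The variable names, synthetic data, and dispersion parameter are assumptions for illustration, not the study's specification.

    ```python
    # Hedged sketch: negative binomial crash-frequency regression on toy curve segments.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 1000
    df = pd.DataFrame({
        "log_aadt": np.log(rng.uniform(500, 8000, n)),   # traffic volume exposure
        "curve": rng.integers(0, 2, n),                  # horizontal curve present?
        "degree": rng.uniform(0, 10, n),                 # degree of curvature
    })
    mu = np.exp(-4.0 + 0.8 * df.log_aadt + 0.3 * df.curve + 0.05 * df.degree)
    df["crashes"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts

    X = sm.add_constant(df[["log_aadt", "curve", "degree"]])
    model = sm.GLM(df["crashes"], X, family=sm.families.NegativeBinomial(alpha=1.0))
    result = model.fit()
    print(result.params)   # expect positive coefficients for curve presence and curvature
    ```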

  6. Dopamine prediction error responses integrate subjective value from different reward dimensions

    PubMed Central

    Lak, Armin; Stauffer, William R.; Schultz, Wolfram

    2014-01-01

    Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose “as if” they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values. PMID:24453218

  7. Space shuttle post-entry and landing analysis. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Crawford, B. S.; Duiven, E. M.

    1973-01-01

    Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.

  8. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability to resist noise and interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient-maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  9. NONMONOTONIC DOSE RESPONSE CURVES (NMDRCS) ARE COMMON AFTER ESTROGEN OR ANDROGEN SIGNALING PATHWAY DISRUPTION. FACT OR FALDERAL?

    EPA Science Inventory

    ABSTRACT BODY: The shape of the dose response curve in the low dose region has been debated since the 1940s, originally focusing on linear no threshold (LNT) versus threshold responses for cancer and noncancer effects. Recently, it has been claimed that endocrine disrupters (EDCs...

  10. A theory for how sensorimotor skills are learned and retained in noisy and nonstationary neural circuits

    PubMed Central

    Ajemian, Robert; D’Ausilio, Alessandro; Moorman, Helene; Bizzi, Emilio

    2013-01-01

    During the process of skill learning, synaptic connections in our brains are modified to form motor memories of learned sensorimotor acts. The more plastic the adult brain is, the easier it is to learn new skills or adapt to neurological injury. However, if the brain is too plastic and the pattern of synaptic connectivity is constantly changing, new memories will overwrite old memories, and learning becomes unstable. This trade-off is known as the stability–plasticity dilemma. Here a theory of sensorimotor learning and memory is developed whereby synaptic strengths are perpetually fluctuating without causing instability in motor memory recall, as long as the underlying neural networks are sufficiently noisy and massively redundant. The theory implies two distinct stages of learning—preasymptotic and postasymptotic—because once the error drops to a level comparable to that of the noise-induced error, further error reduction requires altered network dynamics. A key behavioral prediction derived from this analysis is tested in a visuomotor adaptation experiment, and the resultant learning curves are modeled with a nonstationary neural network. Next, the theory is used to model two-photon microscopy data that show, in animals, high rates of dendritic spine turnover, even in the absence of overt behavioral learning. Finally, the theory predicts enhanced task selectivity in the responses of individual motor cortical neurons as the level of task expertise increases. From these considerations, a unique interpretation of sensorimotor memory is proposed—memories are defined not by fixed patterns of synaptic weights but, rather, by nonstationary synaptic patterns that fluctuate coherently. PMID:24324147

  11. Piezocomposite Actuator Arrays for Correcting and Controlling Wavefront Error in Reflectors

    NASA Technical Reports Server (NTRS)

    Bradford, Samuel Case; Peterson, Lee D.; Ohara, Catherine M.; Shi, Fang; Agnes, Greg S.; Hoffman, Samuel M.; Wilkie, William Keats

    2012-01-01

    Three reflectors have been developed and tested to assess the performance of a distributed network of piezocomposite actuators for correcting thermal deformations and total wave-front error. The primary testbed article is an active composite reflector, composed of a spherically curved panel with a graphite face sheet and aluminum honeycomb core composite, and then augmented with a network of 90 distributed piezoelectric composite actuators. The piezoelectric actuator system may be used for correcting as-built residual shape errors, and for controlling low-order, thermally-induced quasi-static distortions of the panel. In this study, thermally-induced surface deformations of 1 to 5 microns were deliberately introduced onto the reflector, then measured using a speckle holography interferometer system. The reflector surface figure was subsequently corrected to a tolerance of 50 nm using the actuators embedded in the reflector's back face sheet. Two additional test articles were constructed: a borosilicate flat window, 150 mm in diameter, with 18 actuators bonded to the back surface; and a direct metal laser sintered reflector with spherical curvature, 230 mm diameter, and 12 actuators bonded to the back surface. In the case of the glass reflector, absolute measurements were performed with an interferometer and the absolute surface was corrected. These test articles were evaluated to determine their absolute surface control capabilities, as well as to assess a multiphysics modeling effort developed under this program for the prediction of active reflector response. This paper will describe the design, construction, and testing of active reflector systems under thermal loads, and the subsequent correction of surface shape via distributed piezoelectric actuation.

  12. Cross-Calibration between ASTER and MODIS Visible to Near-Infrared Bands for Improvement of ASTER Radiometric Calibration

    PubMed Central

    Tsuchida, Satoshi; Thome, Kurtis

    2017-01-01

    Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and to derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329

  13. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good candidate for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore, the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.

  14. Influence of basis-set size on the X ²Σ⁺₁/₂, A ²Π₁/₂, A ²Π₃/₂, and B ²Σ⁺₁/₂ potential-energy curves, A ²Π₃/₂ vibrational energies, and D1 and D2 line shapes of Rb+He

    NASA Astrophysics Data System (ADS)

    Blank, L. Aaron; Sharma, Amit R.; Weeks, David E.

    2018-03-01

    The X ²Σ⁺₁/₂, A ²Π₁/₂, A ²Π₃/₂, and B ²Σ⁺₁/₂ potential-energy curves for Rb+He are computed at the spin-orbit multireference configuration interaction level of theory using a hierarchy of Gaussian basis sets at the double-zeta (DZ), triple-zeta (TZ), and quadruple-zeta (QZ) levels of valence quality. Counterpoise and Davidson-Silver corrections are employed to remove basis-set superposition error and ameliorate size-consistency error. An extrapolation is performed to obtain a final set of potential-energy curves in the complete basis-set (CBS) limit. This yields four sets of systematically improved X ²Σ⁺₁/₂, A ²Π₁/₂, A ²Π₃/₂, and B ²Σ⁺₁/₂ potential-energy curves that are used to compute the A ²Π₃/₂ bound vibrational energies, the position of the D2 blue satellite peak, and the D1 and D2 pressure broadening and shifting coefficients at the DZ, TZ, QZ, and CBS levels. Results are compared with previous calculations and experimental observation.

  15. Assessing the performance of handheld glucose testing for critical care.

    PubMed

    Kost, Gerald J; Tran, Nam K; Louie, Richard F; Gentile, Nicole L; Abad, Victor J

    2008-12-01

    We assessed the performance of a point-of-care (POC) glucose meter system (GMS) with a multitasking test strip by using the locally-smoothed (LS) median absolute difference (MAD) curve method in conjunction with a modified Bland-Altman difference plot and superimposed International Organization for Standardization (ISO) 15197 tolerance bands. We analyzed performance for tight glycemic control (TGC). A modified glucose oxidase enzyme with a multilayer-gold, multielectrode, four-well test strip (StatStrip, NOVA Biomedical, Waltham, MA) was used. There was no test strip calibration code. A pragmatic comparison was made of GMS results versus paired plasma glucose measurements from chemistry analyzers in clinical laboratories. Venous samples (n = 1,703) were analyzed at 35 hospitals that used 20 types of chemistry analyzers. Erroneous results were identified using the Bland-Altman plot and ISO 15197 criteria. Discrepant values were analyzed for the TGC interval of 80-110 mg/dL. The GMS met ISO 15197 guidelines; 98.6% (410 of 416) of observations were within tolerance for glucose <75 mg/dL, and for ≥75 mg/dL, 100% were within tolerance. Paired differences (handheld minus reference) averaged -2.2 (SD 9.8) mg/dL; the median was -1 (range, -96 to 45) mg/dL. LS MAD curve analysis revealed satisfactory performance below 186 mg/dL; above 186 mg/dL, the recommended error tolerance limit (5 mg/dL) was not met. No discrepant values appeared. All points fell in Clarke Error Grid zone A. Linear regression gave y = 1.018x - 0.716 mg/dL, with r2 = 0.995. LS MAD curves draw on human ability to discriminate performance visually. LS MAD curve and ISO 15197 performance were acceptable for TGC. POC and reference glucose calibration should be harmonized and standardized.
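
    The tolerance accounting reported above (paired differences plus the fraction of points within ISO 15197 limits) follows directly from the paired meter and reference values. Below is a minimal sketch, assuming the ISO 15197:2003 limits of ±15 mg/dL below 75 mg/dL and ±20% at or above 75 mg/dL; it does not reproduce the study's data handling.

```python
import numpy as np

def bland_altman_differences(meter, reference):
    """Mean and SD of handheld-minus-reference differences (mg/dL)."""
    d = np.asarray(meter, dtype=float) - np.asarray(reference, dtype=float)
    return d.mean(), d.std(ddof=1)

def fraction_within_iso15197(meter, reference):
    """Fraction of paired results inside the assumed ISO 15197:2003 bands:
    +/- 15 mg/dL for reference < 75 mg/dL, +/- 20% at or above 75 mg/dL."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    limit = np.where(reference < 75.0, 15.0, 0.20 * reference)
    return np.mean(np.abs(meter - reference) <= limit)
```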

  16. Simulation of an automatically-controlled STOL aircraft in a microwave landing system multipath environment

    NASA Technical Reports Server (NTRS)

    Toda, M.; Brown, S. C.; Burrous, C. N.

    1976-01-01

    The simulated response of a STOL aircraft to Microwave Landing System (MLS) multipath errors during final approach and touchdown is described. The MLS azimuth, elevation, and DME multipath errors were computed for a relatively severe multipath environment at Crissy Field, California, utilizing an MLS multipath simulation at MIT Lincoln Laboratory. A NASA/Ames six-degree-of-freedom simulation of an automatically-controlled deHavilland C-8A STOL aircraft was used to determine the response to these errors. The results show that the aircraft response to all of the Crissy Field MLS multipath errors was small. The small MLS azimuth and elevation multipath errors did not result in any discernible aircraft motion, and the aircraft response to the relatively large (200-ft (61-m) peak) DME multipath was noticeable but small.

  17. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight quality handling simulation.

  18. Aberrant error processing in relation to symptom severity in obsessive–compulsive disorder: A multimodal neuroimaging study

    PubMed Central

    Agam, Yigal; Greenberg, Jennifer L.; Isom, Marlisa; Falkenstein, Martha J.; Jenike, Eric; Wilhelm, Sabine; Manoach, Dara S.

    2014-01-01

    Background Obsessive–compulsive disorder (OCD) is characterized by maladaptive repetitive behaviors that persist despite feedback. Using multimodal neuroimaging, we tested the hypothesis that this behavioral rigidity reflects impaired use of behavioral outcomes (here, errors) to adaptively adjust responses. We measured both neural responses to errors and adjustments in the subsequent trial to determine whether abnormalities correlate with symptom severity. Since error processing depends on communication between the anterior and the posterior cingulate cortex, we also examined the integrity of the cingulum bundle with diffusion tensor imaging. Methods Participants performed the same antisaccade task during functional MRI and electroencephalography sessions. We measured error-related activation of the anterior cingulate cortex (ACC) and the error-related negativity (ERN). We also examined post-error adjustments, indexed by changes in activation of the default network in trials surrounding errors. Results OCD patients showed intact error-related ACC activation and ERN, but abnormal adjustments in the post- vs. pre-error trial. Relative to controls, who responded to errors by deactivating the default network, OCD patients showed increased default network activation including in the rostral ACC (rACC). Greater rACC activation in the post-error trial correlated with more severe compulsions. Patients also showed increased fractional anisotropy (FA) in the white matter underlying rACC. Conclusions Impaired use of behavioral outcomes to adaptively adjust neural responses may contribute to symptoms in OCD. The rACC locus of abnormal adjustment and relations with symptoms suggests difficulty suppressing emotional responses to aversive, unexpected events (e.g., errors). Increased structural connectivity of this paralimbic default network region may contribute to this impairment. PMID:25057466

  19. Elastic stability of laminated, flat and curved, long rectangular plates subjected to combined inplane loads

    NASA Technical Reports Server (NTRS)

    Viswanathan, A. V.; Tamekuni, M.; Baker, L. L.

    1974-01-01

    A method is presented to predict theoretical buckling loads of long, rectangular flat and curved laminated plates with arbitrary orientation of the orthotropic axes in each lamina. The plate is subjected to combined inplane normal and shear loads. Arbitrary boundary conditions may be stipulated along the longitudinal sides of the plate. In the absence of inplane shear loads and extensional-shear coupling, the analysis is also applicable to finite-length plates. Numerical results are presented for curved laminated composite plates with various boundary conditions and subjected to various loadings. These results indicate some of the complexities involved in the numerical solution of the analysis for general laminates. The results also show that the reduced bending stiffness approximation, when applied to buckling problems, can lead to considerable error in some cases and therefore must be used with caution.

  20. [Application of AOTF in spectral analysis. 2. Application of self-constructed visible AOTF spectrophotometer].

    PubMed

    Peng, Rong-fei; He, Jia-yao; Zhang, Zhan-xia

    2002-02-01

    The performance of a self-constructed visible AOTF spectrophotometer is presented. The wavelength calibration of AOTF1 and AOTF2 is performed with a didymium glass using a fourth-order polynomial curve fitting method. The absolute error of the peak position is usually less than 0.7 nm. Compared with the commercial UV1100 spectrophotometer, the scanning speed of the AOTF spectrophotometer is much faster, but the resolution depends on the quality of the AOTF. The absorption spectra and the calibration curves of copper sulfate and alizarin red obtained with AOTF1 (Institute for Silicate, Shanghai, China) and AOTF2 (Brimrose, USA), respectively, are presented. The corresponding correlation coefficients of the calibration curves are 0.9991 and 0.9990, respectively. Preliminary results show that the self-constructed AOTF spectrophotometer is feasible.

  1. Stage-discharge relations for Black Warrior River at Warrior Dam near Eutaw, Alabama; updated 1985

    USGS Publications Warehouse

    Nelson, G.H.; Ming, C.O.

    1986-01-01

    The construction of Warrior Dam, completed in 1962, has resulted in changes to the stage-discharge relations in the vicinity. The scarcity of current-meter measurements, coupled with backwater conditions, makes definition of a single stage-discharge relation impossible without considerable error. However, as a useful alternative, limit curves were developed in 1983 that defined the limits of possible stage-discharge relations at the dam tailwater section. Since the 1983 report, 37 discharge values computed through the dam for the flood of December 1983 were used to verify or update the lower end of the limit curves. Data obtained from a current-meter measurement of the February 1961 flood (furnished by the U.S. Army Corps of Engineers) were used to update the upper end of the curves. This report presents the updated information. (USGS)

  2. Medical students' experiences with medical errors: an analysis of medical student essays.

    PubMed

    Martinez, William; Lo, Bernard

    2008-07-01

    This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.

  3. Computing daily mean streamflow at ungaged locations in Iowa by using the Flow Anywhere and Flow Duration Curve Transfer statistical methods

    USGS Publications Warehouse

    Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.

    2012-01-01

    The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. 
For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, are underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
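
    The Flow Anywhere method described above modifies the drainage-area-ratio transfer of same-day streamflow from a reference streamgage. Below is a minimal sketch of the plain, unregionalized drainage-area-ratio transfer that it builds on; the numbers are illustrative, and the report's actual equations add regionalized regression coefficients and reference-gage selection.

```python
def drainage_area_ratio_transfer(q_reference, area_reference, area_ungaged, exponent=1.0):
    """Transfer same-day daily mean streamflow from a reference streamgage to an
    ungaged site in proportion to the drainage-area ratio. The exponent is 1.0 in
    the plain method; regional studies often fit it from gaged basins."""
    return q_reference * (area_ungaged / area_reference) ** exponent

# Illustrative numbers only: the reference gage drains 450 mi2 and reads 320 ft3/s today;
# the ungaged site drains 610 mi2.
print(round(drainage_area_ratio_transfer(320.0, 450.0, 610.0), 1), "ft3/s")
```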

  4. A vignette study to examine health care professionals' attitudes towards patient involvement in error prevention.

    PubMed

    Schwappach, David L B; Frank, Olga; Davis, Rachel E

    2013-10-01

    Various authorities recommend the participation of patients in promoting patient safety, but little is known about health care professionals' (HCPs') attitudes towards patients' involvement in safety-related behaviours. To investigate how HCPs evaluate patients' behaviours and HCP responses to patient involvement in the behaviour, relative to different aspects of the patient, the involved HCP and the potential error. Cross-sectional fractional factorial survey with seven factors embedded in two error scenarios (missed hand hygiene, medication error). Each survey included two randomized vignettes that described the potential error, a patient's reaction to that error and the HCP response to the patient. Twelve hospitals in Switzerland. A total of 1141 HCPs (response rate 45%). Approval of patients' behaviour, HCP response to the patient, anticipated effects on the patient-HCP relationship, HCPs' support for being asked the question, affective response to the vignettes. Outcomes were measured on 7-point scales. Approval of patients' safety-related interventions was generally high and largely affected by patients' behaviour and correct identification of error. Anticipated effects on the patient-HCP relationship were much less positive, little correlated with approval of patients' behaviour and were mainly determined by the HCP response to intervening patients. HCPs expressed more favourable attitudes towards patients intervening about a medication error than about hand sanitation. This study provides the first insights into predictors of HCPs' attitudes towards patient engagement in safety. Future research is however required to assess the generalizability of the findings into practice before training can be designed to address critical issues. © 2012 John Wiley & Sons Ltd.

  5. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it with global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  6. Global Erratum for Kepler Q0-Q17 and K2 C0-C5 Short Cadence Data

    NASA Technical Reports Server (NTRS)

    Caldwell, Douglas; Van Cleve, Jeffrey E.

    2016-01-01

    An accounting error has scrambled much of the short-cadence collateral smear data used to correct for the effects of Kepler's shutterless readout. This error has been present since launch and affects approximately half of all short-cadence targets observed by Kepler and K2 to date. The resulting calibration errors are present in both the short-cadence target pixel files and the short-cadence light curves for Kepler Data Releases 1-24 and K2 Data Releases 1-7. This error does not affect long-cadence data. Since it will take some time to correct this error and reprocess all Kepler and K2 data, a list of affected targets is provided. Even though the affected targets are readily identified, the science impact for any particular target may be difficult to assess. Since the smear signal is often small compared to the target signal, the effect is negligible for many targets. However, the smear signal is scene-dependent, so time-varying signals can be introduced into any target by the other stars falling on the same CCD column. Some tips on how to assess the severity of the calibration error are provided in this document.

  7. Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range

    NASA Astrophysics Data System (ADS)

    Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong

    2011-12-01

    Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication and sealing. Precision measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement, and it is also regarded as the international standard method for 2-D surface characterization. Usually surface topography, including the primary profile, waviness and roughness, can be measured precisely and efficiently by this method. However, when a curved surface is measured by the stylus scanning method, a nonlinear error is unavoidable, because the horizontal position of the actual measured point differs from the given sampling point and because the transformation from the vertical displacement of the stylus tip to the angular displacement of the stylus arm is nonlinear; this error increases with the measuring range. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established, and a solution to decrease the nonlinear error is proposed, through which the error in the collected data is dynamically compensated.

  8. Use of dual coolant displacing media for in-process optical measurement of form profiles

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Xie, F.

    2018-07-01

    In-process measurement supports feedback control to reduce workpiece surface form error. Without it, the workpiece surface must be measured offline, causing significant errors in workpiece positioning and reduced productivity. To offer better performance, a new in-process optical measurement method based on the use of dual coolant displacing media is proposed and studied, in which air and liquid phases are used together to displace the coolant and achieve in-process measurement. In the proposed new design, coolant is used to replace the previously used clean water to avoid coolant dilution. Compared with the previous methods, the distance between the applicator and the workpiece surface can be relaxed to 1 mm. This is 4 times larger than before, thus permitting measurement of curved surfaces. The air consumption is up to 1.5 times less than that of the best method previously available. For a sample workpiece with curved surfaces, the relative error of profile measurement under coolant conditions can be as small as 0.1% compared with the one under no-coolant conditions. Problems in comparing measured 3D surfaces are discussed. A comparative study between a Bruker Npflex optical profiler and the developed new in-process optical profiler was conducted. For a surface area of 5.5 mm × 5.5 mm, the average measurement error under coolant conditions is only 0.693 µm. In addition, the error due to the new method is only 0.10 µm when compared between coolant and no-coolant conditions. The effect of a thin liquid film on the workpiece surface is discussed. The experimental results show that the new method can successfully solve the coolant dilution problem and is able to accurately measure the workpiece surface whilst fully submerged in the opaque coolant. The proposed new method is advantageous and should be very useful for in-process optical form profile measurement in precision machining.

  9. Recognition Errors Suggest Fast Familiarity and Slow Recollection in Rhesus Monkeys

    ERIC Educational Resources Information Center

    Basile, Benjamin M.; Hampton, Robert R.

    2013-01-01

    One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses…

  10. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2013-01-01

    The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
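
    The convergence patterns such an analysis examines follow from the relation e_{n+1} ≈ C·e_n^p between successive iteration errors, where p is the order of convergence. Below is a minimal sketch of estimating p by a straight-line fit on a log-log scale; the Newton iteration used for the check is illustrative, not an example from the article.

```python
import numpy as np

def estimate_order(errors):
    """Fit log(e_{n+1}) = log(C) + p*log(e_n); the slope p is the order of convergence."""
    e = np.asarray(errors, dtype=float)
    slope, _ = np.polyfit(np.log(e[:-1]), np.log(e[1:]), 1)
    return slope

# Illustrative: Newton's method for x^2 - 2 = 0 should show p close to 2 (quadratic)
x, root, errs = 1.0, 2.0 ** 0.5, []
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)
    errs.append(abs(x - root))
print(estimate_order(errs))
```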

  11. LOCATING NEARBY SOURCES OF AIR POLLUTION BY NONPARAMETRIC REGRESSION OF ATMOSPHERIC CONCENTRATIONS ON WIND DIRECTION. (R826238)

    EPA Science Inventory

    The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
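
    In its simplest form, the nonparametric regression described here is a Nadaraya-Watson estimator with a Gaussian kernel applied to an angular regressor. Below is a minimal sketch under that reading; the bandwidth, wrapping convention, and variable names are assumptions, not the study's choices.

```python
import numpy as np

def kernel_regression_on_direction(directions_deg, concentrations, grid_deg, bandwidth_deg=15.0):
    """Nadaraya-Watson smoothing of concentration against wind direction with a
    Gaussian kernel, wrapping angular differences onto [-180, 180) degrees."""
    d = np.asarray(directions_deg, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    smoothed = []
    for g in np.asarray(grid_deg, dtype=float):
        delta = (d - g + 180.0) % 360.0 - 180.0      # wrapped angular difference
        weights = np.exp(-0.5 * (delta / bandwidth_deg) ** 2)
        smoothed.append(np.sum(weights * c) / np.sum(weights))
    return np.array(smoothed)

# Usage: curve = kernel_regression_on_direction(obs_dirs_deg, obs_conc, np.arange(0, 360))
```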

  12. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and to compare the statistical power of the proposed tests under different trend-curve data, three statistical tests were proposed. For large sample sizes with independent normal assumptions among strata and across consecutive time points, the Z and Chi-square test statistics were developed, which are functions of the outcome estimates and standard errors at each of the study time points for the two strata. For small sample sizes with independent normal assumptions, the F-test statistic was generated, which is a function of the sample sizes of the two strata and the estimated parameters across the study period. If two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If two trend curves cross with low interaction, the power of the Z-test is higher than or equal to the power of both the Chi-square and F-tests; however, at high interaction, the powers of the Chi-square and F-tests are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
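
    The abstract describes Z and Chi-square statistics built from per-time-point estimates and standard errors for the two strata. The sketch below is one plausible construction consistent with that description, not a reproduction of the paper's exact test statistics.

```python
import numpy as np
from scipy import stats

def pointwise_z(est1, se1, est2, se2):
    """Z statistic comparing two independent stratum estimates at each time point."""
    est1, se1, est2, se2 = (np.asarray(a, dtype=float) for a in (est1, se1, est2, se2))
    return (est1 - est2) / np.sqrt(se1 ** 2 + se2 ** 2)

def combined_chi_square(est1, se1, est2, se2):
    """Sum of squared pointwise Z values, referred to a chi-square with T degrees
    of freedom (T = number of time points)."""
    z = pointwise_z(est1, se1, est2, se2)
    chi2 = float(np.sum(z ** 2))
    return chi2, stats.chi2.sf(chi2, df=z.size)
```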

  13. EVEREST: Pixel Level Decorrelation of K2 Light Curves

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Agol, Eric; Kruse, Ethan; Barnes, Rory; Becker, Andrew; Foreman-Mackey, Daniel; Deming, Drake

    2016-10-01

    We present EPIC Variability Extraction and Removal for Exoplanet Science Targets (EVEREST), an open-source pipeline for removing instrumental noise from K2 light curves. EVEREST employs a variant of pixel level decorrelation to remove systematics introduced by the spacecraft’s pointing error and a Gaussian process to capture astrophysical variability. We apply EVEREST to all K2 targets in campaigns 0-7, yielding light curves with precision comparable to that of the original Kepler mission for stars brighter than Kp ≈ 13, and within a factor of two of the Kepler precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that EVEREST achieves the highest average precision of any of these pipelines for unsaturated K2 stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The EVEREST pipeline can also easily be applied to future surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The EVEREST light curves and the source code used to generate them are freely available online.
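
    At its core, first-order pixel level decorrelation regresses the aperture-summed light curve on the fractional pixel light curves and removes the fitted systematic component. Below is a minimal sketch of that idea; it is not the EVEREST implementation, which adds higher PLD orders, a Gaussian process, and cross-validation.

```python
import numpy as np

def first_order_pld(pixel_fluxes):
    """pixel_fluxes: array of shape (n_cadences, n_pixels) from the photometric
    aperture. Returns a de-trended light curve: the aperture-summed flux is
    regressed on the fractional pixel light curves and the fit is removed."""
    sap_flux = pixel_fluxes.sum(axis=1)
    basis = pixel_fluxes / sap_flux[:, None]              # fractional pixel fluxes
    coeffs, *_ = np.linalg.lstsq(basis, sap_flux, rcond=None)
    systematics = basis @ coeffs
    return sap_flux - systematics + np.median(sap_flux)
```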

  14. Model comparison for Escherichia coli growth in pouched food.

    PubMed

    Fujikawa, Hiroshi; Yano, Kazuyoshi; Morozumi, Satoshi

    2006-06-01

    We recently studied the growth characteristics of Escherichia coli cells in pouched mashed potatoes (Fujikawa et al., J. Food Hyg. Soc. Japan, 47, 95-98 (2006)). Using those experimental data, in the present study we compared a logistic model newly developed by us with the modified Gompertz and Baranyi models, which are used as growth models worldwide. Bacterial growth curves at constant temperatures in the range of 12 to 34 degrees C were successfully described with the new logistic model, as well as with the other models. The Baranyi model gave the least error in cell number, and our model gave the least error in the rate constant and the lag period. For dynamic temperatures, our model successfully predicted the bacterial growth, whereas the Baranyi model considerably overestimated it. Also, there was a discrepancy between the growth curves described with the differential equations of the Baranyi model and those obtained with DMfit, a software program for Baranyi model fitting. These results indicate that the new logistic model can be used to predict bacterial growth in pouched food.
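
    As context for what such growth-model comparisons involve, the sketch below fits a standard four-parameter logistic curve to log cell counts. The authors' new logistic model and the Baranyi model contain additional terms, so this illustrates only the generic fitting step, on invented data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, n0, n_max, rate, t_mid):
    """Logistic curve between lower and upper asymptotes for log10 cell counts."""
    return n0 + (n_max - n0) / (1.0 + np.exp(-rate * (t - t_mid)))

# Invented observations: log10 CFU/g versus time (hours)
t_obs = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
n_obs = np.array([3.1, 3.4, 4.5, 6.0, 7.2, 7.8, 7.9])

params, _ = curve_fit(logistic, t_obs, n_obs, p0=[3.0, 8.0, 0.5, 10.0])
rmse = np.sqrt(np.mean((logistic(t_obs, *params) - n_obs) ** 2))
print(params, rmse)
```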

  15. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    NASA Astrophysics Data System (ADS)

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2013-04-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity.
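
    Weber's law as used above states that the standard deviation of magnitude estimates grows linearly with stimulus intensity. Below is a minimal simulation of that scaling, purely illustrative and not the paper's neural model.

```python
import numpy as np

rng = np.random.default_rng(0)

def weber_estimates(stimulus, weber_fraction=0.1, n_trials=10_000):
    """Simulate magnitude estimates whose error SD scales linearly with intensity."""
    return stimulus + rng.normal(0.0, weber_fraction * stimulus, size=n_trials)

for s in (1.0, 2.0, 4.0, 8.0):
    ratio = np.std(weber_estimates(s)) / s
    print(s, round(ratio, 3))   # roughly constant ratio = the Weber fraction
```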

  16. Shunt resistance and saturation current determination in CdTe and CIGS solar cells. Part 2: application to experimental IV measurements and comparison with other methods

    NASA Astrophysics Data System (ADS)

    Rangel-Kuoppa, Victor-Tapio; Albor-Aguilera, María-de-Lourdes; Hérnandez-Vásquez, César; Flores-Márquez, José-Manuel; Jiménez-Olarte, Daniel; Sastré-Hernández, Jorge; González-Trujillo, Miguel-Ángel; Contreras-Puente, Gerardo-Silverio

    2018-04-01

    In this Part 2 of this series of articles, the procedure proposed in Part 1, namely a new parameter extraction technique for the shunt resistance (Rsh) and saturation current (Isat) from a current-voltage (I-V) measurement of a solar cell within the one-diode model, is applied to CdS-CdTe and CIGS-CdS solar cells. First, the Cheung method is used to obtain the series resistance (Rs) and the ideality factor n. Afterwards, procedures A and B proposed in Part 1 are used to obtain Rsh and Isat. The procedure is compared with two other commonly used procedures. Better accuracy on the simulated I-V curves using the parameters extracted by our method is obtained. Also, the integral percentage errors from the simulated I-V curves using the method proposed in this study are one order of magnitude smaller compared with the integral percentage errors using the other two methods.
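
    The one-diode model underlying the extraction relates current and voltage implicitly through Isat, n, Rs, and Rsh. Below is a minimal sketch that evaluates the model by solving the implicit equation at each voltage point, which is useful when checking extracted parameters against a measured I-V curve; the parameter values are illustrative, not the article's.

```python
import numpy as np
from scipy.optimize import brentq

K_B, Q_E = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, elementary charge

def one_diode_current(v, i_ph, i_sat, n, r_s, r_sh, temp_k=300.0):
    """Solve I = Iph - Isat*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh for I."""
    vt = K_B * temp_k / Q_E
    def residual(i):
        return (i_ph - i_sat * (np.exp((v + i * r_s) / (n * vt)) - 1.0)
                - (v + i * r_s) / r_sh - i)
    return brentq(residual, -1.0, i_ph + 1.0)

# Illustrative parameters only (not values from the article)
iv_curve = [(v, one_diode_current(v, i_ph=0.030, i_sat=1e-9, n=1.8, r_s=5.0, r_sh=2e3))
            for v in np.linspace(0.0, 0.8, 9)]
```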

  17. Estimation of particulate nutrient load using turbidity meter.

    PubMed

    Yamamoto, K; Suetsugi, T

    2006-01-01

    The "Nutrient Load Hysteresis Coefficient" was proposed to evaluate the hysteresis of the nutrient loads to flow rate quantitatively. This could classify the runoff patterns of nutrient load into 15 patterns. Linear relationships between the turbidity and the concentrations of particulate nutrients were observed. It was clarified that the linearity was caused by the influence of the particle size on turbidity output and accumulation of nutrients on smaller particles (diameter < 23 microm). The L-Q-Turb method, which is a new method for the estimation of runoff loads of nutrients using a regression curve between the turbidity and the concentrations of particulate nutrients, was developed. This method could raise the precision of the estimation of nutrient loads even if they had strong hysteresis to flow rate. For example, as for the runoff load of total phosphorus load on flood events in a total of eight cases, the averaged error of estimation of total phosphorus load by the L-Q-Turb method was 11%, whereas the averaged estimation error by the regression curve between flow rate and nutrient load was 28%.

  18. Flight demonstrations of curved, descending approaches and automatic landings using time referenced scanning beam guidance

    NASA Technical Reports Server (NTRS)

    White, W. F. (Compiler)

    1978-01-01

    The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance and control equipment for research on advanced avionics systems. Demonstration flights that included curved approaches and automatic landings were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3 nautical mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet). These values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.

  19. Mnemonic strategies in older people: a comparison of errorless and errorful learning.

    PubMed

    Kessels, Roy P C; de Haan, Edward H F

    2003-09-01

    To compare the efficacy of errorless and errorful learning on memory performance in older people and young adults. Face-name association learning was examined in 18 older people and 16 young controls. Subjects were either prompted to guess the correct name during the presentation of photographs of unknown faces (errorful learning) or were instructed to study the face without guessing (errorless learning). The correct name was given after the presentation of each face in both task conditions. Uncued testing followed immediately after the two study phases and after a 10-minute delay. Older subjects had an overall lower memory performance and flatter learning curves compared to the young adults, regardless of task conditions. Also, errorless learning resulted in a higher accuracy than errorful learning, to an equal amount in both groups. Older people have difficulty in the encoding stages of face-name association learning, whereas retrieval is relatively unaffected. In addition, the prevention of errors occurring during learning results in a better memory performance, and is perhaps an effective strategy for coping with age-related memory decrement.

  20. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
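
    The forecasting step described above (an ARIMA model producing predicted values with confidence limits) can be sketched with statsmodels. The model order, series, and horizon below are illustrative and not the study's fitted model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative monthly series for one parameter (e.g., dissolved oxygen in mg/L)
rng = np.random.default_rng(1)
values = 7.0 + 1.5 * np.sin(np.arange(60) * 2.0 * np.pi / 12.0) + rng.normal(0.0, 0.3, 60)
series = pd.Series(values, index=pd.date_range("2009-01-01", periods=60, freq="MS"))

model = ARIMA(series, order=(1, 0, 1)).fit()     # order chosen for illustration only
forecast = model.get_forecast(steps=12)
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))             # 95% confidence limits
```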
