NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Mengistu, Zelalem
2016-12-01
In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter-parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency of 0.73 was obtained using the new estimated storage routine, compared with 0.75 using the calibrated storage routine. The average Kling-Gupta efficiency was 0.80 and 0.81 for the new and old storage routines, respectively. Runoff recessions are more realistically modelled using the new approach, since the root mean square error between the mean of observed and simulated recession characteristics was reduced by almost 50 % using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated to catchment characteristics, which is potentially useful for predictions in ungauged basins.
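As a rough illustration of the inputs this storage routine needs, the sketch below (Python, with illustrative variable names and a simplified screening rule for recession days) extracts a sample of daily recession characteristics, Lambda = ln Q(t) - ln Q(t+1), and the mean annual runoff from a runoff record. How these statistics map onto the catchment-scale storage distribution follows the paper and is not reproduced here.

```python
import numpy as np

def recession_characteristics(q, precip, eps=1e-9):
    """Daily recession rates Lambda = ln(q_t) - ln(q_{t+1}), restricted to
    declining-flow days without precipitation (a common screening rule;
    the exact screening used with the DDD model may differ)."""
    q = np.asarray(q, dtype=float)
    declining = (q[1:] < q[:-1]) & (precip[:-1] <= 0.0)
    lam = np.log(q[:-1][declining] + eps) - np.log(q[1:][declining] + eps)
    return lam[lam > 0]

def storage_routine_parameters(q, precip):
    """Quantities assumed by this sketch: the mean and coefficient of
    variation of the recession distribution (its 'shape') and the mean
    annual runoff, all obtained before any calibration against runoff."""
    lam = recession_characteristics(q, precip)
    return {
        "lambda_mean": lam.mean(),
        "lambda_cv": lam.std(ddof=1) / lam.mean(),
        "mean_annual_runoff": np.asarray(q, float).mean() * 365.25,  # mm/yr
    }

# Example with synthetic daily data (mm/day)
rng = np.random.default_rng(0)
precip = rng.gamma(0.5, 4.0, size=3650) * (rng.random(3650) < 0.4)
q = np.maximum(0.1, 2.0 + 0.1 * np.convolve(precip, np.exp(-np.arange(30) / 5.0), "same")
               + rng.normal(0, 0.05, 3650))
print(storage_routine_parameters(q, precip))
```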
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
A regression analysis was performed on the tabular aerodynamic data provided. It yielded a representative aerodynamic model for coefficient estimation and also reduced the storage requirements for the "normal" model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.
NASA Astrophysics Data System (ADS)
Skaugen, T.; Mengistu, Z.
2015-10-01
In this study we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff and hence are estimated prior to calibration. Key principles guiding the evaluation of the new subsurface storage routine have been (a) to minimize the number of parameters to be estimated through the often arbitrary fitting used to optimize runoff predictions (calibration) and (b) to maximize the range of testing conditions (i.e. large-sample hydrology). The new storage routine has been implemented in the already parameter-parsimonious Distance Distribution Dynamics (DDD) model and tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. No loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency of 0.68 was found using the new estimated storage routine compared with 0.66 using the calibrated storage routine. The average Kling-Gupta efficiency was 0.69 and 0.70 for the new and old storage routines, respectively. Runoff recessions are more realistically modelled using the new approach since the root mean square error between the mean of observed and simulated recessions was reduced by almost 50 % using the new storage routine.
INDIRECT ESTIMATION OF CONVECTIVE BOUNDARY LAYER STRUCTURE FOR USE IN ROUTINE DISPERSION MODELS
Dispersion models of the convectively driven atmospheric boundary layer (ABL) often require as input meteorological parameters that are not routinely measured. These parameters usually include (but are not limited to) the surface heat and momentum fluxes, the height of the cappin...
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy
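The tuner-selection idea is, in essence, a multi-variable search over an m x p transformation matrix (m sensors, p > m health parameters) that minimizes a theoretical mean-squared-error objective. The sketch below shows only that outer search loop; the closed-form bias and variance expressions derived in the paper would replace the placeholder objective, and all names, dimensions, and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def theoretical_mse(V, objective_data):
    """Placeholder for the theoretical Kalman-filter estimation error
    (bias^2 + variance) of the health parameters as a function of the
    tuner transformation matrix V.  The paper's closed-form expressions
    would go here; this stand-in just penalizes departure of V @ V.T
    from the identity so the example runs end to end."""
    m = objective_data["m"]
    return np.sum((V @ V.T - np.eye(m)) ** 2)

def select_tuners(m, p, objective_data, seed=0):
    """Iterative multi-variable search over an m x p tuner transformation
    matrix, in the spirit of the tuner-selection routine described above."""
    rng = np.random.default_rng(seed)
    v0 = rng.normal(size=m * p)
    res = minimize(lambda v: theoretical_mse(v.reshape(m, p), objective_data),
                   v0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
    return res.x.reshape(m, p), res.fun

V_star, mse = select_tuners(m=4, p=8, objective_data={"m": 4})
print(V_star.shape, round(mse, 6))
```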
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
ERIC Educational Resources Information Center
DeMars, Christine E.
2012-01-01
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1980-01-01
A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.
A new approach to estimating trends in chlamydia incidence.
Ali, Hammad; Cameron, Ewan; Drovandi, Christopher C; McCaw, James M; Guy, Rebecca J; Middleton, Melanie; El-Hayek, Carol; Hocking, Jane S; Kaldor, John M; Donovan, Basil; Wilson, David P
2015-11-01
Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method for estimating at a population level the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex using routine surveillance data. A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted for priors on the time-independent parameters; the shapes of these beta parameters were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour small changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, that is, prevalence and incidence as reported by other studies. Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ∼120% over 12 years. Nationally, an estimated 356 000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to a chlamydia annual incidence estimate of 1.54% in 2013, increased from 0.81% in 2001 (∼90% increase). We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent and trends in chlamydia incidence.
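A small sketch of the prior structure described above may help: a separable Matérn covariance over a year-by-age grid yields a multivariate Gaussian prior, on the logistic scale, that favours small changes between consecutive years and adjacent age cohorts. The grid values, length-scales, and Matérn smoothness (3/2) below are assumptions of the sketch, not taken from the paper.

```python
import numpy as np

def matern32(d, lengthscale, variance=1.0):
    """Matern covariance with smoothness 3/2 as a function of distance d."""
    a = np.sqrt(3.0) * d / lengthscale
    return variance * (1.0 + a) * np.exp(-a)

# Grid of years x age-cohort midpoints (values and length-scales illustrative)
years = np.arange(2001, 2014)
ages = np.array([17, 22, 27, 32])
yy, aa = np.meshgrid(years, ages, indexing="ij")
pts = np.column_stack([yy.ravel(), aa.ravel()])

# Separable covariance: product of a Matern kernel in time and one in age
d_year = np.abs(pts[:, 0:1] - pts[:, 0:1].T)
d_age = np.abs(pts[:, 1:2] - pts[:, 1:2].T)
K = matern32(d_year, lengthscale=3.0) * matern32(d_age, lengthscale=10.0)

# One prior draw of a time/age-varying parameter on the logistic scale,
# mapped back to a probability; nearby years and cohorts vary smoothly.
rng = np.random.default_rng(1)
logit_theta = rng.multivariate_normal(np.full(len(pts), -1.0),
                                      K + 1e-8 * np.eye(len(pts)))
theta = 1.0 / (1.0 + np.exp(-logit_theta))
print(theta.reshape(len(years), len(ages)).round(2))
```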
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, W. M.
1977-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. FORTRAN subroutines have been developed to facilitate analyses of a variety of parameter estimation problems. Easy to use multipurpose sets of algorithms are reported that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used.
An evaluation of percentile and maximum likelihood estimators of Weibull parameters
Stanley J. Zarnoch; Tommy R. Dell
1985-01-01
Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had superior (smaller) bias and...
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
ProUCL Version 4.0 Technical Guide
Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...
GEODYN programmers guide, volume 2, part 1
NASA Technical Reports Server (NTRS)
Mullins, N. E.; Goad, C. C.; Dao, N. C.; Martin, T. V.; Boulware, N. L.; Chin, M. M.
1972-01-01
A guide to the GEODYN Program is presented. The program estimates orbit and geodetic parameters. It possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fit the entire set of estimated parameters. GEODYN consists of 113 different program segments, including the main program, subroutines, functions, and block data routines. All are in G or H level FORTRAN and are currently operational on GSFC's IBM 360/95 and IBM 360/91.
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, M. W.
1978-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. A library of FORTRAN subroutines was developed to facilitate analyses of a variety of estimation problems. An easy to use, multi-purpose set of algorithms that are reasonably efficient and use a minimal amount of computer storage is presented. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameters uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameters uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameters uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
Fetal kidney length as a useful adjunct parameter for better determination of gestational age.
Ugur, Mete G; Mustafa, Aynur; Ozcan, Huseyin C; Tepe, Neslihan B; Kurt, Huseyin; Akcil, Emre; Gunduz, Reyhan
2016-05-01
To determine the validity of fetal kidney length and amniotic fluid index (AFI) in labor dating. This prospective study included 180 pregnant women followed up in the outpatient clinic at the Department of Obstetrics and Gynecology, Gaziantep University, Turkey, between January 2014 and January 2015. The gestational age (GA) was estimated by early fetal ultrasound measures and last menstrual period. Routine fetal biometric parameters, fetal kidney length, and amniotic fluid index were measured. We studied the correlation between fetal kidney length, amniotic fluid index, and gestational age. The mean gestational age depending on last menstrual period and early ultrasound was 31.98±4.29 weeks (range: 24-39 weeks). The mean kidney length was 35.66±6.61 mm (range: 19-49 mm). There was a significant correlation between gestational age and fetal kidney length (r=0.947, p=0.001). However, there was a moderate negative correlation between GA and AFI. Adding fetal kidney length to the routine biometrics improved the effectiveness of the model used to estimate GA (R2=0.965 to R2=0.987). Gestational age can be better predicted by adding fetal kidney length to other routine parameters.
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2013-12-01
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1980-01-01
A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
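A batch analogue of the correction may clarify the idea: with colored residuals, the usual white-noise covariance sigma^2 (X'X)^-1 is replaced by a sandwich form built from the estimated residual autocorrelation. The sketch below is not the recursive formulation of the paper; the truncation lag and the data are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def corrected_covariance(X, y, max_lag=None):
    """Least-squares estimates with a parameter covariance corrected for
    colored residuals: Cov = (X'X)^-1 X' R X (X'X)^-1, where R is a
    Toeplitz matrix built from the estimated residual autocovariance
    (a batch stand-in for the recursive correction described above)."""
    n = len(y)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ theta
    max_lag = n - 1 if max_lag is None else max_lag
    acov = np.array([resid[:n - k] @ resid[k:] / n for k in range(n)])
    acov[max_lag + 1:] = 0.0                       # truncate long lags
    R = toeplitz(acov)
    XtXinv = np.linalg.inv(X.T @ X)
    return theta, XtXinv @ X.T @ R @ X @ XtXinv

# Demonstration: AR(1) residuals inflate the scatter of the estimates
# relative to what the white-residual formula would predict.
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.9 * e[t - 1] + rng.normal(scale=0.1)
y = X @ np.array([1.0, -0.5]) + e
theta, cov = corrected_covariance(X, y, max_lag=50)
print(theta.round(3), np.sqrt(np.diag(cov)).round(4))
```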
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age groups (P <.001) and between-methods (P <.001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors that can arise from exhaustive manual calculations. PMID:19641642
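The underlying arithmetic is the indirect Fick principle applied repeatedly, once per predicted oxygen-consumption value, to obtain a likely range rather than a single estimate. The sketch below illustrates that idea only; the five VO2 prediction equations used in the spreadsheet are not reproduced, and all numbers are purely illustrative.

```python
def o2_content(hb_g_dl, saturation):
    """Oxygen content in mL O2 per litre of blood (1.36 mL O2 per g Hb,
    Hb in g/dL, factor 10 converts dL to L; dissolved O2 neglected)."""
    return 1.36 * hb_g_dl * saturation * 10.0

def fick_flows(vo2_ml_min, hb, sat_ao, sat_mv, sat_pv, sat_pa):
    """Indirect Fick: systemic (Qs) and pulmonary (Qp) blood flow in L/min
    from one assumed oxygen consumption and measured O2 saturations
    (ao = aortic, mv = mixed venous, pv = pulmonary venous, pa = pulmonary arterial)."""
    qs = vo2_ml_min / (o2_content(hb, sat_ao) - o2_content(hb, sat_mv))
    qp = vo2_ml_min / (o2_content(hb, sat_pv) - o2_content(hb, sat_pa))
    return qs, qp

# Replicate estimates from several predicted VO2 values (illustrative only),
# giving an upper and lower limit for each derived parameter.
predicted_vo2 = [118.0, 132.0, 140.0, 151.0, 163.0]     # mL/min
estimates = [fick_flows(v, hb=12.0, sat_ao=0.98, sat_mv=0.72,
                        sat_pv=0.99, sat_pa=0.85) for v in predicted_vo2]
qs_vals, qp_vals = zip(*estimates)
print(f"Qs range: {min(qs_vals):.2f}-{max(qs_vals):.2f} L/min")
print(f"Qp range: {min(qp_vals):.2f}-{max(qp_vals):.2f} L/min")
```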
Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B; Gregson, Simon; Eaton, Jeffrey W; Bao, Le
2017-04-01
HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women and can be used to improve estimates of national and subnational HIV prevalence trends. We develop methods to incorporate this new data source into the Joint United Nations Programme on HIV/AIDS (UNAIDS) Estimation and Projection Package in Spectrum 2017. We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (site-level) or regionally (census-level), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends and should be tested as more data become available from national ANC-RT programs.
Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B.; Gregson, Simon; Eaton, Jeffrey W.; Bao, Le
2017-01-01
Objective HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women, and can be used to improve estimates of national and sub-national HIV prevalence trends. We develop methods to incorporate this new data source into the UNAIDS Estimation and Projection Package (EPP) in Spectrum 2017. Methods We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (‘site-level’) or regionally (‘census-level’), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. Results We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. Conclusion We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends, and should be tested as more data become available from national ANC-RT programs. PMID:28296804
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
CERES: A Set of Automated Routines for Echelle Spectra
NASA Astrophysics Data System (ADS)
Brahm, Rafael; Jordán, Andrés; Espinoza, Néstor
2017-03-01
We present the Collection of Elemental Routines for Echelle Spectra (CERES). These routines were developed for the construction of automated pipelines for the reduction, extraction, and analysis of spectra acquired with different instruments, allowing homogeneous and standardized results to be obtained. This modular code includes tools for handling the different steps of the processing: CCD image reductions; identification and tracing of the echelle orders; optimal and rectangular extraction; computation of the wavelength solution; estimation of radial velocities; and rough and fast estimation of the atmospheric parameters. Currently, CERES has been used to develop automated pipelines for 13 different spectrographs, namely CORALIE, FEROS, HARPS, ESPaDOnS, FIES, PUCHEROS, FIDEOS, CAFE, DuPont/Echelle, Magellan/Mike, Keck/HIRES, Magellan/PFS, and APO/ARCES, but the routines can be easily used to deal with data coming from other spectrographs. We show the high precision in radial velocity that CERES achieves for some of these instruments, and we briefly summarize some results that have already been obtained using the CERES pipelines.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Maxine: A spreadsheet for estimating dose from chronic atmospheric radioactive releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, Tim; Bell, Evaleigh; Dixon, Kenneth
MAXINE is an EXCEL© spreadsheet, which is used to estimate dose to individuals for routine and accidental atmospheric releases of radioactive materials. MAXINE does not contain an atmospheric dispersion model, but rather doses are estimated using air and ground concentrations as input. Minimal input is required to run the program, and site-specific parameters are used when possible. A complete code description, model verification, and user's manual are included.
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
Bootstrap Methods: A Very Leisurely Look.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Winstead, Wayland H.
The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
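For readers unfamiliar with the technique, a minimal case-resampling bootstrap for regression coefficient standard errors looks like the following (a Python analogue of the idea, not the SAS routine described in the entry above).

```python
import numpy as np

def bootstrap_se(X, y, n_boot=2000, seed=0):
    """Nonparametric (case-resampling) bootstrap standard errors for
    ordinary least-squares regression coefficients."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coefs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        coefs[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return coefs.std(axis=0, ddof=1)

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 0.5]) + rng.normal(scale=1.0, size=n)
print(bootstrap_se(X, y).round(3))
```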
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
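A modern equivalent of this workflow (given data, model differential equations, initial conditions, and initial parameter estimates, iteratively adjust the parameters for a least-squares fit) can be sketched as follows. The two-compartment model and all numerical values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def two_compartment(t, x, k12, k21, k10):
    """Linear two-compartment exchange with elimination from compartment 1."""
    x1, x2 = x
    return [-(k12 + k10) * x1 + k21 * x2, k12 * x1 - k21 * x2]

def predict(params, t_obs, x0):
    sol = solve_ivp(two_compartment, (0.0, t_obs[-1]), x0, args=tuple(params),
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0]                               # observe compartment 1 only

def fit(t_obs, y_obs, x0, initial_guess):
    """Iterative least-squares adjustment of the rate constants, starting
    from user-supplied initial estimates (the same ingredients the routine
    described above requires)."""
    res = least_squares(lambda p: predict(p, t_obs, x0) - y_obs,
                        initial_guess, bounds=(0.0, np.inf))
    return res.x

# Synthetic tracer data from known rate constants, then recovery by fitting
rng = np.random.default_rng(3)
t_obs = np.linspace(0.5, 24.0, 20)
y_obs = predict((0.3, 0.1, 0.2), t_obs, x0=[100.0, 0.0]) \
        + rng.normal(scale=0.5, size=t_obs.size)
print(fit(t_obs, y_obs, x0=[100.0, 0.0], initial_guess=[0.5, 0.5, 0.5]).round(3))
```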
Naci, Huseyin; de Lissovoy, Gregory; Hollenbeak, Christopher; Custer, Brian; Hofmann, Axel; McClellan, William; Gitlin, Matthew
2012-01-01
To determine whether Medicare's decision to cover routine administration of erythropoietin stimulating agents (ESAs) to treat anemia of end-stage renal disease (ESRD) has been a cost-effective policy relative to standard of care at the time. The authors used summary statistics from the actual cohort of ESRD patients receiving ESAs between 1995 and 2004 to create a simulated patient cohort, which was compared with a comparable simulated cohort assumed to rely solely on blood transfusions. Outcomes modeled from the Medicare perspective included estimated treatment costs, life-years gained, and quality-adjusted life-years (QALYs). Incremental cost-effectiveness ratio (ICER) was calculated relative to the hypothetical reference case of no ESA use in the transfusion cohort. Sensitivity of the results to model assumptions was tested using one-way and probabilistic sensitivity analyses. Estimated total costs incurred by the ESRD population were $155.47B for the cohort receiving ESAs and $155.22B for the cohort receiving routine blood transfusions. Estimated QALYs were 2.56M and 2.29M, respectively, for the two groups. The ICER of ESAs compared to routine blood transfusions was estimated as $873 per QALY gained. The model was sensitive to a number of parameters according to one-way and probabilistic sensitivity analyses. This model was counter-factual as the actual comparison group, whose anemia was managed via transfusion and iron supplements, rapidly disappeared following introduction of ESAs. In addition, a large number of model parameters were obtained from observational studies due to the lack of randomized trial evidence in the literature. This study indicates that Medicare's coverage of ESAs appears to have been cost effective based on commonly accepted levels of willingness-to-pay. The ESRD population achieved substantial clinical benefit at a reasonable cost to society.
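The headline figure follows directly from the incremental cost-effectiveness ratio, ICER = (incremental cost) / (incremental QALYs). Recomputing it from the rounded totals quoted above gives roughly $926 per QALY; the difference from the reported $873 per QALY reflects rounding of the quoted inputs, not a different formula.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Totals quoted in the abstract above: $155.47B / 2.56M QALYs (ESA cohort)
# versus $155.22B / 2.29M QALYs (transfusion cohort).
print(round(icer(155.47e9, 2.56e6, 155.22e9, 2.29e6)))  # ~926 $/QALY from rounded inputs
```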
Joseph L. Ganey; Regis H. Cassidy; William M. Block
2008-01-01
Canopy cover has been identified as an important correlate of Mexican spotted owl (Strix occidentalis lucida) habitat, yet management guidelines in a 1995 U.S. Fish and Wildlife Service recovery plan for the Mexican spotted owl did not address canopy cover. These guidelines emphasized parameters included in U.S. Forest Service stand exams, and...
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square statistic. Utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function such that chi-square is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
Feasibility of dual-energy computed tomography in radiation therapy planning
NASA Astrophysics Data System (ADS)
Sheen, Heesoon; Shin, Han-Back; Cho, Sungkoo; Cho, Junsang; Han, Youngyih
2017-12-01
In this study, the noise level, effective atomic number (Zeff), accuracy of the computed tomography (CT) number, and the CT number to relative electron density (ED) conversion curve were estimated for virtual monochromatic energy and polychromatic energy. These values were compared to the theoretically predicted values to investigate the feasibility of the use of dual-energy CT in routine radiation therapy planning. The accuracies of the parameters were within the range of acceptability. These results can serve as a stepping stone toward the routine use of dual-energy CT in radiotherapy planning.
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, M. W.
1978-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
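The claim that these routines are "far superior to the normal equation ... algorithms" is essentially about numerical conditioning: orthogonal-factorization (square-root) methods avoid forming A'A, which squares the condition number of the problem. A small illustration of that point (not code from the package) follows.

```python
import numpy as np

def solve_normal_equations(A, b):
    """Classical normal-equations solution (squares the condition number)."""
    return np.linalg.solve(A.T @ A, A.T @ b)

def solve_qr(A, b):
    """Orthogonal-factorization solution, the numerically preferred route
    taken by square-root information methods."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

# Ill-conditioned polynomial design matrix: the QR route stays accurate
# while the normal equations are substantially less accurate.
rng = np.random.default_rng(4)
n, p = 200, 4
t = np.linspace(0.0, 1.0, n)
A = np.column_stack([t ** k for k in range(p)]) * np.array([1.0, 1.0, 1.0, 1e-6])
x_true = np.array([1.0, -2.0, 3.0, 4.0e6])
b = A @ x_true + rng.normal(scale=1e-6, size=n)
for solver in (solve_normal_equations, solve_qr):
    err = np.linalg.norm(solver(A, b) - x_true) / np.linalg.norm(x_true)
    print(solver.__name__, f"relative error {err:.1e}")
```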
NASA Astrophysics Data System (ADS)
Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno
2013-09-01
Well credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute the parameters using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting to them an electron density model. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and to retrieve NmF2 and hmF2 values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps—estimated by the Least Squares technique—corresponds to about 7 % of the estimated value for the F2-peak electron density, and ranges from 2.0 to 5.6 km (2 %) for the height.
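A minimal sketch of the re-weighted least-squares idea: fit a profile model to each occultation, recompute robust weights from the residuals, and refit, so that unreliable points are down-weighted before NmF2 and hmF2 are read off. The Chapman-layer profile, Huber weights, and all values below are stand-ins, not the specific model or weighting scheme of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def chapman(h, nm, hm, H):
    """Chapman-layer electron density profile (a standard stand-in for the
    profile model fitted to each radio-occultation profile)."""
    z = np.clip((h - hm) / H, -40.0, 40.0)
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

def irls_fit(h, ne, p0, n_iter=10, k=1.345):
    """Iteratively re-weighted least squares: refit with Huber weights
    computed from the previous residuals, down-weighting unreliable points."""
    w = np.ones_like(ne)
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        res = least_squares(lambda q: np.sqrt(w) * (chapman(h, *q) - ne),
                            p, x_scale="jac")
        p = res.x
        r = chapman(h, *p) - ne
        s = 1.4826 * np.median(np.abs(r)) + 1e-12     # robust residual scale
        u = np.abs(r) / (k * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)          # Huber weights
    return p

# Synthetic profile with a corrupted segment; IRLS still recovers the peak.
rng = np.random.default_rng(5)
h = np.linspace(150.0, 600.0, 90)                      # km
ne = chapman(h, nm=8e11, hm=300.0, H=60.0) + rng.normal(scale=2e10, size=h.size)
ne[60:70] *= 0.3                                       # unreliable stretch
nm, hm, H = irls_fit(h, ne, p0=[5e11, 250.0, 50.0])
print(f"NmF2 ~ {nm:.2e} el/m^3, hmF2 ~ {hm:.1f} km")
```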
HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.
Schroeder, Mark J; Perreault, Bill; Ewert, Daniel L; Koenig, Steven C
2004-07-01
A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ascii or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework toward which Good Laboratory Practice (GLP) compliance can be obtained. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.
Ascent/descent ancillary data production user's guide
NASA Technical Reports Server (NTRS)
Brans, H. R.; Seacord, A. W., II; Ulmer, J. W.
1986-01-01
The Ascent/Descent Ancillary Data Product, also called the A/D BET because it contains a Best Estimate of the Trajectory (BET), is a collection of trajectory, attitude, and atmospheric related parameters computed for the ascent and descent phases of each Shuttle Mission. These computations are executed shortly after the event in a post-flight environment. A collection of several routines including some stand-alone routines constitute what is called the Ascent/Descent Ancillary Data Production Program. A User's Guide for that program is given. It is intended to provide the reader with all the information necessary to generate an Ascent or a Descent Ancillary Data Product. It includes descriptions of the input data and output data for each routine, and contains explicit instructions on how to run each routine. A description of the final output product is given.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
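The core of the algorithm, expanding chi-square to second order about the current estimate and solving the resulting linear system at each iteration, is the familiar Gauss-Newton step (J'WJ) dp = J'W r. A compact Python sketch with an illustrative fitting function (not the NLINEAR code itself) follows.

```python
import numpy as np

def model(x, a, b, c):
    """Example fitting function; the program lets the user supply any form."""
    return a * np.exp(-b * x) + c

def jacobian(x, a, b, c):
    return np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x), np.ones_like(x)])

def gauss_newton(x, y, sigma, p0, n_iter=50, tol=1e-10):
    """Minimize chi^2 = sum((y - f)/sigma)^2 by solving (J'WJ) dp = J'W r
    at each step, starting from user-supplied initial estimates."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(1.0 / sigma ** 2)                      # statistical weights
    for _ in range(n_iter):
        r = y - model(x, *p)
        J = jacobian(x, *p)
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p += dp
        if np.linalg.norm(dp) < tol:
            break
    r = y - model(x, *p)
    J = jacobian(x, *p)
    chi2 = float(r @ W @ r)
    cov = np.linalg.inv(J.T @ W @ J)                   # parameter covariance estimate
    return p, chi2, cov

rng = np.random.default_rng(6)
x = np.linspace(0.0, 5.0, 40)
sigma = np.full_like(x, 0.05)
y = model(x, 2.0, 1.3, 0.5) + rng.normal(scale=sigma)
p, chi2, cov = gauss_newton(x, y, sigma, p0=[1.0, 1.0, 0.0])
print(p.round(3), round(chi2, 1), np.sqrt(np.diag(cov)).round(3))
```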
NASA Astrophysics Data System (ADS)
Liu, Xiaomang; Liu, Changming; Brutsaert, Wilfried
2016-12-01
The performance of a nonlinear formulation of the complementary principle for evaporation estimation was investigated in 241 catchments with different climate conditions in the eastern monsoon region of China. Evaporation (Ea) calculated by the water balance equation was used as the reference. Ea estimated by the calibrated nonlinear formulation was generally in good agreement with the water balance results, especially in relatively dry catchments. The single parameter in the nonlinear formulation, namely αe as a weak analog of the alpha parameter of Priestley and Taylor (1972), tended to exhibit larger values in warmer and humid near-coastal areas, but smaller values in colder, drier environments inland, with a significant dependency on the aridity index (AI). The nonlinear formulation, combined with the equation relating its single parameter to AI, provides a promising method to estimate regional Ea with standard and routinely measured meteorological data.
Analysis of counting data: Development of the SATLAS Python package
NASA Astrophysics Data System (ADS)
Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.
2018-01-01
For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms which are suited for analyzing low, as well as high, statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.
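The central point, that a least-squares cost mishandles low-count bins while a Poisson likelihood does not, can be illustrated with a toy peak fit. The model and data below are illustrative and unrelated to the SATLAS internals.

```python
import numpy as np
from scipy.optimize import minimize

def peak(x, amp, mu, sigma, bkg):
    """Simple peak-plus-background model standing in for a hyperfine spectrum."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

def chi2_cost(p, x, counts):
    """Least-squares cost with sqrt(N) errors (breaks down when counts ~ 0)."""
    err = np.sqrt(np.maximum(counts, 1.0))
    return np.sum(((counts - peak(x, *p)) / err) ** 2)

def poisson_nll(p, x, counts):
    """Poisson negative log-likelihood, the appropriate statistic for
    low-statistics counting data (constant terms dropped)."""
    mu = np.maximum(peak(x, *p), 1e-12)
    return np.sum(mu - counts * np.log(mu))

rng = np.random.default_rng(7)
x = np.linspace(-10.0, 10.0, 60)
counts = rng.poisson(peak(x, amp=5.0, mu=1.5, sigma=2.0, bkg=0.2))
p0 = [3.0, 0.0, 3.0, 0.5]
for cost in (chi2_cost, poisson_nll):
    fit = minimize(cost, p0, args=(x, counts), method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-9, "xatol": 1e-9})
    print(cost.__name__, fit.x.round(3))
```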
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and 410 to 1316 in the transverse colon.
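The Least Median of Squares estimator is typically computed by fitting exact solutions to many random minimal subsets and keeping the fit whose median squared residual over all points is smallest. The sketch below applies it to a simple line with gross outliers standing in for bad flow vectors; it is not the colonoscopy egomotion code.

```python
import numpy as np

def lms_fit(X, y, n_trials=500, seed=0):
    """Least Median of Squares: fit exact solutions to random minimal
    subsets and keep the one with the smallest median squared residual;
    outliers barely influence the result."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best, best_med = beta, med
    return best

# Line with 30% gross outliers, standing in for erroneous flow vectors
rng = np.random.default_rng(8)
n = 200
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 3.0 * x + rng.normal(scale=0.05, size=n)
bad = rng.random(n) < 0.3
y[bad] += rng.uniform(5.0, 10.0, bad.sum())
ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("LS :", ols.round(2))
print("LMS:", lms_fit(X, y).round(2))
```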
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
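For reference, the background-corrected intensity in this convolution model is the conditional expectation E[S | O = PM], for which a closed form is commonly quoted; the sketch below uses that form with purely illustrative parameter values. The point of the paper is precisely that the estimates of alpha, mu and sigma fed into this expression matter.

```python
import numpy as np
from scipy.stats import norm

def rma_background_correct(pm, alpha, mu, sigma):
    """Conditional expectation E[S | O = pm] under the exponential-normal
    convolution O = S + B, S ~ Exp(alpha), B ~ N(mu, sigma^2), using the
    closed form commonly quoted for RMA.  The estimates of alpha, mu and
    sigma must be supplied separately; how they are obtained is exactly
    what the paper above re-examines."""
    a = pm - mu - sigma ** 2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((pm - a) / b)
    den = norm.cdf(a / b) + norm.cdf((pm - a) / b) - 1.0
    return a + b * num / den

# Illustrative values only (not estimates from real GeneChip data)
pm = np.array([60.0, 120.0, 300.0, 1500.0])
print(rma_background_correct(pm, alpha=0.01, mu=100.0, sigma=20.0).round(2))
```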
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
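The simplest of the geometric distance relations, for a vertical fault, follows from the Pythagorean theorem: the closest distance to the rupture is obtained from the Joyner-Boore distance and the depth to the top of rupture. The sketch below shows only that special case; the dipping-fault and hanging-wall relations treated in the paper are more involved.

```python
import math

def rrup_vertical_fault(rjb, ztor):
    """Closest distance to the rupture plane for a vertical fault, given the
    Joyner-Boore distance (rjb, km) and depth to top of rupture (ztor, km)."""
    return math.hypot(rjb, ztor)

# When the rupture reaches the surface (ztor = 0), Rrup equals Rjb.
for rjb in (0.0, 5.0, 20.0):
    print(rjb, round(rrup_vertical_fault(rjb, ztor=3.0), 2))
```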
Predicting responses from Rasch measures.
Linacre, John M
2010-01-01
There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data may lead to worse fit to future data. The predictive power of several Rasch and Rasch-related models are discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
The pEst version 2.1 user's manual
NASA Technical Reports Server (NTRS)
Murray, James E.; Maine, Richard E.
1987-01-01
This report is a user's manual for version 2.1 of pEst, a FORTRAN 77 computer program for interactive parameter estimation in nonlinear dynamic systems. The pEst program allows the user complete generality in definig the nonlinear equations of motion used in the analysis. The equations of motion are specified by a set of FORTRAN subroutines; a set of routines for a general aircraft model is supplied with the program and is described in the report. The report also briefly discusses the scope of the parameter estimation problem the program addresses. The report gives detailed explanations of the purpose and usage of all available program commands and a description of the computational algorithms used in the program.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
Mclean, Elizabeth L; Forrester, Graham E
2018-04-01
We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species) scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more negative majority view. Although fishers' and scientific estimates of size at maturity and maximum size parameters sometimes differed, the fact that fishers make routine quantitative assessments of maturity and body size suggests potential for future collaborative monitoring efforts to generate estimates usable by scientists and meaningful to fishers. © 2017 by the Ecological Society of America.
System IDentification Programs for AirCraft (SIDPAC)
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2002-01-01
A collection of computer programs for aircraft system identification is described and demonstrated. The programs, collectively called System IDentification Programs for AirCraft, or SIDPAC, were developed in MATLAB as m-file functions. SIDPAC has been used successfully at NASA Langley Research Center with data from many different flight test programs and wind tunnel experiments. SIDPAC includes routines for experiment design, data conditioning, data compatibility analysis, model structure determination, equation-error and output-error parameter estimation in both the time and frequency domains, real-time and recursive parameter estimation, low order equivalent system identification, estimated parameter error calculation, linear and nonlinear simulation, plotting, and 3-D visualization. An overview of SIDPAC capabilities is provided, along with a demonstration of the use of SIDPAC with real flight test data from the NASA Glenn Twin Otter aircraft. The SIDPAC software is available without charge to U.S. citizens by request to the author, contingent on the requestor completing a NASA software usage agreement.
Rapid earthquake hazard and loss assessment for Euro-Mediterranean region
NASA Astrophysics Data System (ADS)
Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru
2010-10-01
The almost-real time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infra-structures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships.
Estimation of kinetic parameters from list-mode data using an indirect approach
NASA Astrophysics Data System (ADS)
Ortiz, Joseph Christian
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained via sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments, the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system will be exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect approach, which is a two part process, was used. First the compartmental activity was obtained from data, and next the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system as opposed to a traditional binned approach. Using techniques from information theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photo-multiplier tube, for each event, was generated on the fly, which was used in a least squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were obtained using multiple cost functions, and then compared to each other using the mean squared error as the figure of merit.
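A rough sketch of the second stage of such an indirect approach, assuming the compartmental activity curves have already been estimated from the list-mode data: a hypothetical two-compartment model with illustrative rate constants is fitted by least squares. None of the numbers below come from the dissertation.

```python
# Sketch of fitting kinetic (uptake) rate constants to estimated compartmental activity curves.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def two_compartment(y, t, k12, k21, k_el):
    c1, c2 = y                                   # central and peripheral activity
    return [-(k12 + k_el) * c1 + k21 * c2,
            k12 * c1 - k21 * c2]

t = np.linspace(0, 60, 40)                       # minutes (illustrative sampling)
true_k = (0.10, 0.05, 0.02)
activity = odeint(two_compartment, [1.0, 0.0], t, args=true_k)
activity_noisy = activity + 0.01 * np.random.default_rng(1).standard_normal(activity.shape)

def residuals(k):
    model = odeint(two_compartment, [1.0, 0.0], t, args=tuple(k))
    return (model - activity_noisy).ravel()      # least-squares misfit to the activity curves

fit = least_squares(residuals, x0=[0.2, 0.1, 0.05], bounds=(0, 1))
print("estimated rate constants:", fit.x)
```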
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
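As an illustration of the first criterion, the sketch below computes a Gaussian pseudolikelihood score for two candidate working covariance structures for a single cluster; the residuals, variance, and correlation values are invented, and a real application would sum the contributions over all clusters.

```python
# Sketch: comparing candidate working covariance models with a Gaussian pseudolikelihood.
import numpy as np

def gaussian_pseudolikelihood(resid, V):
    """-2 * log Gaussian pseudolikelihood contribution of one cluster (up to a constant)."""
    sign, logdet = np.linalg.slogdet(V)
    return logdet + resid @ np.linalg.solve(V, resid)

def exchangeable(n, sigma2, rho):
    return sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

def ar1(n, sigma2, rho):
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

resid = np.array([0.4, -0.1, 0.3, 0.2])          # residuals y - mu_hat for one subject
candidates = {"exchangeable": exchangeable(4, 0.1, 0.5),
              "AR(1)":        ar1(4, 0.1, 0.5)}
scores = {name: gaussian_pseudolikelihood(resid, V) for name, V in candidates.items()}
print(min(scores, key=scores.get), scores)        # smaller is better
```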
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routine in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can more accurately estimate the sFADE parameters when suitable initial guess values are provided. To sum up, the Bees Algorithm was found to be a very simple, robust, and accurate approach to estimate the transport parameters of the spatial fractional advection-dispersion equation.
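A compact, generic Bees Algorithm sketch of the kind of search used for such parameter estimation; the objective function is a placeholder stand-in for the sFADE misfit, and the bounds, colony sizes, and neighbourhood width are illustrative choices rather than the settings used in the study.

```python
# Sketch: Bees Algorithm - scouts sample the space, elite sites get local search, the rest re-scout.
import numpy as np

rng = np.random.default_rng(42)
lo, hi = np.array([0.5, 0.0]), np.array([2.0, 1.0])    # e.g. bounds on (alpha, dispersion)

def misfit(p):                                          # placeholder objective, not the sFADE misfit
    return np.sum((p - np.array([1.4, 0.3])) ** 2)

n_scouts, n_elite, n_recruits, ngh, n_iter = 30, 5, 10, 0.1, 100
sites = rng.uniform(lo, hi, size=(n_scouts, 2))
for _ in range(n_iter):
    sites = sites[np.argsort([misfit(s) for s in sites])]
    new_sites = []
    for s in sites[:n_elite]:                           # local (neighbourhood) search around elite sites
        recruits = np.clip(s + ngh * (hi - lo) * rng.uniform(-1, 1, (n_recruits, 2)), lo, hi)
        new_sites.append(min(np.vstack([recruits, s[None]]), key=misfit))
    # remaining bees scout new random sites (global search)
    new_sites.extend(rng.uniform(lo, hi, size=(n_scouts - n_elite, 2)))
    sites = np.array(new_sites)
print("best parameters:", min(sites, key=misfit))
```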
Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.
The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required for input into this code. User-defined parameters cover the spectrum of chemical data, meteorological data, agricultural data, and behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENIIv2.10.0 is the current software version that this document supports.
Effects of sampling close relatives on some elementary population genetics analyses.
Wang, Jinliang
2018-01-01
Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (H e ), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., H e ) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties. © 2017 John Wiley & Sons Ltd.
Adeyekun, A A; Orji, M O
2014-04-01
To compare the predictive accuracy of foetal trans-cerebellar diameter (TCD) with those of other biometric parameters in the estimation of gestational age (GA). A cross-sectional study. The University of Benin Teaching Hospital, Nigeria. Four hundred and fifty healthy singleton pregnant women, between 14 and 42 weeks' gestation. Trans-cerebellar diameter (TCD), biparietal diameter (BPD), femur length (FL), abdominal circumference (AC) values across the gestational age range studied. Correlation and predictive values of TCD compared to those of other biometric parameters. The range of values for TCD was 11.9-59.7 mm (mean = 34.2 ± 14.1 mm). TCD correlated more significantly with menstrual age compared with other biometric parameters (r = 0.984, p = 0.000). TCD had a higher predictive accuracy (96.9%, ± 12 days) than BPD (93.8%, ± 14.1 days) and AC (92.7%, ± 15.3 days). TCD has a stronger predictive accuracy for gestational age compared to other routinely used foetal biometric parameters among Nigerian Africans.
2013-01-01
Background Recent studies have found high prevalences of asymptomatic rectal chlamydia among HIV-infected men who have sex with men (MSM). Chlamydia could increase the infectivity of HIV and the susceptibility to HIV infection. We investigate the role of chlamydia in the spread of HIV among MSM and the possible impact of routine chlamydia screening among HIV-infected MSM at HIV treatment centres on the incidence of chlamydia and HIV in the overall MSM population. Methods A mathematical model was developed to describe the transmission of HIV and chlamydia among MSM. Parameters relating to sexual behaviour were estimated from data from the Amsterdam Cohort Study among MSM. Uncertainty analysis was carried out for model parameters without confident estimates. The effects of different screening strategies for chlamydia were investigated. Results Among all new HIV infections in MSM, 15% can be attributed to chlamydia infection. Introduction of routine chlamydia screening every six months among HIV-infected MSM during regular HIV consultations can reduce the incidence of both infections among MSM: after 10 years, the relative percentage reduction in chlamydia incidence would be 15% and in HIV incidence 4%, compared to the current situation. Chlamydia screening is more effective in reducing HIV incidence with more frequent screening and with higher participation of the most risky MSM in the screening program. Conclusions Chlamydia infection could contribute to the transmission of HIV among MSM. Preventive measures reducing chlamydia prevalence, such as routine chlamydia screening of HIV-infected MSM, can result in a decline in the incidence of chlamydia and HIV. PMID:24047261
Nader, Ahmed; Zahran, Noran; Alshammaa, Aya; Altaweel, Heba; Kassem, Nancy; Wilby, Kyle John
2017-04-01
Clinical response to methotrexate in cancer is variable and depends on several factors including serum drug exposure. This study aimed to develop a population pharmacokinetic model describing methotrexate disposition in cancer patients using retrospective chart review data available from routine clinical practice. A retrospective review of medical records was conducted for cancer patients in Qatar. Relevant data (methotrexate dosing/concentrations from multiple occasions, patient history, and laboratory values) were extracted and analyzed using NONMEM VII®. A population pharmacokinetic model was developed and used to estimate inter-individual and inter-occasion variability terms on methotrexate pharmacokinetic parameters, as well as patient factors affecting methotrexate pharmacokinetics. Methotrexate disposition was described by a two-compartment model with clearance (CL) of 15.7 L/h and central volume of distribution (Vc) of 79.2 L. Patient weight and hematocrit levels were significant covariates on methotrexate Vc and CL, respectively. Methotrexate CL changed by 50% with changes in hematocrit levels from 23% to 50%. Inter-occasion variability in methotrexate CL was estimated for patients administered the drug on multiple occasions (48% and 31% for 2nd and 3rd visits, respectively). Therapeutic drug monitoring data collected during routine clinical practice can provide a useful tool for understanding factors affecting methotrexate pharmacokinetics. Patient weight and hematocrit levels may play a clinically important role in determining methotrexate serum exposure and dosing requirements. Future prospective studies are needed to validate results of the developed model and evaluate its usefulness to predict methotrexate exposure and optimize dosing regimens.
NASA Technical Reports Server (NTRS)
Bjorkman, W. S.; Uphoff, C. W.
1973-01-01
This Parameter Estimation Supplement describes the PEST computer program and gives instructions for its use in determination of lunar gravitation field coefficients. PEST was developed for use in the RAE-B lunar orbiting mission as a means of lunar field recovery. The observations processed by PEST are short-arc osculating orbital elements. These observations are the end product of an orbit determination process obtained with another program. PEST's end product is a set of harmonic coefficients to be used in long-term prediction of the lunar orbit. PEST employs some novel techniques in its estimation process, notably a square batch estimator and linear variational equations in the orbital elements (both osculating and mean) for measurement sensitivities. The program's capabilities are described, and operating instructions and input/output examples are given. PEST utilizes MAESTRO routines for its trajectory propagation. PEST's program structure and subroutines which are not common to MAESTRO are described. Some of the theoretical background information for the estimation process and a derivation of linear variational equations for the Method 7 elements are included.
NASA Astrophysics Data System (ADS)
Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon
2015-08-01
For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.
Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.
Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H
2010-02-01
Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group, number of daily milkings and age of cow at calving as covariate (linear and quadratic effect). In addition, residual variances were considered to be heterogeneous with 6 classes of variance. Models were selected based on the residual mean square error, weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
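A small sketch of how the Legendre covariables of such a random regression test-day model can be built, here for the recommended fourth-order additive and sixth-order permanent environmental effects; the standardisation of week of lactation to [-1, 1] and the week range are assumptions for illustration.

```python
# Sketch: Legendre-polynomial covariables for a random regression test-day model.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariables(week, order, wmin=1, wmax=44):
    """Row of Legendre polynomial values (orders 0..order) at a standardised week of lactation."""
    x = -1 + 2 * (week - wmin) / (wmax - wmin)          # map week of lactation to [-1, 1]
    return np.array([legendre.Legendre.basis(k)(x) for k in range(order + 1)])

weeks = np.arange(1, 45)
Z_additive = np.vstack([legendre_covariables(w, 4) for w in weeks])   # 4th-order additive genetic
Z_pe       = np.vstack([legendre_covariables(w, 6) for w in weeks])   # 6th-order permanent environment
print(Z_additive.shape, Z_pe.shape)                                    # (44, 5) (44, 7)
```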
Leischik, Roman; Littwitz, Henning; Dworrak, Birgit; Garg, Pankaj; Zhu, Meihua; Sahn, David J; Horlitz, Marc
2015-01-01
Left atrial (LA) functional analysis has an established role in assessing left ventricular diastolic function. The current standard echocardiographic parameters used to study left ventricular diastolic function include pulsed-wave Doppler mitral inflow analysis, tissue Doppler imaging measurements, and LA dimension estimation. However, the above-mentioned parameters do not directly quantify LA performance. Deformation studies using strain and strain-rate imaging to assess LA function were validated in previous research, but this technique is not currently used in routine clinical practice. This review discusses the history, importance, and pitfalls of strain technology for the analysis of LA mechanics.
Candela-Toha, Ángel; Pardo, María Carmen; Pérez, Teresa; Muriel, Alfonso; Zamora, Javier
2018-04-20
Background and objective: Acute kidney injury (AKI) diagnosis is still based on serum creatinine and diuresis. However, increases in creatinine are typically delayed 48 h or longer after injury. Our aim was to determine the utility of routine postoperative renal function blood tests to predict AKI one or 2 days in advance in a cohort of cardiac surgery patients. Using a prospective database, we selected a sample of patients who had undergone major cardiac surgery between January 2002 and December 2013. The ability of the parameters to predict AKI was based on Acute Kidney Injury Network serum creatinine criteria. A cohort of 3,962 cases was divided into 2 groups of similar size, one being exploratory and the other a validation sample. The exploratory group was used to show primary objectives and the validation group to confirm results. The ability of several kidney function parameters measured in routine postoperative blood tests to predict AKI was measured with time-dependent ROC curves. The primary endpoint was time from measurement to AKI diagnosis. AKI developed in 610 (30.8%) and 623 (31.4%) patients in the exploratory and validation samples, respectively. Estimated glomerular filtration rate using the MDRD-4 equation showed the best AKI prediction capacity, with values for the AUC ROC curves between 0.700 and 0.946. We obtained different cut-off values for estimated glomerular filtration rate depending on the degree of AKI severity and on the time elapsed between surgery and parameter measurement. Results were confirmed in the validation sample. Postoperative estimated glomerular filtration rate using the MDRD-4 equation showed good ability to predict AKI following cardiac surgery one or 2 days in advance. Copyright © 2018 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.
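For reference, a sketch of the 4-variable MDRD estimate of glomerular filtration rate used as the predictor above. The constant 175 corresponds to the IDMS-traceable form of the equation; the abstract does not state which calibration was used, so treat that constant (and the example inputs) as assumptions.

```python
# Sketch: 4-variable MDRD estimated glomerular filtration rate (IDMS-traceable form assumed).
def egfr_mdrd4(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL), age and demographics."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(egfr_mdrd4(1.4, 68, female=True), 1))   # illustrative patient
```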
NASA Astrophysics Data System (ADS)
Abell, J. T.; Jacobsen, J.; Bjorkstedt, E.
2016-02-01
Determining aragonite saturation state (Ω) in seawater requires measurement of two parameters of the carbonate system: most commonly dissolved inorganic carbon (DIC) and total alkalinity (TA). The routine measurement of DIC and TA is not always possible on frequently repeated hydrographic lines or at moored-time series that collect hydrographic data at short time intervals. In such cases a proxy can be developed that relates the saturation state as derived from one time or infrequent DIC and TA measurements (Ωmeas) to more frequently measured parameters such as dissolved oxygen (DO) and temperature (Temp). These proxies are generally based on best-fit parameterizations that utilize references values of DO and Temp and adjust linear coefficients until the error between the proxy-derived saturation state (Ωproxy) and Ωmeas is minimized. Proxies have been used to infer Ω from moored hydrographic sensors and gliders which routinely collect DO and Temp data but do not include carbonate parameter measurements. Proxies can also calculate Ω in regional oceanographic models which do not explicitly include carbonate parameters. Here we examine the variability and accuracy of Ωproxy along a near-shore hydrographic line and a moored-time series stations at Trinidad Head, CA. The saturation state is determined using proxies from different coastal regions of the California Current Large Marine Ecosystem and from different years of sampling along the hydrographic line. We then calculate the variability and error associated with the use of different proxy coefficients, the sensitivity to reference values and the inclusion of additional variables. We demonstrate how this variability affects estimates of the intensity and duration of exposure to aragonite corrosive conditions on the near-shore shelf and in the water column.
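A minimal sketch of the kind of best-fit proxy described above: aragonite saturation state is regressed on dissolved oxygen and temperature anomalies relative to reference values. The data points, reference values, and the purely linear form are illustrative assumptions, not the parameterizations evaluated in the study.

```python
# Sketch: linear proxy Omega = c0 + c1*(DO - DO_ref) + c2*(Temp - Temp_ref) fitted by least squares.
import numpy as np

# Omega measured from discrete DIC/TA samples, with co-located DO (umol/kg) and Temp (deg C)
omega_meas = np.array([1.1, 0.9, 1.6, 2.0, 1.3])
do         = np.array([110., 80., 190., 240., 140.])
temp       = np.array([7.5, 7.0, 9.0, 10.5, 8.0])
do_ref, temp_ref = 140.0, 8.0                           # reference values (assumed)

X = np.column_stack([np.ones_like(do), do - do_ref, temp - temp_ref])
coef, *_ = np.linalg.lstsq(X, omega_meas, rcond=None)   # minimise Omega_proxy - Omega_meas misfit
omega_proxy = X @ coef
print("coefficients:", coef, " rmse:", np.sqrt(np.mean((omega_proxy - omega_meas) ** 2)))
```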
Wang, Xuan; Tandeo, Pierre; Fablet, Ronan; Husson, Romain; Guan, Lei; Chen, Ge
2016-01-01
The swell propagation model built on geometric optics is known to work well when simulating radiated swells from a far located storm. Based on this simple approximation, satellites have acquired plenty of large samples on basin-traversing swells induced by fierce storms situated in mid-latitudes. How to routinely reconstruct swell fields with these irregularly sampled observations from space via known swell propagation principle requires more examination. In this study, we apply 3-h interval pseudo SAR observations in the ensemble Kalman filter (EnKF) to reconstruct a swell field in ocean basin, and compare it with buoy swell partitions and polynomial regression results. As validated against in situ measurements, EnKF works well in terms of spatial–temporal consistency in far-field swell propagation scenarios. Using this framework, we further address the influence of EnKF parameters, and perform a sensitivity analysis to evaluate estimations made under different sets of parameters. Such analysis is of key interest with respect to future multiple-source routinely recorded swell field data. Satellite-derived swell data can serve as a valuable complementary dataset to in situ or wave re-analysis datasets. PMID:27898005
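A bare-bones sketch of a stochastic ensemble Kalman filter analysis step of the sort used to assimilate such observations; the state dimension, observation operator, error covariances, and perturbed-observation formulation are illustrative choices, not the swell-field configuration of the paper.

```python
# Sketch: one stochastic EnKF analysis step with a toy state and sparse observations.
import numpy as np

rng = np.random.default_rng(7)
n_state, n_obs, n_ens = 50, 8, 30
ensemble = rng.standard_normal((n_state, n_ens))        # forecast ensemble (columns = members)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), rng.choice(n_state, n_obs, False)] = 1.0   # observe a few state locations
R = 0.1 * np.eye(n_obs)                                  # observation error covariance
y = rng.standard_normal(n_obs)                           # observations at the sampled locations

Xm = ensemble.mean(axis=1, keepdims=True)
Xp = ensemble - Xm                                       # ensemble perturbations
S = H @ Xp
K = Xp @ S.T @ np.linalg.inv(S @ S.T + (n_ens - 1) * R)  # Kalman gain from ensemble statistics
# perturbed observations keep the analysis spread statistically consistent
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
analysis = ensemble + K @ (Y - H @ ensemble)
print("analysis ensemble mean shape:", analysis.mean(axis=1).shape)
```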
Mark E. Harmon; Christopher W. Woodall; Becky Fasth; Jay Sexton; Misha Yatkov
2011-01-01
Woody detritus or dead wood is an important part of forest ecosystems and has become a routine facet of forest monitoring and inventory. Biomass and carbon estimates of dead wood depend on knowledge of species- and decay class-specific density or density reduction factors. While some progress has been made in determining these parameters for dead and downed trees (DD...
A dynamical-systems approach for computing ice-affected streamflow
Holtschlag, David J.
1996-01-01
A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.
Akumu, Angela Oloo; English, Mike; Scott, J Anthony G; Griffiths, Ulla K
2007-07-01
Haemophilus influenzae type b (Hib) vaccine was introduced into routine immunization services in Kenya in 2001. We aimed to estimate the cost-effectiveness of Hib vaccine delivery. A model was developed to follow the Kenyan 2004 birth cohort until death, with and without Hib vaccine. Incidence of invasive Hib disease was estimated at Kilifi District Hospital and in the surrounding demographic surveillance system in coastal Kenya. National Hib disease incidence was estimated by adjusting incidence observed by passive hospital surveillance using assumptions about access to care. Case fatality rates were also assumed dependent on access to care. A price of US$ 3.65 per dose of pentavalent diphtheria-tetanus-pertussis-hep B-Hib vaccine was used. Multivariate Monte Carlo simulations were performed in order to assess the impact on the cost-effectiveness ratios of uncertainty in parameter values. The introduction of Hib vaccine reduced the estimated incidence of Hib meningitis per 100,000 children aged < 5 years from 71 to 8; of Hib non-meningitic invasive disease from 61 to 7; and of non-bacteraemic Hib pneumonia from 296 to 34. The costs per discounted disability adjusted life year (DALY) and per discounted death averted were US$ 38 (95% confidence interval, CI: 26-63) and US$ 1197 (95% CI: 814-2021) respectively. Most of the uncertainty in the results was due to uncertain access to care parameters. The break-even pentavalent vaccine price--where incremental Hib vaccination costs equal treatment costs averted from Hib disease--was US$ 1.82 per dose. Hib vaccine is a highly cost-effective intervention in Kenya. It would be cost-saving if the vaccine price was below half of its present level.
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator to the one delivered publicly by the WMAP team. Our pixel-based estimator handles exactly intensity and polarization in a joint manner, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and the performances of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences on cosmological parameters evaluated by the full WMAP likelihood public package. The differences are not only due to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The credible central value for the cosmological parameters change below the 1σ level with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference in a shift to smaller values of the scalar spectral index nS.
Quantifying Selection with Pool-Seq Time Series Data.
Taus, Thomas; Futschik, Andreas; Schlötterer, Christian
2017-11-01
Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations several studies proposed experimental parameters to optimize the identification of the selection targets. No such recommendations are available for the underlying parameters selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Eaton, Jeffrey W.; Bao, Le
2017-01-01
Objectives: The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design: Mathematical model fitted to surveillance data with Bayesian inference. Methods: We introduce a variance inflation parameter σ_infl^2 that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ_infl^2 using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results: Introducing the additional variance parameter σ_infl^2 increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9–3.8). Using only sampling error in ANC-SS prevalence (σ_infl^2 = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ_infl^2. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ_infl^2 did not increase the computational cost of model fitting. Conclusions: We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources such as routine prevalence from prevention of mother-to-child transmission testing into future epidemic estimates. PMID:28296801
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under the ROC/EROC curves were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
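A toy sketch of a channelized Hotelling observer computation: images are reduced to channel outputs, the Hotelling template is formed from the class means and the average channel covariance, and detectability is summarised by an SNR and its Gaussian-equivalent AUC. The random channel matrix and synthetic images are placeholders; a real CHO would use, for example, Gabor or Laguerre-Gauss channels and CT image data.

```python
# Sketch: channelized Hotelling observer on synthetic signal-present/absent images.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_pix, n_ch, n_img = 64 * 64, 5, 200
U = rng.standard_normal((n_pix, n_ch))                  # placeholder channel matrix
signal = np.zeros(n_pix)
signal[:50] = 0.5                                       # toy lesion profile

absent  = rng.standard_normal((n_img, n_pix))
present = absent + signal                               # signal-known-exactly pairs
v_a, v_p = absent @ U, present @ U                      # channel outputs

S = 0.5 * (np.cov(v_a, rowvar=False) + np.cov(v_p, rowvar=False))
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))       # channelized Hotelling template
t_a, t_p = v_a @ w, v_p @ w
snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1)))
auc = norm.cdf(snr / np.sqrt(2))                        # Gaussian-equivalent area under the ROC curve
print(f"CHO detectability SNR = {snr:.2f}, AUC = {auc:.3f}")
```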
ADMAP (automatic data manipulation program)
NASA Technical Reports Server (NTRS)
Mann, F. I.
1971-01-01
Instructions are presented on the use of ADMAP (automatic data manipulation program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single parameter and double parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.
Son, H S; Hong, Y S; Park, W M; Yu, M A; Lee, C H
2009-03-01
To estimate true Brix and alcoholic strength of must and wines without distillation, a novel approach using a refractometer and a hydrometer was developed. Initial Brix (I.B.), apparent refractometer Brix (A.R.), and apparent hydrometer Brix (A.H.) of must were measured by refractometer and hydrometer, respectively. Alcohol content (A) was determined with a hydrometer after distillation and true Brix (T.B.) was measured in distilled wines using a refractometer. Strong proportional correlations among A.R., A.H., T.B., and A in sugar solutions containing varying alcohol concentrations were observed in preliminary experiments. Similar proportional relationships among the parameters were also observed in must, which is a far more complex system than the sugar solution. To estimate T.B. and A of must during alcoholic fermentation, a total of 6 planar equations were empirically derived from the relationships among the experimental parameters. The empirical equations were then tested to estimate T.B. and A in 17 wine products, and resulted in good estimations of both quality factors. This novel approach was rapid, easy, and practical for use in routine analyses or for monitoring quality of must during fermentation and final wine products in a winery and/or laboratory.
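A small sketch of how one such empirical planar equation could be derived: true Brix is regressed on the apparent refractometer and hydrometer readings by least squares. The calibration points and resulting coefficients are invented for illustration and are not the equations reported in the paper.

```python
# Sketch: fitting a planar relation T.B. = c0 + c1*A.R. + c2*A.H. to calibration data.
import numpy as np

ar = np.array([10.2, 8.5, 6.1, 4.0, 2.2])      # apparent refractometer Brix
ah = np.array([9.0, 6.8, 4.1, 1.9, -0.5])      # apparent hydrometer Brix
tb = np.array([9.5, 7.4, 4.8, 2.6, 0.4])       # true Brix measured after distillation

X = np.column_stack([np.ones_like(ar), ar, ah])
coef, *_ = np.linalg.lstsq(X, tb, rcond=None)   # least-squares plane through the calibration points
print("planar coefficients:", coef)
print("estimated true Brix at A.R.=7.0, A.H.=5.0:", coef @ [1.0, 7.0, 5.0])
```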
Parameter Balancing in Kinetic Models of Cell Metabolism†
2010-01-01
Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model could be collected from ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to guess realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing with models provided in SBML format (Systems Biology Markup Language) is accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890
Seasonal station variations in the Vienna VLBI terrestrial reference frame VieTRF16a
NASA Astrophysics Data System (ADS)
Krásná, Hana; Böhm, Johannes; Madzak, Matthias
2017-04-01
The special analysis center of the International Very Long Baseline Interferometry (VLBI) Service for Geodesy and Astrometry (IVS) at TU Wien (VIE) routinely analyses the VLBI measurements and estimates its own Terrestrial Reference Frame (TRF) solutions. We present our latest solution VieTRF16a (1979.0 - 2016.5) computed with the software VieVS version 3.0. Several recent updates of the software have been applied, e.g., the estimation of annual and semi-annual station variations as global parameters. The VieTRF16a is determined in the form of the conventional model (station position and its linear velocity) simultaneously with the celestial reference frame and Earth orientation parameters. In this work, we concentrate on the seasonal station variations in the residual time series and compare our TRF with the three combined TRF solutions ITRF2014, DTRF2014 and JTRF2014.
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
Reliable evaluation of the quantal determinants of synaptic efficacy using Bayesian analysis
Beato, M.
2013-01-01
Communication between neurones in the central nervous system depends on synaptic transmission. The efficacy of synapses is determined by pre- and postsynaptic factors that can be characterized using quantal parameters such as the probability of neurotransmitter release, number of release sites, and quantal size. Existing methods of estimating the quantal parameters based on multiple probability fluctuation analysis (MPFA) are limited by their requirement for long recordings to acquire substantial data sets. We therefore devised an algorithm, termed Bayesian Quantal Analysis (BQA), that can yield accurate estimates of the quantal parameters from data sets of as small a size as 60 observations for each of only 2 conditions of release probability. Computer simulations are used to compare its performance in accuracy with that of MPFA, while varying the number of observations and the simulated range in release probability. We challenge BQA with realistic complexities characteristic of complex synapses, such as increases in the intra- or intersite variances, and heterogeneity in release probabilities. Finally, we validate the method using experimental data obtained from electrophysiological recordings to show that the effect of an antagonist on postsynaptic receptors is correctly characterized by BQA by a specific reduction in the estimates of quantal size. Since BQA routinely yields reliable estimates of the quantal parameters from small data sets, it is ideally suited to identify the locus of synaptic plasticity for experiments in which repeated manipulations of the recording environment are unfeasible. PMID:23076101
Kashcheev, Valery V; Pryakhin, Evgeny A; Menyaylo, Alexander N; Chekin, Sergey Yu; Ivanov, Viktor K
2014-06-01
The current study has two aims: the first is to quantify the difference between radiation risks estimated with the use of organ or effective doses, particularly when planning pediatric and adult computed tomography (CT) examinations. The second aim is to determine the method of calculating organ doses and cancer risk using dose-length product (DLP) for typical routine CT examinations. In both cases, the radiation-induced cancer risks from medical CT examinations were evaluated as a function of gender and age. Lifetime attributable risk values from CT scanning were estimated with the use of ICRP (Publication 103) risk models and Russian national medical statistics data. For populations under the age of 50 y, the risk estimates based on organ doses usually are 30% higher than estimates based on effective doses. In older populations, the difference can be up to a factor of 2.5. The typical distributions of organ doses were defined for Chest Routine, Abdominal Routine, and Head Routine examinations. The distributions of organ doses were dependent on the anatomical region of scanning. The most exposed organs/tissues were thyroid, breast, esophagus, and lungs in cases of Chest Routine examination; liver, stomach, colon, ovaries, and bladder in cases of Abdominal Routine examination; and brain for Head Routine examinations. The conversion factors for calculation of typical organ doses or tissues at risk using DLP were determined. Lifetime attributable risk of cancer estimated with organ doses calculated from DLP was compared with the risk estimated on the basis of organ doses measured with the use of silicon photodiode dosimeters. The estimated difference in LAR is less than 29%.
Mc Hugh, N; Evans, R D; Amer, P R; Fahey, A G; Berry, D P
2011-01-01
Beef outputs from dairy farms make an important contribution to overall profitability in Irish dairy herds and are the sole source of revenue in many beef herds. The aim of this study was to estimate genetic parameters for animal BW and price across different stages of maturity. Data originated from 2 main sources: price and BW from livestock auctions and BW from on-farm weighings between 2000 and 2008. The data were divided into 4 distinct maturity categories: calves (n = 24,513), weanlings (n = 27,877), postweanlings (n = 23,279), and cows (n = 4,894). A univariate animal model used to estimate variance components was progressively built up to include a maternal genetic effect and a permanent environmental maternal effect. Bivariate analyses were used to estimate genetic covariances between BW and price per animal within and across maturity category. Direct heritability estimates for price per animal were 0.34 ± 0.03, 0.31 ± 0.05, 0.19 ± 0.04, and 0.10 ± 0.04 for calves, weanling, postweanlings, and cows, respectively. Direct heritability estimates for BW were 0.26 ± 0.03 for weanlings, 0.25 ± 0.04 for postweanlings, and 0.24 ± 0.06 for cows; no BW data were available on calves. Significant maternal genetic and maternal permanent environmental effects were observed for weanling BW only. The genetic correlation between price per animal and BW within each maturity group varied from 0.55 ± 0.06 (postweanling price and BW) to 0.91 ± 0.04 (cow price and BW). The availability of routinely collected data, along with the existence of ample genetic variation for animal BW and price per animal, facilitates their inclusion in Irish dairy and beef breeding objectives to better reflect the profitability of both enterprises.
Groundwater flow and transport modeling
Konikow, Leonard F.; Mercer, J.W.
1988-01-01
Deterministic, distributed-parameter, numerical simulation models for analyzing groundwater flow and transport problems have come to be used almost routinely during the past decade. A review of the theoretical basis and practical use of groundwater flow and solute transport models is used to illustrate the state-of-the-art. Because of errors and uncertainty in defining model parameters, models must be calibrated to obtain a best estimate of the parameters. For flow modeling, data generally are sufficient to allow calibration. For solute-transport modeling, lack of data not only limits calibration, but also causes uncertainty in process description. Where data are available, model reliability should be assessed on the basis of sensitivity tests and measures of goodness-of-fit. Some of these concepts are demonstrated by using two case histories. © 1988.
Forcing Regression through a Given Point Using Any Familiar Computational Routine.
1983-03-01
a linear model, Y = α + βX + ε (Model I), then adopt the principle of least squares and use sample data to estimate the unknown parameters, α and β ... has an expected value of zero indicates that the "average" response is considered linear. If ε varies widely, Model I, though conceptually correct, may ... relationship is linear from the maximum observed x to x - a, then Model II should be used. To proceed with the customary evaluation of Model I would be
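A short sketch of the standard way to force a least-squares line through a specified point with any ordinary regression routine, as the report title describes: shift the origin to that point and fit a no-intercept regression. The data and the constraint point are illustrative.

```python
# Sketch: forcing a fitted line through a given point (x0, y0) with an ordinary least-squares routine.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
x0, y0 = 0.0, 0.0                                        # point the line must pass through

slope = np.linalg.lstsq((x - x0)[:, None], y - y0, rcond=None)[0][0]   # no-intercept fit on shifted data
intercept = y0 - slope * x0                              # back-transform: y = intercept + slope * x
print(f"y = {intercept:.3f} + {slope:.3f} x (passes through ({x0}, {y0}))")
```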
NASA Technical Reports Server (NTRS)
Mullins, N. E.; Dao, N. C.; Martin, T. V.; Goad, C. C.; Boulware, N. L.; Chin, M. M.
1972-01-01
A computer program for an executive control routine for orbit integration of artificial satellites is presented. At the beginning of each arc, the program initializes the required constants as well as the variational partials at epoch. If epoch needs to be reset to a previous time, the program negates the stepsize and calls for integration backward to the desired time. After backward integration is completed, the program resets the stepsize to the proper positive quantity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An Apple IIe microcomputer is being used to collect data and to control a pyrolysis system. Pyrolysis data for bitumen and kerogen are widely used to estimate source rock maturity. For a detailed analysis of kinetic parameters, however, data must be obtained more precisely than for routine pyrolysis. The authors discuss the program which controls the temperature ramp of the furnace that heats the sample, and collects data from a thermocouple in the furnace and from the flame ionization detector measuring evolved hydrocarbons. These data are stored on disk for later use by programs that display the results of the experiment or calculate kinetic parameters. The program is written in Applesoft BASIC with subroutines in Apple assembler for speed and efficiency.
Weiss, Christian; Zoubir, Abdelhak M
2017-05-01
We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
Ladtap XL Version 2017: A Spreadsheet For Estimating Dose Resulting From Aqueous Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minter, K.; Jannik, T.
LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user's manual was documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059), and revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification to Version 2013, with minor changes made for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual has been produced to update the code description, document verification of the models, and provide an updated user's manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).
Flood Frequency Analysis With Historical and Paleoflood Information
NASA Astrophysics Data System (ADS)
Stedinger, Jery R.; Cohn, Timothy A.
1986-05-01
An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
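A condensed sketch of a maximum likelihood fit of this kind, combining a systematic gauge record with "binomial" historical information (only the number of threshold exceedances in a historical period is known), for a two-parameter lognormal distribution; all flows, counts, and the threshold are invented for illustration.

```python
# Sketch: lognormal flood-frequency MLE with a gauged record plus binomial (threshold) historical data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

gauged = np.log(np.array([120., 95., 310., 180., 240., 150., 400., 210.]))  # log annual peak flows
threshold, h_years, k_exceed = np.log(500.0), 100, 3     # 3 exceedances of 500 in 100 historical years

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(gauged, mu, sigma).sum()            # systematic (gauged) record
    p_exc = norm.sf(threshold, mu, sigma)                # P(annual peak exceeds the historical threshold)
    ll += k_exceed * np.log(p_exc) + (h_years - k_exceed) * np.log1p(-p_exc)   # binomial historical term
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(200.0), 0.0])
print("lognormal mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```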
Analysis of life tables with grouping and withdrawals.
Lindley, D V
1979-09-01
A number of individuals is observed at the beginning of a period. At the end of the period the number surviving, the number who have died, and the number who have withdrawn are noted. From these three numbers it is required to estimate the death rate for the period. All relevant quantities are supposed independent and identically distributed for the individuals. The likelihood is calculated and found to depend on two parameters, other than the death rate, and to be unidentifiable, so that no consistent estimators exist. For large numbers, the posterior distribution of the death rate is approximated by a normal distribution whose mean is the root of a quadratic equation and whose variance is the sum of two terms; the first is proportional to the reciprocal of the number of individuals, as usually happens with a consistent estimator; the second does not tend to zero and depends on initial opinions about one of the nuisance parameters. The paper is a simple exercise in the routine use of coherent, Bayesian methodology. Numerical calculations illustrate the results.
A statistical model of diurnal variation in human growth hormone
NASA Technical Reports Server (NTRS)
Klerman, Elizabeth B.; Adler, Gail K.; Jin, Moonsoo; Maliszewski, Anne M.; Brown, Emery N.
2003-01-01
The diurnal pattern of growth hormone (GH) serum levels depends on the frequency and amplitude of GH secretory events, the kinetics of GH infusion into and clearance from the circulation, and the feedback of GH on its secretion. We present a two-dimensional linear differential equation model based on these physiological principles to describe GH diurnal patterns. The model characterizes the onset times of the secretory events, the secretory event amplitudes, as well as the infusion, clearance, and feedback half-lives of GH. We illustrate the model by using maximum likelihood methods to fit it to GH measurements collected in 12 normal, healthy women during 8 h of scheduled sleep and a 16-h circadian constant-routine protocol. We assess the importance of the model components by using parameter standard error estimates and Akaike's Information Criterion. During sleep, both the median infusion and clearance half-life estimates were 13.8 min, and the median number of secretory events was 2. During the constant routine, the median infusion half-life estimate was 12.6 min, the median clearance half-life estimate was 11.7 min, and the median number of secretory events was 5. The infusion and clearance half-life estimates and the number of secretory events are consistent with current published reports. Our model gave an excellent fit to each GH data series. Our analysis paradigm suggests an approach to decomposing GH diurnal patterns that can be used to characterize the physiological properties of this hormone under normal and pathological conditions.
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len
2016-01-01
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
A new model to estimate insulin resistance via clinical parameters in adults with type 1 diabetes.
Zheng, Xueying; Huang, Bin; Luo, Sihui; Yang, Daizhi; Bao, Wei; Li, Jin; Yao, Bin; Weng, Jianping; Yan, Jinhua
2017-05-01
Insulin resistance (IR) is a risk factor for the development of micro- and macro-vascular complications in type 1 diabetes (T1D). However, diabetes management in adults with T1D is limited by the lack of simple and reliable methods to estimate insulin resistance. The aim of this study was to develop a new model to estimate IR from clinical parameters in adults with T1D. A total of 36 adults with adulthood-onset T1D (n = 20) or childhood-onset T1D (n = 16) were recruited by quota sampling. After an overnight insulin infusion to stabilize the blood glucose at 5.6 to 7.8 mmol/L, they underwent a 180-minute euglycemic-hyperinsulinemic clamp. Glucose disposal rate (GDR, mg kg⁻¹ min⁻¹) was calculated from data collected during the last 30 minutes of the test. Demographic factors (age, sex, and diabetes duration) and metabolic parameters (blood pressure, glycated hemoglobin A1c [HbA1c], waist-to-hip ratio [WHR], and lipids) were collected to evaluate insulin resistance. Age at diabetes onset and clinical parameters were then used to develop a model to estimate lnGDR by stepwise linear regression. The stepwise process yielded a best model to estimate insulin resistance that included HbA1c, diastolic blood pressure, and WHR. Age at diabetes onset did not enter any of the models. We propose the following new model to estimate IR as lnGDR for adults with T1D: lnGDR = 4.964 - 0.121 × HbA1c (%) - 0.012 × diastolic blood pressure (mmHg) - 1.409 × WHR (adjusted R² = 0.616, P < .01). Insulin resistance in adults living with T1D can be estimated using routinely collected clinical parameters. This simple model provides a potential tool for estimating IR in large-scale epidemiological studies of adults with T1D regardless of age at onset. Copyright © 2016 John Wiley & Sons, Ltd.
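The reported regression translates directly into code; the sketch below simply evaluates the published equation and back-transforms lnGDR to GDR (units as stated in the abstract; the example input values are illustrative).

```python
import math

def estimated_gdr(hba1c_percent: float, dbp_mmHg: float, whr: float) -> float:
    """Estimated glucose disposal rate (mg kg^-1 min^-1) from the reported model:
    lnGDR = 4.964 - 0.121*HbA1c(%) - 0.012*DBP(mmHg) - 1.409*WHR."""
    ln_gdr = 4.964 - 0.121 * hba1c_percent - 0.012 * dbp_mmHg - 1.409 * whr
    return math.exp(ln_gdr)

# Example: HbA1c 8.0 %, diastolic BP 80 mmHg, waist-to-hip ratio 0.85 (illustrative values)
print(round(estimated_gdr(8.0, 80.0, 0.85), 2))   # lower GDR indicates greater insulin resistance
```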
Tsiatis, Anastasios A.; Davidian, Marie; Cao, Weihua
2010-01-01
Summary A routine challenge is that of making inference on parameters in a statistical model of interest from longitudinal data subject to drop out, which are a special case of the more general setting of monotonely coarsened data. Considerable recent attention has focused on doubly robust estimators, which in this context involve positing models for both the missingness (more generally, coarsening) mechanism and aspects of the distribution of the full data, that have the appealing property of yielding consistent inferences if only one of these models is correctly specified. Doubly robust estimators have been criticized for potentially disastrous performance when both of these models are even only mildly misspecified. We propose a doubly robust estimator applicable in general monotone coarsening problems that achieves comparable or improved performance relative to existing doubly robust methods, which we demonstrate via simulation studies and by application to data from an AIDS clinical trial. PMID:20731640
Bringing metabolic networks to life: convenience rate law and thermodynamic constraints
Liebermeister, Wolfram; Klipp, Edda
2006-01-01
Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamical constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network – manually or automatically - into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
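For reference, the convenience rate law for a reaction with substrates S_i (stoichiometries n_i) and products P_j (stoichiometries m_j) takes roughly the following form, reconstructed here from the abstract's description rather than quoted from the paper; the notation, with reactant concentrations scaled by their Michaelis-Menten constants as s̃_i = s_i/K^M_{S_i} and p̃_j = p_j/K^M_{P_j}, is mine.

```latex
v \;=\; E \, f_{\mathrm{reg}} \,
  \frac{k_{\mathrm{cat}}^{+} \prod_i \tilde{s}_i^{\,n_i}
      \;-\; k_{\mathrm{cat}}^{-} \prod_j \tilde{p}_j^{\,m_j}}
     {\prod_i \bigl(1 + \tilde{s}_i + \dots + \tilde{s}_i^{\,n_i}\bigr)
      \;+\; \prod_j \bigl(1 + \tilde{p}_j + \dots + \tilde{p}_j^{\,m_j}\bigr) \;-\; 1}
```

Here E is the enzyme concentration and f_reg collects the activation and inhibition prefactors mentioned in the abstract.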
Relative Pose Estimation Using Image Feature Triplets
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Rottensteiner, F.; Heipke, C.
2015-03-01
A fully automated reconstruction of the trajectory of image sequences using point correspondences is becoming routine practice. However, there are cases in which point features are hardly detectable or cannot be localized in a stable distribution, which consequently leads to insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of the feature integration on the relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one; the agreement is then quantified using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. In order to be able to also work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
Crowdsourcing urban air temperatures through smartphone battery temperatures in São Paulo, Brazil
NASA Astrophysics Data System (ADS)
Droste, Arjan; Pape, Jan-Jaap; Overeem, Aart; Leijnse, Hidde; Steeneveld, Gert-Jan; Van Delden, Aarnout; Uijlenhoet, Remko
2017-04-01
Crowdsourcing as a method to obtain and apply vast datasets is rapidly becoming prominent in meteorology, especially for urban areas where traditional measurements are scarce. Earlier studies showed that smartphone battery temperature readings allow for estimating the daily and city-wide air temperature via a straightforward heat transfer model. This study advances these model estimations by studying spatially and temporally smaller scales. The accuracy of temperature retrievals as a function of the number of battery readings is also studied. An extensive dataset of over 10 million battery temperature readings is available for São Paulo (Brazil), for estimating hourly and daily air temperatures. The air temperature estimates are validated with air temperature measurements from a WMO station, an Urban Fluxnet site, and crowdsourced data from 7 hobby meteorologists' private weather stations. On a daily basis temperature estimates are good, and we show they improve by optimizing model parameters for neighbourhood scales as categorized in Local Climate Zones. Temperature differences between Local Climate Zones can be distinguished from smartphone battery temperatures. When validating the model for hourly temperature estimates, initial results are poor, but are vastly improved by using a diurnally varying parameter function in the heat transfer model rather than one fixed value for the entire day. The obtained results show the potential of large crowdsourced datasets in meteorological studies, and the value of smartphones as a measuring platform when routine observations are lacking.
Chong, Ka Chun; Zee, Benny Chung Ying; Wang, Maggie Haitian
2018-04-10
In an influenza pandemic, arrival times of cases are a proxy of the epidemic size and disease transmissibility. Because of intense surveillance of travelers from infected countries, detection is more rapid and complete than by local surveillance. Travel information can therefore provide a more reliable estimation of transmission parameters. We developed an Approximate Bayesian Computation algorithm to estimate the basic reproduction number (R0), in addition to the reporting rate and unobserved epidemic start time, utilizing travel and routine surveillance data in an influenza pandemic. A simulation was conducted to assess the sampling uncertainty. The estimation approach was further applied to the 2009 influenza A/H1N1 pandemic in Mexico as a case study. In the simulations, we showed that the estimation approach was valid and reliable in different simulation settings. We also found estimates of R0 and the reporting rate to be 1.37 (95% Credible Interval [CI]: 1.26-1.42) and 4.9% (95% CI: 0.1%-18%), respectively, for the 2009 influenza pandemic in Mexico, which were robust to variations in the fixed parameters. The estimated R0 was consistent with that in the literature. This method is useful for officials to obtain reliable estimates of disease transmissibility for strategic planning. We suggest that improvements to the flow of reporting for confirmed cases among patients arriving at different countries are required. Copyright © 2018 Elsevier Ltd. All rights reserved.
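As a loose illustration of the ABC idea only (not the authors' model, which also uses travel data and the epidemic start time), the sketch below simulates a discrete-time stochastic SIR epidemic with a reporting rate and keeps parameter draws whose simulated reported-case count is close to an observed count; all numbers, priors, and the tolerance are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 1_000_000, 60                  # population and days simulated (assumed)
observed_reported = 2_400             # hypothetical cumulative reported cases

def simulate_reported(R0, report_rate, gamma=1/3):
    """Discrete-time stochastic SIR; returns cumulative reported cases after T days."""
    beta = R0 * gamma
    S, I, cum_inf = N - 10, 10, 10
    for _ in range(T):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
        recov = rng.binomial(I, 1 - np.exp(-gamma))
        S, I, cum_inf = S - new_inf, I + new_inf - recov, cum_inf + new_inf
    return rng.binomial(cum_inf, report_rate)

accepted = []
for _ in range(10_000):                               # ABC rejection sampling
    R0 = rng.uniform(1.0, 2.5)                        # prior on R0 (assumed)
    rho = rng.uniform(0.001, 0.2)                     # prior on reporting rate (assumed)
    if abs(simulate_reported(R0, rho) - observed_reported) < 0.1 * observed_reported:
        accepted.append((R0, rho))

post = np.array(accepted)
if post.size:
    print("accepted draws:", len(post),
          " posterior mean R0:", post[:, 0].mean().round(2),
          " reporting rate:", post[:, 1].mean().round(3))
```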
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than those based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and also the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
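As a rough sketch of the idea (not the authors' implementation), the log-likelihood below treats model residuals as an AR(2) process with Gaussian innovations, conditioning on the first two residuals; a term like this would replace the usual i.i.d. Gaussian assumption inside an optimizer or MCMC sampler.

```python
import numpy as np
from scipy.stats import norm

def ar2_loglik(residuals, phi1, phi2, sigma):
    """Conditional Gaussian log-likelihood of residuals under an AR(2) error model:
    r_t = phi1*r_{t-1} + phi2*r_{t-2} + eps_t,  eps_t ~ N(0, sigma^2)."""
    r = np.asarray(residuals)
    innov = r[2:] - phi1 * r[1:-1] - phi2 * r[:-2]
    return norm.logpdf(innov, loc=0.0, scale=sigma).sum()

# Illustrative use: residuals = observed minus simulated soil water content (fake, autocorrelated)
rng = np.random.default_rng(0)
res = rng.standard_normal(500).cumsum() * 0.01
print(ar2_loglik(res, phi1=0.8, phi2=0.1, sigma=0.01))
```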
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
NASA Astrophysics Data System (ADS)
Alekseychik, P. K.; Korrensalo, A.; Mammarella, I.; Vesala, T.; Tuittila, E.-S.
2017-06-01
Leaf area index (LAI) is an important parameter in natural ecosystems, representing the seasonal development of vegetation and photosynthetic potential. However, direct measurement techniques require labor-intensive field campaigns that are usually limited in time, while remote sensing approaches often do not yield reliable estimates. Here we propose that the bulk LAI of sedges (LAIs) can be estimated alternatively from a micrometeorological parameter, the aerodynamic roughness length for momentum (z0). z0 can be readily calculated from high-response turbulence and other meteorological data, typically measured continuously and routinely available at ecosystem research sites. The regressions of LAI versus z0 were obtained using the data from two Finnish natural sites representative of boreal fen and bog ecosystems. LAIs was found to be well correlated with z0 and sedge canopy height. Superior method performance was demonstrated in the fen ecosystem where the sedges make a bigger contribution to overall surface roughness than in bogs.
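A minimal sketch of the chain of reasoning, with made-up numbers: estimate z0 from near-neutral eddy-covariance data via the logarithmic wind profile, then regress sedge LAI on z0 as the abstract proposes. The linear regression form and all values below are assumptions; the paper derives its own site-specific fits.

```python
import numpy as np

def roughness_length(u_mean, u_star, z_meas, d=0.0, kappa=0.4):
    """z0 from the near-neutral log wind profile: u = (u*/kappa) * ln((z - d)/z0)."""
    return (z_meas - d) * np.exp(-kappa * u_mean / u_star)

# Hypothetical half-hourly blocks: mean wind speed and friction velocity at z = 3 m
u_mean = np.array([2.1, 2.8, 3.4, 2.5, 3.0])
u_star = np.array([0.21, 0.30, 0.38, 0.26, 0.33])
z0 = roughness_length(u_mean, u_star, z_meas=3.0)

# Sedge LAI observed at the same times (hypothetical values, m^2 m^-2)
lai_s = np.array([0.25, 0.55, 0.90, 0.40, 0.70])

slope, intercept = np.polyfit(z0, lai_s, deg=1)    # simple linear LAI_s ~ z0 regression
print("z0 [m]:", np.round(z0, 3), " LAI_s ≈ %.2f * z0 + %.2f" % (slope, intercept))
```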
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited amount of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, currently an actual knowledge of the transfer functions is often implicitly assumed. As a matter of fact, for most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, by consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context free grammar concept. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science. But, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a search space for equations. The parametrization of the transfer functions is then achieved through a second optimization routine. The contribution explores different aspects of the described procedure through a set of experiments. These experiments can be divided into three categories: (1) the inference of transfer functions from directly measurable parameters; (2) the estimation of global parameters for given transfer functions from runoff data; and (3) the estimation of sets of completely unknown transfer functions from runoff data. The conducted tests reveal different potentials and limits of the procedure. Concretely, it is shown that examples (1) and (2) work remarkably well. Example (3) is much more dependent on the setup. In general, it can be said that in that case much more data is needed to derive transfer function estimations, even for simple models and setups. References: - Chomsky, N. (1956): Three Models for the Description of Language. IRE Transactions on Information Theory, 2(3), 113-124 - O'Neill, M. (2001): Grammatical Evolution. IEEE Transactions on Evolutionary Computation, Vol. 5, No. 4 - Samaniego, L.; Kumar, R.; Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resources Research, Vol. 46, W05523, doi:10.1029/2008WR007327
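To make the grammar idea concrete, here is a toy context-free grammar (my own illustrative rules, not the authors') that generates candidate transfer functions mapping spatial predictors to a model parameter; grammatical evolution would then search this space, and a second optimization routine would fit the global constants a, b, c against runoff.

```python
import random

# Toy grammar: <expr> expands into combinations of predictors and global constants.
GRAMMAR = {
    "<expr>":  [["<expr>", "+", "<term>"], ["<term>"]],
    "<term>":  [["<const>", "*", "<pred>"], ["<const>"]],
    "<pred>":  [["elevation"], ["slope"], ["log(sand_fraction)"]],
    "<const>": [["a"], ["b"], ["c"]],
}

def expand(symbol, rng, depth=0, max_depth=4):
    """Recursively expand a grammar symbol into an expression string."""
    if symbol not in GRAMMAR:
        return symbol
    options = GRAMMAR[symbol]
    # Force the terminal-leaning (last) option once the recursion gets deep.
    rule = rng.choice(options if depth < max_depth else options[-1:])
    return " ".join(expand(s, rng, depth + 1, max_depth) for s in rule)

rng = random.Random(42)
for _ in range(3):
    print("parameter =", expand("<expr>", rng))
```

The depth limit is a pragmatic guard against unbounded recursion; grammatical evolution handles this differently by mapping a fixed-length genome onto the production rules.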
Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John
2002-05-20
We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
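The least-squares step can be illustrated with a small sketch: fit an anisotropic Gaussian correlation model to synthetic autocorrelation samples of the index-of-refraction fluctuations. The model form, sensor separations, and noise level below are assumptions, not the experiment's actual values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_corr(xy, sigma2, Lx, Ly):
    """Anisotropic Gaussian correlation model: C = sigma^2 * exp(-(dx/Lx)^2 - (dy/Ly)^2)."""
    dx, dy = xy
    return sigma2 * np.exp(-(dx / Lx) ** 2 - (dy / Ly) ** 2)

# Synthetic "measured" autocorrelation on sparse sensor separations (illustrative)
rng = np.random.default_rng(3)
dx = rng.uniform(-10, 10, 200)          # horizontal separations [m]
dy = rng.uniform(-10, 10, 200)          # vertical separations [m]
truth = gaussian_corr((dx, dy), sigma2=1.0, Lx=6.0, Ly=1.5)   # anisotropic: Lx >> Ly
data = truth + 0.05 * rng.standard_normal(dx.size)            # assumed measurement noise

popt, _ = curve_fit(gaussian_corr, (dx, dy), data, p0=[0.5, 3.0, 3.0])
print("estimated sigma^2, Lx, Ly:", np.round(popt, 2))        # Lx != Ly indicates anisotropy
```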
NASA Astrophysics Data System (ADS)
Ma, Ning; Zhang, Yinsheng; Xu, Chong-Yu; Szilagyi, Jozsef
2015-08-01
Quantitative estimation of actual evapotranspiration (ETa) by in situ measurements and mathematical modeling is a fundamental task for physical understanding of ETa as well as the feedback mechanisms between land and the ambient atmosphere. However, the ETa information in the Tibetan Plateau (TP) has been greatly impeded by the extremely sparse ground observation network in the region. Approaches for estimating ETa solely from routine meteorological variables are therefore important for investigating spatiotemporal variations of ETa in the data-scarce region of the TP. Motivated by this need, the complementary relationship (CR) and Penman-Monteith approaches were evaluated against in situ measurements of ETa on a daily basis in an alpine steppe region of the TP. The former includes the Nonlinear Complementary Relationship (Nonlinear-CR) as well as the Complementary Relationship Areal Evapotranspiration (CRAE) models, while the latter involves the Katerji-Perrier and the Todorovic models. Results indicate that the Nonlinear-CR, CRAE, and Katerji-Perrier models are all capable of efficiently simulating daily ETa, provided their parameter values were appropriately calibrated. The Katerji-Perrier model performed best since its site-specific parameters take the soil water status into account. The Nonlinear-CR model also performed well with the advantage of not requiring the user to choose between a symmetric and asymmetric CR. The CRAE model, even with a relatively low Nash-Sutcliffe efficiency (NSE) value, is also an acceptable approach in this data-scarce region as it does not need information of wind speed and ground surface conditions. In contrast, application of the Todorovic model was found to be inappropriate in the dry regions of the TP due to its significant overestimation of ETa as it neglects the effect of water stress on the bulk surface resistance. Sensitivity analysis of the parameter values demonstrated the relative importance of each parameter in the corresponding model. Overall, the Nonlinear-CR model is recommended in the absence of measured ETa for local calibration of the model parameter values.
Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C
2012-06-15
Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Parihar, Sarita; Tripathi, Richik; Parihar, Ajit Vikram; Samadi, Fahad M; Chandra, Akhilesh; Bhavsar, Neeta
2016-01-01
This study was designed to assess the reliability of blood glucose level estimation in gingival crevicular blood (GCB) for screening diabetes mellitus. 70 patients were included in the study in a randomized, double-blind clinical trial. Among these, 39 patients were diabetic (including 4 patients who were diagnosed during the study) and the remaining 31 patients were non-diabetic. GCB obtained during routine periodontal examination was analyzed with a glucometer to determine the blood glucose level. The same patients underwent finger-stick blood (FSB) glucose estimation with a glucometer and venous blood (VB) glucose estimation with a standardized laboratory method as per American Diabetes Association guidelines. All three blood glucose levels were compared. Periodontal parameters were also recorded, including gingival index (GI) and probing pocket depth (PPD). A strong positive correlation (r) was observed between glucose levels of GCB with FSB and VB, with values of 0.986 and 0.972 in the diabetic group and 0.820 and 0.721 in the non-diabetic group. In addition, the mean values of GI and PPD were higher in the diabetic group than in the non-diabetic group, with a statistically significant difference (p < 0.005). GCB can be reliably used to measure blood glucose level, as the values were closest to glucose levels estimated by VB. The technique is safe, easy to perform, and non-invasive to the patient, and can increase the frequency of diagnosing diabetes during routine periodontal therapy.
Conlan, Andrew J. K.; Line, John E.; Hiett, Kelli; Coward, Chris; Van Diemen, Pauline M.; Stevens, Mark P.; Jones, Michael A.; Gog, Julia R.; Maskell, Duncan J.
2011-01-01
Dose–response experiments characterize the relationship between infectious agents and their hosts. These experiments are routinely used to estimate the minimum effective infectious dose for an infectious agent, which is most commonly characterized by the dose at which 50 per cent of challenged hosts become infected—the ID50. In turn, the ID50 is often used to compare between different agents and quantify the effect of treatment regimes. The statistical analysis of dose–response data typically makes the assumption that hosts within a given dose group are independent. For social animals, in particular avian species, hosts are routinely housed together in groups during experimental studies. For experiments with non-infectious agents, this poses no practical or theoretical problems. However, transmission of infectious agents between co-housed animals will modify the observed dose–response relationship with implications for the estimation of the ID50 and the comparison between different agents and treatments. We derive a simple correction to the likelihood for standard dose–response models that allows us to estimate dose–response and transmission parameters simultaneously. We use this model to show that: transmission between co-housed animals reduces the apparent value of the ID50 and increases the variability between replicates leading to a distinctive all-or-nothing response; in terms of the total number of animals used, individual housing is always the most efficient experimental design for ascertaining dose–response relationships; estimates of transmission from previously published experimental data for Campylobacter spp. in chickens suggest that considerable transmission occurred, greatly increasing the uncertainty in the estimates of dose–response parameters reported in the literature. Furthermore, we demonstrate that accounting for transmission in the analysis of dose–response data for Campylobacter spp. challenges our current understanding of the differing response of chickens with respect to host-age and in vivo passage of bacteria. Our findings suggest that the age-dependence of transmissibility between hosts—rather than their susceptibility to colonization—is the mechanism behind the ‘lag-phase’ reported in commercial flocks, which are typically found to be Campylobacter free for the first 14–21 days of life. PMID:21593028
Beda, Alessandro; Güldner, Andreas; Carvalho, Alysson R; Zin, Walter Araujo; Carvalho, Nadja C; Huhle, Robert; Giannella-Neto, Antonio; Koch, Thea; de Abreu, Marcelo Gama
2014-01-01
Measuring esophageal pressure (Pes) using an air-filled balloon catheter (BC) is the common approach to estimate pleural pressure and related parameters. However, Pes is not routinely measured in mechanically ventilated patients, partly due to technical and practical limitations and difficulties. This study aimed at comparing the conventional BC with two alternative methods for Pes measurement, liquid-filled and air-filled catheters without balloon (LFC and AFC), during mechanical ventilation with and without spontaneous breathing activity. Seven female juvenile pigs (32-42 kg) were anesthetized, orotracheally intubated, and a bundle of an AFC, LFC, and BC was inserted in the esophagus. Controlled and assisted mechanical ventilation were applied with positive end-expiratory pressures of 5 and 15 cmH2O, and driving pressures of 10 and 20 cmH2O, in supine and lateral decubitus. Cardiogenic noise in BC tracings was much larger (up to 25% of total power of Pes signal) than in AFC and LFC (<3%). Lung and chest wall elastance, pressure-time product, inspiratory work of breathing, inspiratory change and end-expiratory value of transpulmonary pressure were estimated. The three catheters allowed detecting similar changes in these parameters between different ventilation settings. However, a non-negligible and significant bias between estimates from BC and those from AFC and LFC was observed in several instances. In anesthetized and mechanically ventilated pigs, the three catheters are equivalent when the aim is to detect changes in Pes and related parameters between different conditions, but possibly not when the absolute value of the estimated parameters is of paramount importance. Due to a better signal-to-noise ratio, and considering its practical advantages in terms of easier calibration and simpler acquisition setup, LFC may prove interesting for clinical use.
SPOTting Model Parameters Using a Ready-Made Python Package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz
2017-04-01
The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
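SPOTPY's own documentation describes its actual interface; purely to illustrate what such a calibration loop does, the sketch below runs a plain random search on the Rosenbrock function (one of the paper's test cases) with assumed parameter bounds. This is not SPOTPY code, and the package's algorithms are considerably more sophisticated than uniform sampling.

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock test function; global minimum 0 at x = (1, 1)."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

rng = np.random.default_rng(0)
bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])      # search domain (assumed)

best_x, best_f = None, np.inf
for _ in range(50_000):                            # plain Monte Carlo sampling of the space
    x = rng.uniform(bounds[:, 0], bounds[:, 1])
    f = rosenbrock(x)
    if f < best_f:
        best_x, best_f = x, f

print("best parameters:", np.round(best_x, 3), "objective:", round(best_f, 5))
```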
Estimating the cost-effectiveness of vaccination against herpes zoster in England and Wales.
van Hoek, A J; Gay, N; Melegaro, A; Opstelten, W; Edmunds, W J
2009-02-25
A live-attenuated vaccine against herpes zoster (HZ) has been approved for use, on the basis of a large-scale clinical trial that suggests that the vaccine is safe and efficacious. This study uses a Markov cohort model to estimate whether routine vaccination of the elderly (60+) would be cost-effective, when compared with other uses of health care resources. Vaccine efficacy parameters are estimated by fitting a model to clinical trial data. Estimates of QALY losses due to acute HZ and post-herpetic neuralgia were derived by fitting models to data on the duration of pain by severity and the QoL detriment associated with different severity categories, as reported in a number of different studies. Other parameters (such as cost and incidence estimates) were based on the literature, or UK data sources. The results suggest that vaccination of 65 year olds is likely to be cost-effective (base-case ICER = £20,400 per QALY gained). If the vaccine does offer additional protection against either the severity of disease or the likelihood of developing PHN (as suggested by the clinical trial), then vaccination of all elderly age groups is highly likely to be deemed cost-effective. Vaccination at either 65 or 70 years (depending on assumptions of the vaccine action) is most cost-effective. Including a booster dose at a later age is unlikely to be cost-effective.
Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T
2016-05-01
In this work, we propose an efficient way to evaluate measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The uncertainty is estimated through a robustness test applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into a standard uncertainty value at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the method development was carried out properly. In addition, a statistical comparison of the standard uncertainties obtained after the development stage with those of the validation step indicates that the estimated uncertainties are equivalent. The results obtained clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins and its suitability for routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures in determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solid-phase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.
Hyper-X Post-Flight Trajectory Reconstruction
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Tartabini, Paul V.; Blanchard, RobertC.; Kirsch, Michael; Toniolo, Matthew D.
2004-01-01
This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from flight data.
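As a generic illustration of the extended Kalman filter machinery referred to above (a simple 1-D tracking toy, not the Hyper-X implementation; all noise levels and the slant-range measurement are assumptions), the predict/update loop looks roughly like this:

```python
import numpy as np

dt, d_offset = 0.1, 100.0                      # time step [s] and sensor offset [m] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition: [position, velocity]
Q = np.diag([1e-3, 1e-2])                      # process noise covariance (assumed)
R = np.array([[4.0]])                          # measurement noise covariance (assumed)

def h(x):                                      # nonlinear measurement: slant range to a sensor
    return np.array([np.hypot(x[0], d_offset)])

def H_jac(x):                                  # Jacobian of h at the current estimate
    r = np.hypot(x[0], d_offset)
    return np.array([[x[0] / r, 0.0]])

rng = np.random.default_rng(0)
x_true = np.array([0.0, 50.0])
x_est, P = np.array([0.0, 40.0]), np.diag([10.0, 10.0])

for _ in range(200):
    x_true = F @ x_true                                        # truth propagation
    z = h(x_true) + rng.normal(0, 2.0, size=1)                 # noisy range measurement
    x_est, P = F @ x_est, F @ P @ F.T + Q                      # EKF predict
    H = H_jac(x_est)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)               # Kalman gain
    x_est = x_est + K @ (z - h(x_est))                         # EKF update
    P = (np.eye(2) - K @ H) @ P

print("true state:", np.round(x_true, 1), " estimated:", np.round(x_est, 1))
```

A smoother, as described in the abstract, would additionally run a backward pass over the stored estimates and covariances.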
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
Serum bilirubin: a simple routine surrogate marker of the progression of chronic kidney disease.
Moolchandani, K; Priyadarssini, M; Rajappa, M; Parameswaran, S; Revathy, G
2016-10-01
Studies suggest that chronic kidney disease (CKD) is a global health burden associated with significant comorbid conditions. A few biochemical parameters have gained significance in predicting disease progression. The present work aimed to study the association of the simple biochemical parameter serum bilirubin with the estimated glomerular filtration rate (eGFR), and to assess their association with the comorbid conditions in CKD. We recruited 188 patients with CKD who attended a nephrology out-patient department. eGFR values were calculated from serum creatinine levels using the CKD-EPI formula. Various biochemical parameters including glucose, creatinine, uric acid, and total and direct bilirubin were assayed in all study subjects. Study subjects were categorized into subgroups based on their eGFR values and their diabetic status, and the parameters were compared among the different subgroups. We observed significantly lower serum bilirubin levels (p < 0.001) in patients with lower eGFR values, compared to those with higher eGFR values. There was a significant positive correlation between eGFR levels and total bilirubin levels (r = 0.92). We also observed a significant positive correlation between eGFR levels and direct bilirubin levels (r = 0.76). On multivariate linear regression analysis, we found that total and direct bilirubin independently predict eGFR, after adjusting for potential confounders (p < 0.001). Our results suggest that there is significant hypobilirubinemia in CKD, especially with increasing severity and co-existing diabetes mellitus. This finding has importance in the clinical setting, as assay of simple routine biochemical parameters such as serum bilirubin may help in predicting the early progression of CKD, and more so in diabetic CKD.
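The abstract cites the CKD-EPI formula without reproducing it; for orientation, the commonly quoted 2009 CKD-EPI creatinine equation is sketched below. The coefficients are stated from memory rather than taken from this paper, so they should be verified against the original publication before any reuse.

```python
def ckd_epi_egfr(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2 (sketch; verify coefficients)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = 141.0 * min(scr_mg_dl / kappa, 1.0) ** alpha * max(scr_mg_dl / kappa, 1.0) ** -1.209
    egfr *= 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

print(round(ckd_epi_egfr(1.4, 60, female=False), 1))   # example: Scr 1.4 mg/dL, 60-year-old male
```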
[Evaluation of pendulum testing of spasticity].
Le Cavorzin, P; Hernot, X; Bartier, O; Carrault, G; Chagneau, F; Gallien, P; Allain, H; Rochcongar, P
2002-11-01
To identify valid measurements of spasticity derived from the pendulum test of the leg in a representative population of spastic patients. Pendulum testing was performed in 15 spastic and 10 matched healthy subjects. The reflex-mediated torque evoked in quadriceps femoris, as well as muscle mechanical parameters (viscosity and elasticity), were calculated using mathematical modelling. Correlation with the two main measures derived from the pendulum test reported in the literature (the Relaxation Index and the area under the curve) was calculated in order to select the most valid. Among the mechanical parameters, only viscosity was found to be significantly higher in the spastic group. As expected, the computed integral of the reflex-mediated torque was found to be larger in spastic patients than in healthy subjects. A significant non-linear (logarithmic) correlation was found between the clinically assessed muscle spasticity (Ashworth grading) and the computed reflex-mediated torque, emphasising the non-linear behaviour of this scale. Among the measurements derived from the pendulum test that are proposed in the literature for routine estimation of spasticity, the Relaxation Index exhibited an unsuitable U-shaped pattern of variation with increasing reflex-mediated torque. In contrast, the area under the curve showed a linear relationship, which is more convenient for routine estimation of spasticity. The pendulum test of the leg is a simple technique for the assessment of spastic hypertonia. However, the measurement generally used in the literature (the Relaxation Index) exhibits serious limitations and would benefit from being replaced by more valid measures, such as the area under the goniometric curve, especially for the assessment of therapeutics.
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
Rapid impact testing for quantitative assessment of large populations of bridges
NASA Astrophysics Data System (ADS)
Zhou, Yun; Prader, John; DeVitis, John; Deal, Adrienne; Zhang, Jian; Moon, Franklin; Aktan, A. Emin
2011-04-01
Although the widely acknowledged shortcomings of visual inspection have fueled significant advances in the areas of non-destructive evaluation and structural health monitoring (SHM) over the last several decades, the actual practice of bridge assessment has remained largely unchanged. The authors believe the lack of adoption, especially of SHM technologies, is related to the 'single structure' scenarios that drive most research. To overcome this, the authors have developed a concept for a rapid single-input, multiple-output (SIMO) impact testing device that will be capable of capturing modal parameters and estimating flexibility/deflection basins of common highway bridges during routine inspections. The device is composed of a trailer-mounted impact source (capable of delivering a 50 kip impact) and retractable sensor arms, and will be controlled by automated data acquisition, processing, and modal parameter estimation software. The research presented in this paper covers (a) the theoretical basis for SISO, SIMO and MIMO impact testing to estimate flexibility, (b) proof of concept numerical studies using a finite element model, and (c) a pilot implementation on an operating highway bridge. Results indicate that the proposed approach can estimate modal flexibility within a few percent of static flexibility; however, the estimated modal flexibility matrix is only reliable for the substructures associated with the various SIMO tests. To overcome this shortcoming, a modal 'stitching' approach for substructure integration to estimate the full eigenvector matrix is developed, and preliminary results of these methods are also presented.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
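The routine's objective functions and SWMM coupling are not given in the abstract; the sketch below, in Python, only illustrates the Pareto-front idea underlying the NSGA-II step, using invented objective scores for a handful of candidate parameter sets.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points in an (n, m) array of minimization
    objectives: a point is dropped if another point is at least as good in every
    objective and strictly better in at least one."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

# Hypothetical calibration scores per parameter set: (peak-flow error, volume error).
scores = np.array([[0.20, 0.35],
                   [0.15, 0.50],
                   [0.40, 0.10],
                   [0.30, 0.40],
                   [0.25, 0.45]])
print(pareto_front(scores))   # indices of the Pareto-optimal parameter sets
```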
Mountain, James E.; Santer, Peter; O’Neill, David P.; Smith, Nicholas M. J.; Ciaffoni, Luca; Couper, John H.; Ritchie, Grant A. D.; Hancock, Gus; Whiteley, Jonathan P.
2018-01-01
Inhomogeneity in the lung impairs gas exchange and can be an early marker of lung disease. We hypothesized that highly precise measurements of gas exchange contain sufficient information to quantify many aspects of the inhomogeneity noninvasively. Our aim was to explore whether one parameterization of lung inhomogeneity could both fit such data and provide reliable parameter estimates. A mathematical model of gas exchange in an inhomogeneous lung was developed, containing inhomogeneity parameters for compliance, vascular conductance, and dead space, all relative to lung volume. Inputs were respiratory flow, cardiac output, and the inspiratory and pulmonary arterial gas compositions. Outputs were expiratory and pulmonary venous gas compositions. All values were specified every 10 ms. Some parameters were set to physiologically plausible values. To estimate the remaining unknown parameters and inputs, the model was embedded within a nonlinear estimation routine to minimize the deviations between model and data for CO2, O2, and N2 flows during expiration. Three groups, each of six individuals, were studied: young (20–30 yr); old (70–80 yr); and patients with mild to moderate chronic obstructive pulmonary disease (COPD). Each participant undertook a 15-min measurement protocol six times. For all parameters reflecting inhomogeneity, highly significant differences were found between the three participant groups (P < 0.001, ANOVA). Intraclass correlation coefficients were 0.96, 0.99, and 0.94 for the parameters reflecting inhomogeneity in deadspace, compliance, and vascular conductance, respectively. We conclude that, for the particular participants selected, highly repeatable estimates for parameters reflecting inhomogeneity could be obtained from noninvasive measurements of respiratory gas exchange. NEW & NOTEWORTHY This study describes a new method, based on highly precise measures of gas exchange, that quantifies three distributions that are intrinsic to the lung. These distributions represent three fundamentally different types of inhomogeneity that together give rise to ventilation-perfusion mismatch and result in impaired gas exchange. The measurement technique has potentially broad clinical applicability because it is simple for both patient and operator, it does not involve ionizing radiation, and it is completely noninvasive. PMID:29074714
Evaluating Carbonate System Algorithms in a Nearshore System: Does Total Alkalinity Matter?
Jones, Jonathan M; Sweet, Julia; Brzezinski, Mark A; McNair, Heather M; Passow, Uta
2016-01-01
Ocean acidification is a threat to many marine organisms, especially those that use calcium carbonate to form their shells and skeletons. The ability to accurately measure the carbonate system is the first step in characterizing the drivers behind this threat. Due to logistical realities, regular carbonate system sampling is not possible in many nearshore ocean habitats, particularly in remote, difficult-to-access locations. The ability to autonomously measure the carbonate system in situ relieves many of the logistical challenges; however, it is not always possible to measure the two required carbonate parameters autonomously. Observed relationships between sea surface salinity and total alkalinity can frequently provide a second carbonate parameter thus allowing for the calculation of the entire carbonate system. Here, we assessed the rigor of estimating total alkalinity from salinity at a depth <15 m by routinely sampling water from a pier in southern California for several carbonate system parameters. Carbonate system parameters based on measured values were compared with those based on estimated TA values. Total alkalinity was not predictable from salinity or from a combination of salinity and temperature at this site. However, dissolved inorganic carbon and the calcium carbonate saturation state of these nearshore surface waters could both be estimated within on average 5% of measured values using measured pH and salinity-derived or regionally averaged total alkalinity. Thus we find that the autonomous measurement of pH and salinity can be used to monitor trends in coastal changes in DIC and saturation state and be a useful method for high-frequency, long-term monitoring of ocean acidification. PMID:27893739
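The abstract reports only the outcome (total alkalinity was not predictable from salinity at this site). As a minimal, hypothetical sketch of the kind of check involved, one could regress measured TA on salinity and inspect the fit; the sample values below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pier samples: practical salinity and measured total alkalinity (umol/kg).
salinity = np.array([33.2, 33.4, 33.5, 33.1, 33.6, 33.3, 33.4, 33.5])
ta_meas  = np.array([2225., 2241., 2210., 2233., 2228., 2246., 2219., 2237.])

fit = stats.linregress(salinity, ta_meas)
ta_pred = fit.intercept + fit.slope * salinity
print(f"R^2  = {fit.rvalue**2:.2f}")          # low R^2 -> salinity is a poor TA proxy here
print(f"RMSE = {np.sqrt(np.mean((ta_pred - ta_meas)**2)):.1f} umol/kg")
```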
Evaluating the impact of the HIV pandemic on measles control and elimination.
Helfand, Rita F; Moss, William J; Harpaz, Rafael; Scott, Susana; Cutts, Felicity
2005-05-01
To estimate the impact of the HIV pandemic on vaccine-acquired population immunity to measles virus because high levels of population immunity are required to eliminate transmission of measles virus in large geographical areas, and HIV infection can reduce the efficacy of measles vaccination. A literature review was conducted to estimate key parameters relating to the potential impact of HIV infection on the epidemiology of measles in sub-Saharan Africa; parameters included the prevalence of HIV, child mortality, perinatal HIV transmission rates and protective immune responses to measles vaccination. These parameter estimates were incorporated into a simple model, applicable to regions that have a high prevalence of HIV, to estimate the potential impact of HIV infection on population immunity against measles. The model suggests that the HIV pandemic should not introduce an insurmountable barrier to measles control and elimination, in part because higher rates of primary and secondary vaccine failure among HIV-infected children are counteracted by their high mortality rate. The HIV pandemic could result in a 2-3% increase in the proportion of the birth cohort susceptible to measles, and more frequent supplemental immunization activities (SIAs) may be necessary to control or eliminate measles. In the model the optimal interval between SIAs was most influenced by the coverage rate for routine measles vaccination. The absence of a second opportunity for vaccination resulted in the greatest increase in the number of susceptible children. These results help explain the initial success of measles elimination efforts in southern Africa, where measles control has been achieved in a setting of high HIV prevalence.
NASA Astrophysics Data System (ADS)
Jia, S.; Kim, S. H.; Nghiem, S. V.; Kafatos, M.
2017-12-01
Live fuel moisture (LFM) is the water content of live herbaceous plants expressed as a percentage of the oven-dry weight of the plant. It is a critical parameter for fire ignition in Mediterranean climates and is routinely measured at sites selected by fire agencies across the U.S. Vegetation growing cycle, meteorological metrics, soil type, and topography all contribute to the seasonal and inter-annual variation of LFM and, therefore, the risk of wildfire. Optical remote sensing-based vegetation indices (VIs) have been used to estimate LFM. Compared with VIs, microwave remote sensing products have advantages such as less saturation over dense greenness and direct sensitivity to the water content of the vegetation cover. In this study, we established three models to evaluate the predictability of LFM in Southern California using MODIS NDVI, the vegetation temperature condition index (VTCI) from downscaled Soil Moisture Active Passive (SMAP) products, and vegetation optical depth (VOD) derived by the Land Parameter Retrieval Model. Other ancillary variables, such as topographic factors (aspect and slope) and meteorological metrics (air temperature, precipitation, and relative humidity), are also considered in the models. The model results revealed an improvement of LFM estimation from SMAP products and VOD, despite the uncertainties introduced in the downscaling and parameter retrieval. The estimation of LFM using remote sensing data can provide a better assessment of wildfire danger than current methods using an NDVI-based growing season index. Future work will test the VOD estimation from SMAP data using the multi-temporal dual channel algorithm (MT-DCA) and extend the LFM modeling to a regional scale.
Jeurissen, Ben; Leemans, Alexander; Sijbers, Jan
2014-10-01
Ensuring one is using the correct gradient orientations in a diffusion MRI study can be a challenging task. As different scanners, file formats and processing tools use different coordinate frame conventions, in practice, users can end up with improperly oriented gradient orientations. Using such wrongly oriented gradient orientations for subsequent diffusion parameter estimation will invalidate all rotationally variant parameters and fiber tractography results. While large misalignments can be detected by visual inspection, small rotations of the gradient table (e.g. due to angulation of the acquisition plane), are much more difficult to detect. In this work, we propose an automated method to align the coordinate frame of the gradient orientations with that of the corresponding diffusion weighted images, using a metric based on whole brain fiber tractography. By transforming the gradient table and measuring the average fiber trajectory length, we search for the transformation that results in the best global 'connectivity'. To ensure a fast calculation of the metric we included a range of algorithmic optimizations in our tractography routine. To make the optimization routine robust to spurious local maxima, we use a stochastic optimization routine that selects a random set of seed points on each evaluation. Using simulations, we show that our method can recover the correct gradient orientations with high accuracy and precision. In addition, we demonstrate that our technique can successfully recover rotated gradient tables on a wide range of clinically realistic data sets. As such, our method provides a practical and robust solution to an often overlooked pitfall in the processing of diffusion MRI. Copyright © 2014 Elsevier B.V. All rights reserved.
Derivation of hydrous pyrolysis kinetic parameters from open-system pyrolysis
NASA Astrophysics Data System (ADS)
Tseng, Yu-Hsin; Huang, Wuu-Liang
2010-05-01
Kinetic information is essential to predict the temperature, timing or depth of hydrocarbon generation within a hydrocarbon system. The most common experiments for deriving kinetic parameters are open-system pyrolysis experiments. However, it has been shown that the conditions of open-system pyrolysis deviate from nature because of its low, near-ambient pressure and high temperatures. Also, the extrapolation of the heating rates used in open-system pyrolysis to geological conditions may be questionable. A recent study by Lewan and Ruble shows that hydrous-pyrolysis conditions simulate natural conditions better, and its applications are supported by two case studies with natural thermal-burial histories. Nevertheless, performing hydrous pyrolysis experiments is tedious and requires a large amount of sample, while open-system pyrolysis is comparatively convenient and efficient. Therefore, the present study aims to derive convincing distributed hydrous pyrolysis Ea from only routine open-system Rock-Eval data. Our results reveal a good correlation between the open-system Rock-Eval parameter Tmax and the activation energy (Ea) derived from hydrous pyrolysis. The hydrous pyrolysis single Ea can be predicted from Tmax based on this correlation, while the frequency factor (A0) is estimated from the linear relationship between single Ea and log A0. Because a distributed Ea is more realistic than a single Ea, we convert the predicted single hydrous pyrolysis Ea into a distributed Ea by shifting the pattern of the Ea distribution from open-system pyrolysis until its weighted mean equals the single hydrous pyrolysis Ea. Moreover, it has been shown that the shape of the Ea distribution closely resembles the shape of the Tmax curve. Thus, in the absence of an open-system Ea distribution, we may use the shape of the Tmax curve to obtain the distributed hydrous pyrolysis Ea. The study offers a simple new approach for obtaining distributed hydrous pyrolysis Ea from only routine open-system Rock-Eval data, which will allow better estimation of hydrocarbon generation.
Energy expenditure estimation during daily military routine with body-fixed sensors.
Wyss, Thomas; Mäder, Urs
2011-05-01
The purpose of this study was to develop and validate an algorithm for estimating energy expenditure during the daily military routine on the basis of data collected using body-fixed sensors. First, 8 volunteers completed isolated physical activities according to an established protocol, and the resulting data were used to develop activity-class-specific multiple linear regressions for physical activity energy expenditure on the basis of hip acceleration, heart rate, and body mass as independent variables. Second, the validity of these linear regressions was tested during the daily military routine using indirect calorimetry (n = 12). Volunteers' mean estimated energy expenditure did not significantly differ from the energy expenditure measured with indirect calorimetry (p = 0.898, 95% confidence interval = -1.97 to 1.75 kJ/min). We conclude that the developed activity-class-specific multiple linear regressions applied to the acceleration and heart rate data allow estimation of energy expenditure in 1-minute intervals during daily military routine, with accuracy equal to indirect calorimetry.
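The published regression coefficients and exact predictor definitions are not given in the abstract; the sketch below only illustrates the general structure of activity-class-specific multiple linear regressions predicting energy expenditure from acceleration, heart rate, and body mass, with entirely hypothetical training data and activity classes.

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical training data per activity class: columns = [acceleration counts,
# heart rate (bpm), body mass (kg)], target = energy expenditure (kJ/min).
train = {
    "walking": (np.array([[120., 95., 75.], [180., 110., 80.],
                          [150., 100., 70.], [200., 120., 85.]]),
                np.array([18., 26., 21., 29.])),
    "lifting": (np.array([[60., 105., 75.], [90., 120., 80.],
                          [70., 110., 70.], [100., 125., 85.]]),
                np.array([22., 30., 24., 33.])),
}

models = {}
for activity, (X, y) in train.items():
    X1 = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    coef, *_ = lstsq(X1, y, rcond=None)            # least-squares regression per class
    models[activity] = coef

def estimate_ee(activity, accel, hr, mass):
    """Apply the activity-class-specific regression to one 1-minute epoch."""
    c = models[activity]
    return c[0] + c[1] * accel + c[2] * hr + c[3] * mass

print(round(estimate_ee("walking", accel=160., hr=105., mass=78.), 1))
```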
Maximum-Likelihood Parameter Estimation of a Generalized Gumbel Distribution
1989-03-24
[OCR fragment of the report's FORTRAN listing: subroutine ROOTG computes the roots of g(.) and h(.) used in the maximum-likelihood solution, with scaling of BVAL to match the ROOTG routine; the remainder of the listing is not recoverable.]
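The report's FORTRAN listing is not recoverable from the fragment above. As a loose modern analogue only (fitting an ordinary two-parameter Gumbel distribution rather than the report's generalized form, and not the report's root-finding algorithm), a maximum-likelihood fit can be obtained with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic sample from a right-skewed Gumbel distribution with known parameters.
sample = stats.gumbel_r.rvs(loc=10.0, scale=2.5, size=500, random_state=rng)

# Maximum-likelihood estimates of location and scale.
loc_hat, scale_hat = stats.gumbel_r.fit(sample)
print(f"location = {loc_hat:.2f}, scale = {scale_hat:.2f}")
```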
Parameter Estimation and Model Validation of Nonlinear Dynamical Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abarbanel, Henry; Gill, Philip
In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph: “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.
Sibling species in Montastraea annularis, coral bleaching, and the coral climate record
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knowlton, N.; Weil, E.; Weigt, L.A.
1992-01-17
Measures of growth and skeletal isotopic ratios in the Caribbean coral Montastraea annularis are fundamental to many studies of paleoceanography, environmental degradation, and global climate change. This taxon is shown to consist of at least three sibling species in shallow water. The two most commonly studied of these show highly significant differences in growth rate and oxygen isotopic ratios, parameters routinely used to estimate past climatic conditions; unusual coloration in the third may have confused research on coral bleaching. Interpretation or comparison of past and current studies can be jeopardized by ignoring these species boundaries.
Quantitative Determination of Spring Water Quality Parameters via Electronic Tongue.
Carbó, Noèlia; López Carrero, Javier; Garcia-Castillo, F Javier; Tormos, Isabel; Olivas, Estela; Folch, Elisa; Alcañiz Fillol, Miguel; Soto, Juan; Martínez-Máñez, Ramón; Martínez-Bisbal, M Carmen
2017-12-25
The use of a voltammetric electronic tongue for the quantitative analysis of quality parameters in spring water is proposed here. The voltammetric electronic tongue consisted of a set of four noble electrodes (iridium, rhodium, platinum, and gold) housed inside a stainless steel cylinder. These noble metals have high durability and low maintenance requirements, features required for the development of future automated equipment. A pulse voltammetry study was conducted on 83 spring water samples to determine concentrations of nitrate (range: 6.9-115 mg/L), sulfate (32-472 mg/L), fluoride (0.08-0.26 mg/L), chloride (17-190 mg/L), and sodium (11-94 mg/L) as well as pH (7.3-7.8). These parameters were also determined by routine analytical methods in spring water samples. A partial least squares (PLS) analysis was run to obtain a model to predict these parameters. Orthogonal signal correction (OSC) was applied in the preprocessing step. Calibration (67%) and validation (33%) sets were selected randomly. The electronic tongue showed good predictive power to determine the concentrations of nitrate, sulfate, chloride, and sodium as well as pH and displayed a lower R² and slope in the validation set for fluoride. Nitrate and fluoride concentrations were estimated with errors lower than 15%, whereas chloride, sulfate, and sodium concentrations as well as pH were estimated with errors below 10%.
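As a minimal sketch of the modelling step only (the OSC preprocessing and the real voltammetric features are omitted, and the data are synthetic), a PLS regression mapping pulse-voltammetry responses to a single concentration target could be set up as follows.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(83, 40))                     # 83 samples x 40 synthetic voltammetric features
true_w = rng.normal(size=40)
y = X @ true_w + rng.normal(scale=0.5, size=83)   # synthetic "nitrate concentration"

# Random calibration (67%) / validation (33%) split, as in the study design.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.33, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
print(f"validation R^2 = {r2_score(y_val, y_pred):.2f}")
```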
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn require knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
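A minimal sketch of the kind of Monte Carlo check described, not the paper's exact models: fit the nonlinear parameterization y = e^A·x to repeatedly simulated data and compare the Monte Carlo spread of the A estimates with the parametric SE reported by the fit (the data-generating values are arbitrary).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 20)
a_true, sigma = 2.0, 0.5               # linear parameter a and noise level
A_true = np.log(a_true)                # nonlinear reparameterization a = e^A

def model(x, A):
    return np.exp(A) * x

A_hats, A_ses = [], []
for _ in range(2000):
    y = a_true * x + rng.normal(scale=sigma, size=x.size)
    popt, pcov = curve_fit(model, x, y, p0=[0.0])
    A_hats.append(popt[0])
    A_ses.append(np.sqrt(pcov[0, 0]))

print(f"MC std of A-hat       : {np.std(A_hats):.4f}")
print(f"mean parametric SE(A) : {np.mean(A_ses):.4f}")   # close when SE << |A|
```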
Mijac, Dragana D; Janković, Goran L J; Jorga, Jagoda; Krstić, Miodrag N
2010-08-01
Malnutrition is a common feature of inflammatory bowel disease (IBD). There are numerous methods for the assessment of nutritional status, but the gold standard has not yet been established. The aims of the study were to estimate the prevalence of undernutrition and to evaluate methods for routine nutritional assessment of active IBD patients. Twenty-three patients with active Crohn disease, 53 patients with active ulcerative colitis and 30 controls were included in the study. The nutritional status was assessed by extensive anthropometric measurements, percentage of weight loss in the past 1-6 months and biochemical markers of nutrition. All investigated nutritional parameters were significantly different in IBD patients compared to control subjects, except MCV, triglycerides and serum total protein level. Serum albumin level and body mass index (BMI) were the most predictive parameters of malnutrition. According to different assessment methods, the prevalence of undernutrition and severe undernutrition in patients with active IBD was 25.0%-69.7% and 1.3%-31.6%, respectively, while in the control subjects no abnormalities were detected. There was no statistically significant difference in nutritional parameters between UC and CD patients except lower mid-arm muscle circumference in the UC group. Malnutrition is common in IBD patients. BMI and serum albumin are simple and convenient methods for the assessment of the nutritional status in IBD patients. Further studies with larger groups of patients are necessary to elucidate the prevalence of malnutrition and the most accurate assessment methods in IBD patients.
Low-voltage chest CT: another way to reduce the radiation dose in asbestos-exposed patients.
Macía-Suárez, D; Sánchez-Rodríguez, E; Lopez-Calviño, B; Diego, C; Pombar, M
2017-09-01
To assess whether low voltage chest computed tomography (CT) can be used to successfully diagnose disease in patients with asbestos exposure. Fifty-six former employees of the shipbuilding industry, who were candidates to receive a standard-dose chest CT due to their occupational exposure to asbestos, underwent a routine CT. Immediately after this initial CT, they underwent a second acquisition using low-dose chest CT parameters, based on a low potential (80 kV) and limited tube current. The findings of the two CT protocols were compared based on typical diseases associated with asbestos exposure. The kappa coefficient for each parameter and for an overall rating (grouping them based on mediastinal, pleural, and pulmonary findings) were calculated in order to test for correlations between the two protocols. A good correlation between routine and low-dose CT was demonstrated for most parameters with a mean radiation dose reduction of up to 83% of the effective dose based on the dose-length product between protocols. Low-dose chest CT, based on a limited tube potential, is useful for patients with an asbestos exposure background. Low-dose chest CT can be successfully used to minimise the radiation dose received by patients, as this protocol produced an estimated mean effective dose similar to that of an abdominal or pelvis plain film. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Meta-Analysis of Rare Binary Adverse Event Data
Bhaumik, Dulal K.; Amatya, Anup; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D.
2013-01-01
We examine the use of fixed-effects and random-effects moment-based meta-analytic methods for analysis of binary adverse event data. Special attention is paid to the case of rare adverse events, which are commonly encountered in routine practice. We study estimation of model parameters and between-study heterogeneity. In addition, we examine traditional approaches to hypothesis testing of the average treatment effect and detection of the heterogeneity of treatment effect across studies. We derive three new methods: a simple (unweighted) average treatment effect estimator, a new heterogeneity estimator, and a parametric bootstrapping test for heterogeneity. We then study the statistical properties of both the traditional and new methods via simulation. We find that in general, moment-based estimators of combined treatment effects and heterogeneity are biased and the degree of bias is proportional to the rarity of the event under study. The new methods eliminate much, but not all, of this bias. The various estimators and hypothesis testing methods are then compared and contrasted using an example dataset on treatment of stable coronary artery disease. PMID:23734068
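The exact form of the authors' simple (unweighted) average treatment effect estimator is not given in the abstract; a plausible minimal sketch for log odds ratios with a 0.5 continuity correction applied to zero cells, using invented study counts, is:

```python
import numpy as np

# Hypothetical 2x2 counts per study: (events_trt, n_trt, events_ctl, n_ctl); events are rare.
studies = [(1, 120, 0, 118), (0, 95, 2, 97), (3, 240, 1, 236), (0, 60, 0, 61)]

def log_odds_ratio(a, n1, c, n2, cc=0.5):
    """Log odds ratio, adding a 0.5 continuity correction when any cell is zero."""
    b, d = n1 - a, n2 - c
    if min(a, b, c, d) == 0:
        a, b, c, d = a + cc, b + cc, c + cc, d + cc
    return np.log((a * d) / (b * c))

lors = [log_odds_ratio(*s) for s in studies]
print(f"unweighted average log OR = {np.mean(lors):.3f}")
print(f"pooled OR                 = {np.exp(np.mean(lors)):.3f}")
```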
Estimation of postmortem interval through albumin in CSF by simple dye binding method.
Parmar, Ankita K; Menon, Shobhana K
2015-12-01
Estimation of the postmortem interval is a very important question in some medicolegal investigations. Precise estimation of the postmortem interval requires a method that gives accurate results. Bromocresol green (BCG) dye binding is a simple method widely used in routine practice. Application of this method in forensic practice may bring revolutionary changes. In this study, cerebrospinal fluid was aspirated by cisternal puncture in 100 autopsies. The concentration of albumin was studied with respect to postmortem interval. After death, albumin present in CSF undergoes changes: by 72 h after death, the albumin concentration had decreased to 0.012 mM, and this decrease was linear from 2 h to 72 h. An important relationship was found between albumin concentration and postmortem interval, with an error of ± 1-4 h. The study concludes that CSF albumin can be a useful and significant parameter in the estimation of the postmortem interval. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
Broadband spectral fitting of blazars using XSPEC
NASA Astrophysics Data System (ADS)
Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev
2018-03-01
The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has an added advantage of identifying the degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well sampled SED of blazar 3C 279 during its gamma ray flare in 2014.
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
iTOUGH2 Universal Optimization Using the PEST Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.A.
2010-07-01
iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
Parameter Heterogeneity In Breast Cancer Cost Regressions – Evidence From Five European Countries
Banks, Helen; Campbell, Harry; Douglas, Anne; Fletcher, Eilidh; McCallum, Alison; Moger, Tron Anders; Peltola, Mikko; Sveréus, Sofia; Wild, Sarah; Williams, Linda J.; Forbes, John
2015-01-01
We investigate parameter heterogeneity in breast cancer 1‐year cumulative hospital costs across five European countries as part of the EuroHOPE project. The paper aims to explore whether conditional mean effects provide a suitable representation of the national variation in hospital costs. A cohort of patients with a primary diagnosis of invasive breast cancer (ICD‐9 codes 174 and ICD‐10 C50 codes) is derived using routinely collected individual breast cancer data from Finland, the metropolitan area of Turin (Italy), Norway, Scotland and Sweden. Conditional mean effects are estimated by ordinary least squares for each country, and quantile regressions are used to explore heterogeneity across the conditional quantile distribution. Point estimates based on conditional mean effects provide a good approximation of treatment response for some key demographic and diagnostic specific variables (e.g. age and ICD‐10 diagnosis) across the conditional quantile distribution. For many policy variables of interest, however, there is considerable evidence of parameter heterogeneity that is concealed if decisions are based solely on conditional mean results. The use of quantile regression methods reinforces the need to consider beyond an average effect given the greater recognition that breast cancer is a complex disease reflecting patient heterogeneity. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd. PMID:26633866
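As a minimal illustration of the contrast between conditional-mean and conditional-quantile estimates (synthetic data, not the EuroHOPE cost model or its covariates), one can compare OLS and quantile-regression slopes with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
age = rng.uniform(30, 85, n)
# Synthetic 1-year costs whose spread grows with age -> heterogeneity across quantiles.
cost = 5000 + 120 * age + rng.gamma(shape=2.0, scale=40 * age, size=n)
df = pd.DataFrame({"cost": cost, "age": age})

ols_fit = smf.ols("cost ~ age", data=df).fit()
print(f"OLS (conditional mean) age effect: {ols_fit.params['age']:.1f}")
for q in (0.25, 0.50, 0.90):
    qfit = smf.quantreg("cost ~ age", data=df).fit(q=q)
    print(f"quantile {q:.2f} age effect: {qfit.params['age']:.1f}")
```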
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.
Nolte, S; Mierke, A; Fischer, H F; Rose, M
2016-06-01
Significant life events such as severe health status changes or intensive medical treatment often trigger response shifts in individuals that may hamper the comparison of measurements over time. Drawing from the Oort model, this study aims at detecting response shift at the item level in psychosomatic inpatients and evaluating its impact on the validity of comparing repeated measurements. Complete pretest and posttest data were available from 1188 patients who had filled out the ICD-10 Symptom Rating (ISR) scale at admission and discharge, on average 24 days after intake. Reconceptualization, reprioritization, and recalibration response shifts were explored applying tests of measurement invariance. In the item-level approach, all model parameters were constrained to be equal between pretest and posttest. If non-invariance was detected, these were linked to the different types of response shift. When constraining across-occasion model parameters, model fit worsened as indicated by a significant Satorra-Bentler Chi-square difference test suggesting potential presence of response shifts. A close examination revealed presence of two types of response shift, i.e., (non)uniform recalibration and both higher- and lower-level reconceptualization response shifts leading to four model adjustments. Our analyses suggest that psychosomatic inpatients experienced some response shifts during their hospital stay. According to the hierarchy of measurement invariance, however, only one of the detected non-invariances is critical for unbiased mean comparisons over time, which did not have a substantial impact on estimating change. Hence, the use of the ISR can be recommended for outcomes assessment in clinical routine, as change score estimates do not seem hampered by response shift effects.
NASA Astrophysics Data System (ADS)
Ojeda, GermáN. Y.; Whitman, Dean
2002-11-01
The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ˜5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
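For the flavor of the calculation only: the study works with two-dimensional gravity and topography grids and compares mirroring, Hanning, and multitaper windowing, none of which is reproduced here. The sketch below computes a one-dimensional magnitude-squared coherence between two synthetic profiles with a Hann window using scipy.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
n, dx = 2048, 1.0                                # number of samples and spacing (km), synthetic
topo = rng.normal(size=n)
topo = signal.convolve(topo, np.ones(50) / 50, mode="same")   # smoothed "topography" profile
gravity = 0.8 * topo + 0.3 * rng.normal(size=n)               # correlated "gravity" profile

# Welch-averaged magnitude-squared coherence; 'hann' is scipy's default window.
freqs, coh = signal.coherence(gravity, topo, fs=1.0 / dx, window="hann", nperseg=256)
print(freqs[:3], np.round(coh[:3], 2))
```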
Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager
Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.
2012-01-01
GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.
1973-01-01
Geodetic parameters describing the earth's gravity field and the positions of satellite-tracking stations in a geocentric reference frame were computed. These parameters were estimated by means of a combination of five different types of data: routine and simultaneous satellite observations, observations of deep-space probes, measurements of terrestrial gravity, and surface-triangulation data. The combination gives better parameters than does any subset of data types. The dynamic solution used precision-reduced Baker-Nunn observations and laser range data of 25 satellites. Data from the 49-station National Oceanic and Atmospheric Administration BC-4 network, the 19-station Smithsonian Astrophysical Observatory Baker-Nunn network, and independent camera stations were employed in the geometrical solution. Data from the tracking of deep-space probes were converted to relative longitudes and distances to the earth's axis of rotation of the tracking stations. Surface-gravity data in the form of 550-km squares were derived from 19,328 1 deg X 1 deg mean gravity anomalies.
McDevitt, Joseph L; Acosta-Torres, Stefany; Zhang, Ning; Hu, Tianshen; Odu, Ayobami; Wang, Jijia; Xi, Yin; Lamus, Daniel; Miller, David S; Pillai, Anil K
2017-07-01
To estimate the least costly routine exchange frequency for percutaneous nephrostomies (PCNs) placed for malignant urinary obstruction, as measured by annual hospital charges, and to estimate the financial impact of patient compliance. Patients with PCNs placed for malignant urinary obstruction were studied from 2011 to 2013. Exchanges were classified as routine or due to 1 of 3 complication types: mechanical (tube dislodgment), obstruction, or infection. Representative cases were identified, and median representative charges were used as inputs for the model. Accelerated failure time and Markov chain Monte Carlo models were used to estimate distribution of exchange types and annual hospital charges under different routine exchange frequency and compliance scenarios. Long-term PCN management was required in 57 patients, with 87 total exchange encounters. Median representative hospital charges for pyelonephritis and obstruction were 11.8 and 9.3 times greater, respectively, than a routine exchange. The projected proportion of routine exchanges increased and the projected proportion of infection-related exchanges decreased when moving from a 90-day exchange with 50% compliance to a 60-day exchange with 75% compliance, and this was associated with a projected reduction in annual charges. Projected cost reductions resulting from increased compliance were generally greater than reductions resulting from changes in exchange frequency. This simulation model suggests that the optimal routine exchange interval for PCN exchange in patients with malignant urinary obstruction is approximately 60 days and that the degree of reduction in charges likely depends more on patient compliance than exact exchange interval. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
Localization of transient gravitational wave sources: beyond triangulation
NASA Astrophysics Data System (ADS)
Fairhurst, Stephen
2018-05-01
Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
On the relationship between land surface infrared emissivity and soil moisture
NASA Astrophysics Data System (ADS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu
2018-01-01
The relationship between surface infrared (IR) emissivity and soil moisture content has been investigated based on satellite measurements. Surface soil moisture content can be estimated by IR remote sensing, namely using the surface parameters of IR emissivity, temperature, vegetation coverage, and soil texture. It is possible to separate IR emissivity from other parameters affecting surface soil moisture estimation. The main objective of this paper is to examine the correlation between land surface IR emissivity and soil moisture. To this end, we have developed a simple yet effective scheme to estimate volumetric soil moisture (VSM) using IR land surface emissivity retrieved from satellite IR spectral radiance measurements, assuming those other parameters impacting the radiative transfer (e.g., temperature, vegetation coverage, and surface roughness) are known for an acceptable time and space reference location. This scheme is applied to a decade of global IR emissivity data retrieved from MetOp-A infrared atmospheric sounding interferometer measurements. The VSM estimated from these IR emissivity data (denoted as IR-VSM) is used to demonstrate its measurement-to-measurement variations. Representative 0.25-deg spatially-gridded monthly-mean IR-VSM global datasets are then assembled to compare with those routinely provided from satellite microwave (MW) multisensor measurements (denoted as MW-VSM), demonstrating VSM spatial variations as well as seasonal-cycles and interannual variability. Initial positive agreement is shown to exist between IR- and MW-VSM (i.e., R2 = 0.85). IR land surface emissivity contains surface water content information. So, when IR measurements are used to estimate soil moisture, this correlation produces results that correspond with those customarily achievable from MW measurements. A decade-long monthly-gridded emissivity atlas is used to estimate IR-VSM, to demonstrate its seasonal-cycle and interannual variation, which is spatially coherent and consistent with that from MW measurements, and, moreover, to achieve our objective of investigating the relationship between land surface IR emissivity and soil moisture.
Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation
NASA Astrophysics Data System (ADS)
Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.
2016-12-01
Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
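The routine's actual background-error model, which spreads information using cloud location and thickness, is not described in enough detail to reproduce; the sketch below only shows the standard optimal-interpolation update that such a routine applies, with small invented covariance matrices and irradiance values.

```python
import numpy as np

def optimal_interpolation(x_b, y, H, B, R):
    """Analysis x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b)

# Three satellite-derived irradiance pixels (W/m^2); one ground sensor observes pixel 0.
x_b = np.array([650.0, 640.0, 660.0])
H = np.array([[1.0, 0.0, 0.0]])
y = np.array([700.0])
B = np.array([[900., 600., 300.],          # background errors correlated across nearby pixels
              [600., 900., 600.],
              [300., 600., 900.]])
R = np.array([[25.0]])                     # small point-measurement error variance
print(np.round(optimal_interpolation(x_b, y, H, B, R), 1))   # update spreads to all pixels
```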
Ekwunife, Obinna I; Lhachimi, Stefan K
2017-12-08
The World Health Organisation recommends routine Human Papilloma Virus (HPV) vaccination for girls when its cost-effectiveness in the country or region has been duly considered. We therefore aimed to evaluate the cost-effectiveness of HPV vaccination in Nigeria using pragmatic parameter estimates for cost and programme coverage, i.e. values realistically achievable in the studied context. A microsimulation framework was used. The natural history of cervical cancer was remodelled from a previous Nigerian model-based study. Costing was based on the health providers' perspective. Disability-adjusted life years attributable to cervical cancer mortality served as the benefit estimate. The most suitable policy option was identified by calculating the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was used to assess parameter uncertainty. One-way sensitivity analysis was used to explore the robustness of the policy recommendation to alteration of key parameters. Expected value of perfect information (EVPI) was calculated to determine the expected opportunity cost associated with choosing the optimal scenario or strategy at the maximum cost-effectiveness threshold. The combination of the current scenario of opportunistic screening and a national HPV vaccination programme (CS + NV) was the only cost-effective and robust policy option. However, the CS + NV scenario was cost-effective only as long as the unit cost of the HPV vaccine did not exceed $5. EVPI analysis showed that it may be worthwhile to conduct additional research to inform the decision to adopt CS + NV. National HPV vaccination combined with opportunistic cervical cancer screening is cost-effective in Nigeria. However, adoption of this strategy should depend on its relative efficiency when compared to other competing new vaccines and health interventions.
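A minimal sketch of the incremental cost-effectiveness comparison described; all costs, DALYs averted, and the willingness-to-pay threshold are invented placeholders, not the study's estimates.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per additional unit of effect (here, per DALY averted)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-girl values: current screening (CS) vs. screening plus vaccination (CS + NV).
cs_cost, cs_dalys_averted = 12.0, 0.010
csnv_cost, csnv_dalys_averted = 26.0, 0.055

ratio = icer(csnv_cost, csnv_dalys_averted, cs_cost, cs_dalys_averted)
threshold = 2000.0                         # assumed willingness-to-pay per DALY averted
print(f"ICER = ${ratio:.0f} per DALY averted; cost-effective: {ratio <= threshold}")
```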
NASA Astrophysics Data System (ADS)
D'Amboise, Christopher J. L.; Müller, Karsten; Oxarango, Laurent; Morin, Samuel; Schuler, Thomas V.
2017-09-01
We present a new water percolation routine added to the one-dimensional snowpack model Crocus as an alternative to the empirical bucket routine. This routine solves the Richards equation, which describes the flow of water through unsaturated porous snow governed by capillary suction, gravity and the hydraulic conductivity of the snow layers. We tested the Richards routine on two data sets, one recorded from an automatic weather station over the winter of 2013-2014 at Filefjell, Norway, and the other an idealized synthetic data set. Model results using the Richards routine generally lead to higher water contents in the snow layers. Snow layers often reached a point at which the ice crystals' surface area is completely covered by a thin film of water (the transition between pendular and funicular regimes), at which feedback from the snow metamorphism and compaction routines is expected to be nonlinear. In the synthetic simulation, 18% of snow layers reached a saturation of >10% and 0.57% of layers reached a saturation of >15%. The Richards routine had a maximum liquid water content of 173.6 kg m⁻³ whereas the bucket routine had a maximum of 42.1 kg m⁻³. We found that wet-snow processes, such as wet-snow metamorphism and wet-snow compaction rates, are not accurately represented at higher water contents. These routines feed back on the Richards routine, which relies heavily on grain size and snow density. The parameter sets for the water retention curve and hydraulic conductivity of snow layers, which are used in the Richards routine, do not represent all the snow types that can be found in a natural snowpack. We show that the new routine has been implemented in the Crocus model, but due to feedback amplification and parameter uncertainties, meaningful applicability is limited. Updating or adapting other routines in Crocus, specifically the snow compaction routine and the grain metamorphism routine, is needed before Crocus can accurately simulate the snowpack using the Richards routine.
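The Richards routine requires a water retention curve and an unsaturated hydraulic conductivity for each snow layer. A minimal sketch of the widely used Mualem-van Genuchten parameterization follows; the parameter values are purely illustrative and are not the snow-specific parameter sets referred to in the abstract.

```python
import numpy as np

def vg_saturation(h, alpha, n):
    """Effective saturation Se(h) from the van Genuchten retention curve;
    h is the capillary pressure head (m, negative in unsaturated snow)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * np.abs(h)) ** n) ** (-m)

def mualem_conductivity(Se, Ks, n, l=0.5):
    """Unsaturated hydraulic conductivity K(Se) from the Mualem model."""
    m = 1.0 - 1.0 / n
    return Ks * Se**l * (1.0 - (1.0 - Se**(1.0 / m)) ** m) ** 2

# Assumed, illustrative parameters for a coarse-grained snow layer
alpha, n, Ks = 4.0, 2.5, 5e-4          # 1/m, dimensionless, m/s
h = -0.05                              # capillary head in metres
Se = vg_saturation(h, alpha, n)
print(Se, mualem_conductivity(Se, Ks, n))
```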
Alomari, Ali Hamed; Wille, Marie-Luise; Langton, Christian M
2018-02-01
Conventional mechanical testing is the 'gold standard' for assessing the stiffness (N mm⁻¹) and strength (MPa) of bone, although it is not applicable in-vivo since it is inherently invasive and destructive. The mechanical integrity of a bone is determined by its quantity and quality, being related primarily to bone density and structure respectively. Several non-destructive, non-invasive, in-vivo techniques have been developed and clinically implemented to estimate bone density, both areal (dual-energy X-ray absorptiometry (DXA)) and volumetric (quantitative computed tomography (QCT)). Quantitative ultrasound (QUS) parameters of velocity and attenuation are dependent upon both bone quantity and bone quality, although it has not been possible to date to transpose one particular QUS parameter into separate estimates of quantity and quality. It has recently been shown that ultrasound transit time spectroscopy (UTTS) may provide an accurate estimate of bone density and hence quantity. We hypothesised that UTTS also has the potential to provide an estimate of bone structure and hence quality. In this in-vitro study, 16 human femoral bone samples were tested utilising three techniques: UTTS, micro computed tomography (μCT), and mechanical testing. UTTS was utilised to estimate bone volume fraction (BV/TV) and two novel structural parameters, the inter-quartile range of the derived transit time (UTTS-IQR) and the transit time of maximum proportion of sonic-rays (TTMP). μCT was utilised to derive BV/TV along with several bone structure parameters. A destructive mechanical test was utilised to measure the stiffness and strength (failure load) of the bone samples. BV/TV was calculated from the derived transit time spectrum (TTS); the correlation coefficient (R²) with μCT-BV/TV was 0.885. For predicting mechanical stiffness and strength, BV/TV derived by both μCT and UTTS provided the strongest correlation with mechanical stiffness (R² = 0.567 and 0.618 respectively) and mechanical strength (R² = 0.747 and 0.736 respectively). When the respective structural parameters were incorporated into BV/TV, multiple regression analysis indicated that none of the μCT histomorphometric parameters could improve the prediction of mechanical stiffness and strength, while for UTTS, adding TTMP to BV/TV increased the prediction of mechanical stiffness to R² = 0.711 and strength to R² = 0.827. It is therefore envisaged that UTTS may have the ability to estimate BV/TV along with providing an improved prediction of osteoporotic fracture risk, within routine clinical practice in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
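A minimal sketch of the kind of two-predictor multiple regression reported above (stiffness on BV/TV plus TTMP), assuming synthetic placeholder data in place of the study's 16 femoral samples; only the workflow is illustrated, not the published results.

```python
import numpy as np

# Synthetic placeholder data for illustration only (not the study's samples)
rng = np.random.default_rng(0)
bv_tv = rng.uniform(0.1, 0.4, 16)            # bone volume fraction
ttmp = rng.uniform(10.0, 14.0, 16)           # transit time of max. proportion
stiffness = 2000 * bv_tv - 40 * ttmp + 800 + rng.normal(0, 20, 16)

# Multiple regression: stiffness ~ 1 + BV/TV + TTMP
X = np.column_stack([np.ones_like(bv_tv), bv_tv, ttmp])
coef, *_ = np.linalg.lstsq(X, stiffness, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((stiffness - pred)**2) / np.sum((stiffness - stiffness.mean())**2)
print(coef, r2)
```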
Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity
NASA Astrophysics Data System (ADS)
Jian, P.; Hung, S.; Tseng, T.
2013-12-01
Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion mostly relies on the azimuthal coverage of stations, data quality and previously known earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records and uncertain earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and showed that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has steadily devoted itself to building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. A linearized inversion scheme incorporating efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors of biased centroid positions is implemented and inspected for improving source parameter determination of regional seismicity in Taiwan. Synthetic inversion tests demonstrate that the resolved moment tensors better match the hypothetical CMT solutions, and tend to suppress unreal non-double-couple components and reduce the trade-off between focal mechanism and centroid depth, if individual signal-to-noise ratios and correlation lengths for 3-component seismograms at each station and mislocation uncertainties are properly taken into account. We further test the capability of our scheme in retrieving robust CMT information for mid-sized (Mw~3.5) and offshore earthquakes in Taiwan, which offers immediate and broad applications in detailed modelling of the regional stress field and deformation pattern and mapping of subsurface velocity structures.
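A minimal sketch of the core idea, assuming a linearized inversion d = G m: a prescribed data covariance matrix Cd both weights the least-squares fit and propagates into the posterior covariance of the source parameters. The design matrix, data and correlated-noise model below are illustrative, not the BATS waveform inversion.

```python
import numpy as np

def weighted_lsq_with_covariance(G, d, Cd):
    """Linearized inversion d = G m + noise, noise ~ N(0, Cd).

    Returns the weighted least-squares estimate and its covariance
    Cm = (G^T Cd^-1 G)^-1, i.e. the data covariance mapped onto the
    uncertainty of the estimated source parameters."""
    Cd_inv = np.linalg.inv(Cd)
    Cm = np.linalg.inv(G.T @ Cd_inv @ G)
    m_hat = Cm @ G.T @ Cd_inv @ d
    return m_hat, Cm

# Toy example: 6 waveform samples, 2 model parameters (illustrative only)
rng = np.random.default_rng(1)
G = rng.normal(size=(6, 2))
m_true = np.array([1.0, -0.5])
Cd = 0.1 * (0.5 ** np.abs(np.subtract.outer(np.arange(6), np.arange(6))))  # correlated noise
d = G @ m_true + rng.multivariate_normal(np.zeros(6), Cd)
m_hat, Cm = weighted_lsq_with_covariance(G, d, Cd)
print(m_hat, np.sqrt(np.diag(Cm)))   # estimates and 1-sigma uncertainties
```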
2013-01-01
Introduction Fibrinogen plays a key role in hemostasis and is the first coagulation factor to reach critical levels in massively bleeding trauma patients. Consequently, rapid estimation of plasma fibrinogen (FIB) is essential upon emergency room (ER) admission, but is not part of routine coagulation monitoring in many centers. We investigated the predictive ability of the laboratory parameters hemoglobin (Hb) and base excess (BE) upon admission, as well as the Injury Severity Score (ISS), to estimate FIB in major trauma patients. Methods In this retrospective study, major trauma patients (ISS ≥16) with documented FIB analysis upon ER admission were eligible for inclusion. FIB was correlated with Hb, BE and ISS, alone and in combination, using regression analysis. Results A total of 675 patients were enrolled (median ISS 27). FIB upon admission correlated strongly with Hb, BE and ISS. Multiple regression analysis showed that Hb and BE together predicted FIB (adjusted R² = 0.46; ln(FIB) = 3.567 + 0.223·Hb − 0.007·Hb² + 0.044·BE), and predictive strength increased when ISS was included (adjusted R² = 0.51; ln(FIB) = 4.188 + 0.243·Hb − 0.008·Hb² + 0.036·BE − 0.031·ISS + 0.0003·ISS²). Of all major trauma patients admitted with Hb <12 g/dL, 74% had low (<200 mg/dL) FIB and 54% had critical (<150 mg/dL) FIB. Of patients admitted with Hb <10 g/dL, 89% had low FIB and 73% had critical FIB. These values increased to 93% and 89%, respectively, among patients with an admission Hb <8 g/dL. Sixty-six percent of patients with only a weakly negative BE (<−2 mmol/L) showed low FIB. Of patients with BE <−6 mmol/L upon admission, 81% had low FIB and 63% had critical FIB. The corresponding values for BE <−10 mmol/L were 89% and 78%, respectively. Conclusions Upon ER admission, FIB of major trauma patients shows strong correlation with rapidly obtainable, routine laboratory parameters such as Hb and BE. These two parameters might provide an insightful and rapid tool to identify major trauma patients at risk of acquired hypofibrinogenemia. Early calculation of ISS could further increase the ability to predict FIB in these patients. We propose that FIB can be estimated during the initial phase of trauma care based on bedside tests. PMID:23849249
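A minimal sketch applying the two regression equations quoted in the abstract; units are assumed to follow the abstract (Hb in g/dL, BE in mmol/L, FIB in mg/dL), and the example inputs are illustrative.

```python
import math

def fib_from_hb_be(hb, be):
    """Predicted fibrinogen from the Hb/BE model reported in the abstract.
    Assumes Hb in g/dL, BE in mmol/L; returns FIB, presumably in mg/dL."""
    return math.exp(3.567 + 0.223 * hb - 0.007 * hb**2 + 0.044 * be)

def fib_from_hb_be_iss(hb, be, iss):
    """Predicted fibrinogen from the extended model including ISS."""
    return math.exp(4.188 + 0.243 * hb - 0.008 * hb**2 + 0.036 * be
                    - 0.031 * iss + 0.0003 * iss**2)

# Illustrative inputs: admission Hb 9 g/dL, BE -8 mmol/L, ISS 27
print(fib_from_hb_be(9, -8))
print(fib_from_hb_be_iss(9, -8, 27))
```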
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Malcolm; Roy, Timothy; Tessier, Frederic
Purpose: To develop the techniques required to experimentally determine electron stopping powers for application in primary standards and dosimetry protocols. Method and Materials: A large-volume HPGe detector system (>80% efficiency) was commissioned for the measurement of high energy (5–35 MeV) electron beams. As a proof of principle the system was used with a Y-90/Sr-90 radioactive source. Thin plates of absorbing material (<0.1 g cm⁻²) were then placed between the source and detector and the emerging electron spectrum was acquired. The full experimental geometry was modelled using the EGSnrc package to validate the detector design, optimize the experimental setup and compare measured and calculated spectra. Results: The biggest challenge using a beta source was to identify a robust spectral parameter to determine for each measurement. An end-point-fitting routine was used to determine the maximum energy, Emax, of the beta spectrum for each absorber thickness t. The parameter dEmax/dt is related to the electron stopping power and the same routine was applied to both measured and simulated spectra. Although the standard uncertainty in dEmax/dt was of the order of 5%, by taking the ratio of measured and Monte Carlo values for dEmax/dt the uncertainty of the fitting routine was eliminated and the uncertainty was reduced to less than 2%. The agreement between measurement and simulation was within this uncertainty estimate. Conclusion: The investigation confirmed the experimental approach and demonstrated that EGSnrc could accurately determine correction factors that will be required for the final measurement setup in a linac beam.
pureS2HAT: S 2HAT-based Pure E/B Harmonic Transforms
NASA Astrophysics Data System (ADS)
Grain, J.; Stompor, R.; Tristram, M.
2011-10-01
The pS2HAT routines allow efficient, parallel calculation of the so-called 'pure' polarized multipoles. The computed multipole coefficients are equal to the standard pseudo-multipoles calculated for the apodized sky maps of the Stokes parameters Q and U subsequently corrected by so-called counterterms. If the applied apodizations fulfill certain boundary conditions, these multipoles correspond to the pure multipoles. Pure multipoles of one type, i.e., either E or B, are ensured not to contain contributions from the other one, at least to within numerical artifacts. They can therefore be further used in the estimation of the sky power spectra via the pseudo power spectrum technique, which, however, has to correctly account for the applied apodization on the one hand, and for the presence of the counterterms on the other. In addition, the package contains routines permitting calculation of the spin-weighted apodizations, given an input scalar, i.e., spin-0, window. The former are needed to compute the counterterms. It also provides routines for map and window manipulations. The routines are written in C and based on the S2HAT library, which is used to perform all required spherical harmonic transforms as well as all inter-processor communication. They are therefore parallelized using MPI and follow the distributed-memory computational model. The data distribution patterns, pixelization choices, conventions, etc., are all as assumed/allowed by the S2HAT library.
Hyper-X Mach 10 Trajectory Reconstruction
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Martin, John G.; Tartabini, Paul V.; Thornblom, Mark N.
2005-01-01
This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from data obtained during the Mach 10 test flight, which occurred on November 16th 2004.
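A minimal, generic sketch of the state-augmentation idea described above: the filter state is extended with a systematic error term (here a constant accelerometer bias) so that it is estimated alongside the trajectory from redundant measurements. This is a simplified one-dimensional linear Kalman filter for illustration, not the Hyper-X extended Kalman filter formulation; all matrices and noise levels are assumed values.

```python
import numpy as np

# State x = [position, velocity, accelerometer bias]; the bias is a
# systematic error source estimated together with the trajectory.
dt = 0.1
F = np.array([[1, dt, -0.5*dt**2],
              [0,  1, -dt       ],
              [0,  0,  1        ]])      # bias subtracts from measured acceleration
B = np.array([[0.5*dt**2], [dt], [0.0]])
H = np.array([[1.0, 0.0, 0.0]])          # redundant position measurement
Q = np.diag([1e-4, 1e-3, 1e-6])          # process noise (assumed)
R = np.array([[0.5**2]])                 # measurement noise (assumed)

def kf_step(x, P, a_meas, z_pos):
    # Predict using the measured (bias-corrupted) acceleration
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    # Update with the redundant position measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z_pos - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros((3, 1)), np.eye(3)
x, P = kf_step(x, P, a_meas=2.0, z_pos=np.array([[0.02]]))
print(x.ravel())
```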
Carvalho, Alysson R.; Zin, Walter Araujo; Carvalho, Nadja C.; Huhle, Robert; Giannella-Neto, Antonio; Koch, Thea; de Abreu, Marcelo Gama
2014-01-01
Background Measuring esophageal pressure (Pes) using an air-filled balloon catheter (BC) is the common approach to estimate pleural pressure and related parameters. However, Pes is not routinely measured in mechanically ventilated patients, partly due to technical and practical limitations and difficulties. This study aimed at comparing the conventional BC with two alternative methods for Pes measurement, liquid-filled and air-filled catheters without balloon (LFC and AFC), during mechanical ventilation with and without spontaneous breathing activity. Seven female juvenile pigs (32–42 kg) were anesthetized, orotracheally intubated, and a bundle of an AFC, LFC, and BC was inserted in the esophagus. Controlled and assisted mechanical ventilation were applied with positive end-expiratory pressures of 5 and 15 cmH2O, and driving pressures of 10 and 20 cmH2O, in supine and lateral decubitus. Main Results Cardiogenic noise in BC tracings was much larger (up to 25% of total power of Pes signal) than in AFC and LFC (<3%). Lung and chest wall elastance, pressure-time product, inspiratory work of breathing, inspiratory change and end-expiratory value of transpulmonary pressure were estimated. The three catheters allowed detecting similar changes in these parameters between different ventilation settings. However, a non-negligible and significant bias between estimates from BC and those from AFC and LFC was observed in several instances. Conclusions In anesthetized and mechanically ventilated pigs, the three catheters are equivalent when the aim is to detect changes in Pes and related parameters between different conditions, but possibly not when the absolute value of the estimated parameters is of paramount importance. Due to a better signal-to-noise ratio, and considering its practical advantages in terms of easier calibration and simpler acquisition setup, LFC may prove interesting for clinical use. PMID:25247308
An analytical approach to test and design upper limb prosthesis.
Veer, Karan
2015-01-01
In this work, the signal acquisition technique, the analysis models and the design protocols of the prosthesis are discussed. Different methods to estimate the motion intended by the amputee from surface electromyogram (SEMG) signals, based on time- and frequency-domain parameters, are presented. The experiments showed that these techniques can significantly help in discriminating the amputee's motions among four independent activities using a dual-channel set-up. Further, based on the experimental results, the design and working of an artificial arm are covered under two constituents: the electronics design and the mechanical assembly. Finally, the developed hand prosthesis allows amputees to perform daily routine activities easily.
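A minimal sketch of typical time- and frequency-domain SEMG features used for motion discrimination (mean absolute value, RMS, zero crossings, waveform length, median frequency); the feature set, thresholds and test signal are illustrative and are not claimed to be the exact parameters used in the paper.

```python
import numpy as np
from scipy.signal import welch

def semg_features(x, fs, zc_threshold=0.01):
    """Common SEMG features for motion classification (illustrative set)."""
    mav = np.mean(np.abs(x))                          # mean absolute value
    rms = np.sqrt(np.mean(x**2))                      # root mean square
    # zero crossings above a small amplitude threshold
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > zc_threshold))
    wl = np.sum(np.abs(np.diff(x)))                   # waveform length
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]        # median frequency
    return mav, rms, zc, wl, mdf

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = 0.3 * np.sin(2 * np.pi * 80 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(semg_features(x, fs))
```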
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, L; Lambert, C; Nyiri, B
Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre and was adopted as part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.
Anderson, E T; Stoskopf, M K; Morris, J A; Clarke, E O; Harms, C A
2010-12-01
The red lionfish Pterois volitans is important not only in the aquarium trade but also as an invasive species in the western Atlantic. Introduced to waters off the southeastern coast of the United States, red lionfish have rapidly spread along much of the East Coast and throughout Bermuda, the Bahamas, and much of the Caribbean. Hematology and plasma biochemistry were evaluated in red lionfish captured from the offshore waters of North Carolina to establish baseline parameters for individual and population health assessment. Blood smears were evaluated for total and differential white blood cell counts, and routine clinical biochemical profiles were performed on plasma samples. To improve the interpretive value of routine plasma biochemistry profiles, tissue enzyme activities (alkaline phosphatase [ALP], alanine aminotransferase [ALT], aspartate aminotransferase [AST], gamma-glutamyl transferase [GGT], lactate dehydrogenase [LD], and creatine kinase [CK]) were analyzed from liver, kidney, skeletal muscle, gastrointestinal tract, and heart tissues from five fish. The hematological and plasma biochemical values were similar to those of other marine teleosts except that the estimated white blood cell counts were much lower than those routinely found in many species. The tissue enzyme activity findings suggest that plasma LD, CK, and AST offer clinical relevance in the assessment of red lionfish.
Quantitative Determination of Spring Water Quality Parameters via Electronic Tongue
Carbó, Noèlia; López Carrero, Javier; Garcia-Castillo, F. Javier; Olivas, Estela; Folch, Elisa; Alcañiz Fillol, Miguel; Soto, Juan
2017-01-01
The use of a voltammetric electronic tongue for the quantitative analysis of quality parameters in spring water is proposed here. The voltammetric electronic tongue consisted of a set of four noble electrodes (iridium, rhodium, platinum, and gold) housed inside a stainless steel cylinder. These noble metals have a high durability and require little maintenance, features required for the development of future automated equipment. A pulse voltammetry study was conducted in 83 spring water samples to determine concentrations of nitrate (range: 6.9–115 mg/L), sulfate (32–472 mg/L), fluoride (0.08–0.26 mg/L), chloride (17–190 mg/L), and sodium (11–94 mg/L) as well as pH (7.3–7.8). These parameters were also determined by routine analytical methods in the spring water samples. A partial least squares (PLS) analysis was run to obtain a model to predict these parameters. Orthogonal signal correction (OSC) was applied in the preprocessing step. Calibration (67%) and validation (33%) sets were selected randomly. The electronic tongue showed good predictive power to determine the concentrations of nitrate, sulfate, chloride, and sodium as well as pH, but displayed a lower R² and slope in the validation set for fluoride. Nitrate and fluoride concentrations were estimated with errors lower than 15%, whereas chloride, sulfate, and sodium concentrations as well as pH were estimated with errors below 10%. PMID:29295592
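A minimal sketch of the PLS calibration/validation workflow described above using scikit-learn; the orthogonal signal correction preprocessing step is omitted (it has no standard scikit-learn implementation), and the data, number of latent variables and signal length are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic placeholder data: 83 samples, a 200-point voltammetric signal,
# 6 target parameters (nitrate, sulfate, fluoride, chloride, sodium, pH)
rng = np.random.default_rng(0)
X = rng.normal(size=(83, 200))
Y = X[:, :6] @ rng.normal(size=(6, 6)) + 0.1 * rng.normal(size=(83, 6))

# Random 67% calibration / 33% validation split, as in the abstract
X_cal, X_val, Y_cal, Y_val = train_test_split(X, Y, test_size=0.33, random_state=0)

pls = PLSRegression(n_components=8)       # number of latent variables assumed
pls.fit(X_cal, Y_cal)
Y_pred = pls.predict(X_val)
print(r2_score(Y_val, Y_pred, multioutput='raw_values'))   # per-parameter R^2
```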
Whitfield, Kathryn; Kelly, Heath
2002-01-01
OBJECTIVE: To estimate the incidence and the completeness of ascertainment of acute flaccid paralysis (AFP) in Victoria, Australia, in 1998-2000 and to determine its common causes among children aged under 15 years. METHODS: The two-source capture-recapture method was used to estimate the incidence of cases of AFP and to evaluate case ascertainment in the routine surveillance system. The primary and secondary data sources were notifications from this system and inpatient hospital records, respectively. FINDINGS: The routine surveillance system indicated that there were 14 cases and the hospital record review identified 19 additional cases. According to the two-source capture-recapture method, there would have been 40 cases during this period (95% confidence interval (CI) = 29-51), representing an average annual incidence of 1.4 per 100 000 children aged under 15 years (95% CI = 1.1-1.7). Thus case ascertainment based on routine surveillance was estimated to be 35% complete. Guillain-Barré syndrome was the commonest single cause of AFP. CONCLUSIONS: Routine surveillance for AFP in Victoria was insensitive. A literature review indicated that the capture-recapture estimates obtained in this study were plausible. The present results help to define a target notification rate for surveillance in settings where poliomyelitis is not endemic. PMID:12481205
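A minimal sketch of a two-source capture-recapture estimator (Chapman's nearly unbiased variant of Lincoln-Petersen). The overlap between the two sources is not reported in the abstract, so the hospital-review total and overlap used below are hypothetical values chosen only to be roughly consistent with the 14 notified cases, 19 additional cases and an estimate near 40.

```python
def chapman_estimate(n1, n2, m):
    """Two-source capture-recapture estimate of the total number of cases.

    n1 : cases found by source 1 (routine surveillance notifications)
    n2 : cases found by source 2 (hospital record review)
    m  : cases found by both sources (overlap)
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical overlap for illustration only (not reported in the abstract)
print(chapman_estimate(n1=14, n2=29, m=10))   # estimated total AFP cases
```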
Robust versus consistent variance estimators in marginal structural Cox models.
Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris
2018-06-11
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
Fernández-San-Martín, Maria Isabel; Martín-López, Luis Miguel; Masa-Font, Roser; Olona-Tabueña, Noemí; Roman, Yuani; Martin-Royo, Jaume; Oller-Canet, Silvia; González-Tejón, Susana; San-Emeterio, Luisa; Barroso-Garcia, Albert; Viñas-Cabrera, Lidia; Flores-Mateo, Gemma
2014-01-01
Patients with severe mental illness have higher prevalences of cardiovascular risk factors (CRF). The objective is to determine whether interventions to modify lifestyles in these patients reduce anthropometric and analytical parameters related to CRF in comparison to routine clinical practice. Systematic review of controlled clinical trials with lifestyle intervention in Medline, Cochrane Library, Embase, PsycINFO and CINAHL. Outcomes were change in body mass index, waist circumference, cholesterol, triglycerides and blood sugar. Meta-analyses were performed using random effects models to estimate the weighted mean difference. Heterogeneity was assessed using the I² statistic and subgroup analyses. 26 studies were selected. Lifestyle interventions decreased anthropometric and analytical parameters at 3 months of follow-up. At 6 and 12 months, the differences between the intervention and control groups were maintained, although with less precision. More studies with larger samples and long-term follow-up are needed.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that all maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maxima converge best to the GEV distribution, especially if longer records are available. Return level estimates, i.e. the return amount expected to be exceeded, on average, once every T time periods, fall within the confidence interval for T = 50 for the quarterly, half-yearly and yearly maxima.
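A minimal sketch of fitting a stationary GEV by maximum likelihood and computing a T-period return level with SciPy; note that SciPy's shape parameter c corresponds to the negative of the usual GEV shape, and the block maxima below are synthetic stand-ins for the actual return series.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic block maxima standing in for yearly maximum returns (illustrative)
maxima = genextreme.rvs(c=-0.1, loc=0.05, scale=0.02, size=40, random_state=0)

# Maximum likelihood fit of the stationary GEV (SciPy convention: c = -xi)
c_hat, loc_hat, scale_hat = genextreme.fit(maxima)

# T-period return level: the value exceeded with probability 1/T per period
T = 50
return_level = genextreme.isf(1.0 / T, c_hat, loc=loc_hat, scale=scale_hat)
print(c_hat, loc_hat, scale_hat, return_level)
```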
Bettencourt da Silva, Ricardo J N
2016-04-01
The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments of analyte peaks from calibrators with those of sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possible non-normal distributions of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of the parameters defines the true positive rate of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability of the compound identification being correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produced "Extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg⁻¹ and 0.23 mg kg⁻¹, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
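A minimal sketch of the Bayesian combination step described above: true and false positive rates of the identification criteria are combined with a prior probability of presence into a posterior probability that the identification is correct. The prior and the rates below are illustrative, not the values derived in the paper.

```python
def posterior_identification(prior, tpr, fpr):
    """Probability that the compound is present given a positive match,
    combining the true positive rate (tpr) and false positive rate (fpr)
    of the identification criteria with a prior probability of presence."""
    return (tpr * prior) / (tpr * prior + fpr * (1.0 - prior))

# Illustrative values only: 50% prior, criteria with 95% TPR and 0.1% FPR
print(posterior_identification(prior=0.5, tpr=0.95, fpr=0.001))
```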
A mathematical model of diurnal variations in human plasma melatonin levels
NASA Technical Reports Server (NTRS)
Brown, E. N.; Choe, Y.; Shanahan, T. L.; Czeisler, C. A.
1997-01-01
Studies in animals and humans suggest that the diurnal pattern in plasma melatonin levels is due to the hormone's rates of synthesis, circulatory infusion and clearance, circadian control of synthesis onset and offset, environmental lighting conditions, and error in the melatonin immunoassay. A two-dimensional linear differential equation model of the hormone is formulated and is used to analyze plasma melatonin levels in 18 normal healthy male subjects during a constant routine. Recently developed Bayesian statistical procedures are used to incorporate correctly the magnitude of the immunoassay error into the analysis. The estimated parameters [median (range)] were clearance half-life of 23.67 (14.79-59.93) min, synthesis onset time of 2206 (1940-0029), synthesis offset time of 0621 (0246-0817), and maximum N-acetyltransferase activity of 7.17 (2.34-17.93) pmol·L⁻¹·min⁻¹. All were in good agreement with values from previous reports. The difference between synthesis offset time and the phase of the core temperature minimum was 1 h 15 min (-4 h 38 min to 2 h 43 min). The correlation between synthesis onset and the dim light melatonin onset was 0.93. Our model provides a more physiologically plausible estimate of the melatonin synthesis onset time than that given by the dim light melatonin onset and the first reliable means of estimating the phase of synthesis offset. Our analysis shows that the circadian and pharmacokinetic parameters of melatonin can be reliably estimated from a single model.
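A minimal two-compartment sketch in the spirit of the model described above: synthesis is switched on between onset and offset times, transfers to plasma at a first-order rate, and is cleared with first-order kinetics. Only the 23.67 min clearance half-life is taken from the abstract; the onset/offset times, synthesis amplitude and transfer rate are assumed for illustration, and this is not the authors' exact formulation or Bayesian estimation procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (only the clearance half-life is from the abstract)
onset, offset = 22.0, 6.0 + 24.0        # synthesis on from 22:00 to 06:00 (hours)
A_max = 7.0                              # maximum synthesis rate (arbitrary units)
k_syn = np.log(2) / 0.25                 # transfer rate, pineal -> plasma (1/h)
k_clr = np.log(2) / (23.67 / 60.0)       # clearance, ~24 min half-life (1/h)

def rhs(t, y):
    h1, h2 = y                           # h1: pineal compartment, h2: plasma level
    a = A_max if onset <= t <= offset else 0.0
    return [a - k_syn * h1, k_syn * h1 - k_clr * h2]

sol = solve_ivp(rhs, (12.0, 36.0), [0.0, 0.0], max_step=0.05)
print(sol.y[1].max())                    # peak plasma melatonin (arbitrary units)
```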
Gap-filling methods to impute eddy covariance flux data by preserving variance.
NASA Astrophysics Data System (ADS)
Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.
2015-12-01
To represent carbon dynamics, in terms of the exchange of CO2 between the terrestrial ecosystem and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, measurements from EC data are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to have estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied by the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time, which mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used as our base the Michaelis-Menten and Van't Hoff functions. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window with additional micrometeorological predictors (as indicated by a lower root mean square error (RMSE) of the predicted values of NEE). Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporation of the potential driver variables of the parameters traditionally used. Comparisons of the predicted values from these methods and 'traditional' gap-filling methods (using 12 fixed monthly windows) will then be made to show the extent to which variance is preserved. Further, this method will be applied to impute artificially created gaps to analyze whether variance is preserved.
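A minimal sketch of fitting a Michaelis-Menten (rectangular hyperbola) light-response curve to daytime NEE over one moving window and using it to fill a gap; the functional form is a common choice for this purpose, but the exact equations, units and parameter values used by the authors are not given here, so everything below is illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten_nee(ppfd, alpha, p_max, r_d):
    """Daytime NEE as a rectangular-hyperbola light response:
    uptake saturating at p_max, initial slope alpha, plus respiration r_d."""
    return -(alpha * ppfd * p_max) / (alpha * ppfd + p_max) + r_d

# Synthetic 30-day window of half-hourly daytime data (illustrative)
rng = np.random.default_rng(0)
ppfd = rng.uniform(50, 1800, 500)                      # umol photons m^-2 s^-1
nee = michaelis_menten_nee(ppfd, 0.05, 25.0, 3.0) + rng.normal(0, 1.5, 500)

params, cov = curve_fit(michaelis_menten_nee, ppfd, nee, p0=[0.03, 20.0, 2.0])
print(params)                                          # alpha, p_max, r_d

# Gap-filling: predict NEE for a missing half hour from its PPFD
print(michaelis_menten_nee(900.0, *params))
```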
Engineering description of the ascent/descent bet product
NASA Technical Reports Server (NTRS)
Seacord, A. W., II
1986-01-01
The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and the Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.
Earth's magnetic field effect on MUF calculation and consequences for hmF2 trend estimates
NASA Astrophysics Data System (ADS)
Elias, Ana G.; Zossi, Bruno S.; Yiğit, Erdal; Saavedra, Zenon; de Haro Barbas, Blas F.
2017-10-01
Knowledge of the state of the upper atmosphere, and in particular of the ionosphere, is essential in several applications such as systems used in radio frequency communications, satellite positioning and navigation. In general, these systems depend on the state and evolution of the ionosphere. In all applications involving the ionosphere an essential task is to determine the path and modifications of ray propagation through the ionospheric plasma. The ionospheric refractive index and the maximum usable frequency (MUF) that can be received over a given distance are some key parameters that are crucial for such technological applications. However, currently the representation of these parameters is in general simplified, neglecting the effects of Earth's magnetic field. The value of M(3000)F2, related to the MUF that can be received over 3000 km, is routinely scaled from ionograms using a technique which also neglects geomagnetic field effects, assuming a standard simplified propagation model. M(3000)F2 is expected to be affected by a systematic trend linked to the secular variations of Earth's magnetic field. On the other hand, among the upper atmospheric effects expected from increasing greenhouse gas concentrations is the lowering of the F2-layer peak density height, hmF2. This ionospheric parameter is usually estimated using the M(3000)F2 factor, so it would also carry this "systematic trend". In this study, the geomagnetic field effect on MUF estimations is analyzed, as well as its impact on hmF2 long-term trend estimations. We find that M(3000)F2 increases when the geomagnetic field is included in its calculation, and hence hmF2, estimated using existing methods involving no magnetic field for M(3000)F2 scaling, would present a weak but steady trend linked to these variations, which would add to or compensate the decrease of a few kilometers (~2 km per decade) expected from the greenhouse gas effect.
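To illustrate how a systematic bias in M(3000)F2 propagates into hmF2, a minimal sketch using the classical Shimazaki (1955) empirical relation, which is one common way hmF2 is derived from M(3000)F2; the abstract does not state which specific formulation the authors use, and more refined relations with foF2/foE corrections exist.

```python
def hmf2_shimazaki(m3000f2):
    """Classical Shimazaki (1955) relation between the F2-layer peak height
    (km) and the M(3000)F2 propagation factor. Shown for illustration only;
    modern formulations add correction terms (e.g. involving foF2/foE)."""
    return 1490.0 / m3000f2 - 176.0

# A small positive shift in M(3000)F2 maps into a lower apparent hmF2
for m in (3.00, 3.02):
    print(m, round(hmf2_shimazaki(m), 1))
```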
An Open Source modular platform for hydrological model implementation
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Bruland, Oddbjørn
2010-05-01
An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes, from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters etc. ENKI is designed to meet three different levels of involvement in model construction:
• Model application: Running and evaluating a given model. Regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation. Uncertainty analysis directed towards input or parameter uncertainty.
  o Need not: Know the model's composition of subroutines, the internal variables in the model, or the creation of method modules.
• Model analysis: Linking together different process methods, including parallel setup of alternative methods for solving the same task. Investigating the effect of different spatial discretization schemes.
  o Need not: Write or compile computer code, or handle file IO for each module.
• Routine implementation and testing: Implementation of new process-simulating methods/equations, specialised objective functions or quality control routines, and testing of these in an existing framework.
  o Need not: Implement a user or model interface for the new routine, IO handling, administration of model setup and run, calibration and validation routines etc.
Originally developed for Norway's largest hydropower producer, Statkraft, ENKI is now being turned into an Open Source project. At the time of writing, the licence and the project administration are not yet established. Also, it remains to port the application to other compilers and computer platforms. However, we hope that ENKI will prove useful for both academic and operational users.
Mahmmod, Yasser S; Toft, Nils; Katholm, Jørgen; Grønbæk, Carsten; Klaas, Ilka C
2013-11-01
Danish farmers can order a real-time PCR mastitis diagnostic test on routinely taken cow-level samples from milk recordings. Validation of its performance in comparison to conventional mastitis diagnostics under field conditions is essential for efficient control of intramammary infections (IMI) with Staphylococcus aureus (S. aureus). Therefore, the objective of this study was to estimate the sensitivity (Se) and specificity (Sp) of real-time PCR, bacterial culture (BC) and the California mastitis test (CMT) for the diagnosis of naturally occurring IMI with S. aureus in routinely collected milk samples using latent class analysis (LCA) to avoid the assumption of a perfect reference test. Using systematic random sampling, a total of 609 lactating dairy cows were selected from 6 dairy herds with a bulk tank milk PCR cycle threshold (Ct) value ≤39 for S. aureus. At routine milk recordings, automatically obtained cow-level (composite) milk samples were analyzed by PCR and, at the same milking, 2436 quarter milk samples were collected aseptically for BC and CMT. Results showed that 140 cows (23%) were positive for S. aureus IMI by BC, while 170 cows (28%) were positive by PCR. Estimates of Se and Sp for PCR were higher than the test estimates of BC and CMT. Se(CMT) was higher than Se(BC); however, Sp(BC) was higher than Sp(CMT). Se(PCR) was 91%, while Se(BC) was 53% and Se(CMT) was 61%. Sp(PCR) was 99%, while Sp(BC) was 89% and Sp(CMT) was 65%. In conclusion, PCR has a higher performance than the conventional diagnostic tests (BC and CMT), suggesting its usefulness as a routine test for accurate diagnosis of S. aureus IMI in dairy cows at routine milk recordings. The use of LCA provided estimates of the test characteristics of two current diagnostic tests (BC, CMT) and a novel technique (real-time PCR) for diagnosing S. aureus IMI under field conditions at routine milk recordings in Denmark. Copyright © 2013 Elsevier B.V. All rights reserved.
Miller, Alison L.; Song, Ju-Hyun; Sturza, Julie; Lumeng, Julie C.; Rosenblum, Katherine; Kaciroti, Niko; Vazquez, Delia M.
2018-01-01
Biological and social influences both shape emotion regulation. In 380 low-income children, we tested whether biological stress profile (cortisol) moderated the association among positive and negative home environment factors (routines; chaos) and emotion regulation (negative lability; positive regulation). Children (M age = 50.6, SD = 6.4 months) provided saliva samples to assess diurnal cortisol parameters across 3 days. Parents reported on home environment and child emotion regulation. Structural equation modeling was used to test whether cortisol parameters moderated associations between home environment and child emotion regulation. Results showed that home chaos was negatively associated with emotion regulation outcomes; cortisol did not moderate the association. Child cortisol level moderated the routines-emotion regulation association such that lack of routine was most strongly associated with poor emotion regulation among children with lower cortisol output. Findings suggest that underlying child stress biology may shape response to environmental influences. PMID:27594200
Anderman, Evan R.; Hill, Mary Catherine
2001-01-01
Observations of the advective component of contaminant transport in steady-state flow fields can provide important information for the calibration of ground-water flow models. This report documents the Advective-Transport Observation (ADV2) Package, version 2, which allows advective-transport observations to be used in the three-dimensional ground-water flow parameter-estimation model MODFLOW-2000. The ADV2 Package is compatible with some of the features in the Layer-Property Flow and Hydrogeologic-Unit Flow Packages, but is not compatible with the Block-Centered Flow or Generalized Finite-Difference Packages. The particle-tracking routine used in the ADV2 Package duplicates the semi-analytical method of MODPATH, as shown in a sample problem. Particles can be tracked in a forward or backward direction, and effects such as retardation can be simulated through manipulation of the effective-porosity value used to calculate velocity. Particles can be discharged at cells that are considered to be weak sinks, in which the sink applied does not capture all the water flowing into the cell, using one of two criteria: (1) if there is any outflow to a boundary condition such as a well or surface-water feature, or (2) if the outflow exceeds a user specified fraction of the cell budget. Although effective porosity could be included as a parameter in the regression, this capability is not included in this package. The weighted sum-of-squares objective function, which is minimized in the Parameter-Estimation Process, was augmented to include the square of the weighted x-, y-, and z-components of the differences between the simulated and observed advective-front locations at defined times, thereby including the direction of travel as well as the overall travel distance in the calibration process. The sensitivities of the particle movement to the parameters needed to minimize the objective function are calculated for any particle location using the exact sensitivity-equation approach; the equations are derived by taking the partial derivatives of the semi-analytical particle-tracking equation with respect to the parameters. The ADV2 Package is verified by showing that parameter estimation using advective-transport observations produces the true parameter values in a small but complicated test case when exact observations are used. To demonstrate how the ADV2 Package can be used in practice, a field application is presented. In this application, the ADV2 Package is used first in the Sensitivity-Analysis mode of MODFLOW-2000 to calculate measures of the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Cape Cod, Massachusetts. The ADV2 Package is then used in the Parameter-Estimation mode of MODFLOW-2000 to determine best-fit parameter values. It is concluded that, for this problem, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and the use of formal parameter-estimation methods and related techniques produced significant insight into the physical system.
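A minimal sketch of the augmented weighted sum-of-squares objective described above, combining head residuals with the weighted x-, y- and z-components of the advective-front position misfits; all arrays, weights and values are illustrative, not MODFLOW-2000 internals.

```python
import numpy as np

def augmented_objective(h_obs, h_sim, w_h, front_obs, front_sim, w_adv):
    """Weighted sum-of-squares objective augmented with advective-transport
    observations: head residuals plus the x-, y-, z-components of the
    difference between observed and simulated advective-front positions."""
    head_term = np.sum(w_h * (h_obs - h_sim) ** 2)
    adv_term = np.sum(w_adv * (front_obs - front_sim) ** 2)   # shape (n_obs, 3)
    return head_term + adv_term

# Illustrative values only
h_obs = np.array([10.2, 9.8, 11.1]); h_sim = np.array([10.0, 9.9, 11.4])
w_h = np.array([1.0, 1.0, 0.5])
front_obs = np.array([[120.0, 80.0, -5.0]]); front_sim = np.array([[118.0, 83.0, -4.5]])
w_adv = np.array([[0.01, 0.01, 0.04]])
print(augmented_objective(h_obs, h_sim, w_h, front_obs, front_sim, w_adv))
```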
Granger causality for state-space models
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Seth, Anil K.
2015-04-01
Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
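For contrast with the state-space estimator advocated above, a minimal sketch of classical time-domain, AR-based pairwise Granger causality (the log ratio of residual variances from the reduced and full regressions), i.e. the routine approach that the abstract notes is degraded by MA components; the lag order and toy data are illustrative.

```python
import numpy as np

def granger_causality_xy(x, y, p):
    """AR-based Granger causality from x to y with lag order p:
    GC = ln( var(residual of y on its own past) /
             var(residual of y on past of y and x) )."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))

    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        return np.var(Y - design @ beta)

    var_reduced = resid_var(np.hstack([ones, lags_y]))
    var_full = resid_var(np.hstack([ones, lags_y, lags_x]))
    return np.log(var_reduced / var_full)

# Toy example: x drives y with one lag
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()
print(granger_causality_xy(x, y, p=2))
```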
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
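A minimal sketch of the underlying idea, assuming two collocated sensors: rotate the test sensor's horizontal components and find the angle that maximizes correlation with the reference north component using a bounded scalar optimization. This mirrors the correlation-maximization concept only; it is not the authors' damped estimator, overlapping-window scheme, or Java implementation, and the toy data and sign convention are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rotate_to_azimuth(north, east, theta_deg):
    """Project the horizontal components onto the direction theta_deg."""
    th = np.deg2rad(theta_deg)
    return north * np.cos(th) + east * np.sin(th)

def relative_azimuth(ref_north, test_north, test_east):
    """Angle that maximizes correlation between the rotated test sensor
    and the reference north component."""
    def neg_corr(theta):
        rotated = rotate_to_azimuth(test_north, test_east, theta)
        return -np.corrcoef(ref_north, rotated)[0, 1]
    res = minimize_scalar(neg_corr, bounds=(0.0, 360.0), method='bounded')
    return res.x

# Toy example: test sensor misoriented by 12 degrees relative to the reference
rng = np.random.default_rng(0)
ref_n, ref_e = rng.normal(size=(2, 5000))
th = np.deg2rad(12.0)
test_n = ref_n * np.cos(th) - ref_e * np.sin(th) + 0.01 * rng.normal(size=5000)
test_e = ref_n * np.sin(th) + ref_e * np.cos(th) + 0.01 * rng.normal(size=5000)
print(relative_azimuth(ref_n, test_n, test_e))    # approximately 12
```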
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth's gravitational products have found widest possible multidisciplinary applications in Earth Sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in the Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given the Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive global uniformly convergent solutions to the Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are global uniformly convergent, theoretically speaking, they are able to extract smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into the nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the global uniformly convergent solutions to the Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
A Brief Survey of Modern Optimization for Statisticians
Lange, Kenneth; Chi, Eric C.; Zhou, Hua
2014-01-01
Modern computational statistics is turning more and more to high-dimensional optimization to handle the deluge of big data. Once a model is formulated, its parameters can be estimated by optimization. Because model parsimony is important, models routinely include nondifferentiable penalty terms such as the lasso. This sober reality complicates minimization and maximization. Our broad survey stresses a few important principles in algorithm design. Rather than view these principles in isolation, it is more productive to mix and match them. A few well chosen examples illustrate this point. Algorithm derivation is also emphasized, and theory is downplayed, particularly the abstractions of the convex calculus. Thus, our survey should be useful and accessible to a broad audience. PMID:25242858
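One of the design principles such surveys emphasize is handling a nondifferentiable penalty like the lasso by pairing a gradient step on the smooth loss with a proximal (soft-thresholding) step. A minimal sketch of proximal gradient descent (ISTA) for the lasso follows; the problem data are synthetic and the step size choice is a simple illustrative default.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy sparse regression problem (illustrative)
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20); x_true[[2, 7, 15]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.1 * rng.normal(size=100)
print(np.round(lasso_ista(A, b, lam=5.0), 2))
```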
Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata
2014-09-01
Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are solved using a maximum likelihood estimator using coincident collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of the PSF parameters is derived by an iterative routine based on the 4-dimensional Powell's conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5-days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel level counts to restore the original quality of the entire MTSAT-1R record.
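A simplified sketch of the parameter-fitting idea described above: apply a candidate PSF to a reference image standing in for the collocated MTSAT-2 "truth", compare with the degraded image, and minimize the mismatch with SciPy's Powell method. For brevity the PSF here has only two parameters (scattered fraction and Gaussian width) rather than the four-parameter form in the paper, and no image matching or navigation correction is included.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def apply_psf(image, frac, sigma):
    """Forward model: a fraction `frac` of the signal is dispersed by a
    Gaussian of width `sigma` (pixels); the rest passes through unblurred."""
    return (1.0 - frac) * image + frac * gaussian_filter(image, sigma)

def fit_psf(blurred, reference, x0=(0.1, 2.0)):
    """Estimate PSF parameters by minimizing the mean squared difference
    between the PSF-applied reference and the observed blurred image."""
    def cost(params):
        frac, sigma = params
        if not (0.0 <= frac <= 1.0) or sigma <= 0.0:
            return 1e12
        return np.mean((apply_psf(reference, frac, sigma) - blurred) ** 2)
    return minimize(cost, x0, method='Powell').x

# Synthetic test: blur a smooth scene with known parameters, then recover them
rng = np.random.default_rng(0)
scene = gaussian_filter(rng.normal(size=(128, 128)), 3.0)
observed = apply_psf(scene, frac=0.25, sigma=4.0) + 0.001 * rng.normal(size=scene.shape)
print(fit_psf(observed, scene))
```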
AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION
NASA Technical Reports Server (NTRS)
Yam, Y.
1994-01-01
The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sample-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured with a curve fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is typically done in ground structural testing, AU-FREDI identifies only the key transfer function parameters and uncertainty bounds that are necessary for on-line design and tuning of robust controllers. AU-FREDI's system identification algorithms are independent of the JPL-LSCL environment, and can easily be extracted and modified for use with input/output data files. The basic approach of AU-FREDI's system identification algorithms is to non-parametrically identify the sampled data in the frequency domain using either stochastic or sine-dwell input, and then to obtain a parametric model of the transfer function by curve-fitting techniques. A cross-spectral analysis of the output error is used to determine the additive uncertainty in the estimated transfer function. The nominal transfer function estimate and the estimate of the associated additive uncertainty can be used for robust control analysis and design. AU-FREDI's I/O data transfer routines are tailored to the environment of the CALTECH/ JPL-LSCL which included a special operating system to interface with the testbed. Input commands for a particular experiment (wideband, narrowband, or sine-dwell) were computed on-line and then issued to respective actuators by the operating system. The operating system also took measurements through displacement sensors and passed them back to the software for storage and off-line processing. 
In order to make use of AU-FREDI's I/O data transfer routines, a user would need to provide an operating system capable of overseeing such functions between the software and the experimental setup at hand. The program documentation contains information designed to support users in either providing such an operating system or modifying the system identification algorithms for use with input/output data files. It provides a history of the theoretical, algorithmic and software development efforts including operating system requirements and listings of some of the various special purpose subroutines which were developed and optimized for Lahey FORTRAN compilers on IBM PC-AT computers before the subroutines were integrated into the system software. Potential purchasers are encouraged to purchase and review the documentation before purchasing the AU-FREDI software. AU-FREDI is distributed in DEC VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. AU-FREDI was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N
2016-05-01
An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value =0 and b-value =1000 s/mm(2) data while the DKI model was fitted to data comprising b-value =0, 1000 and 3000/2500 s/mm(2) [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton. Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK/RK/AK values, indicating substantial anatomical variability of these discrepancies. In the HCP dataset, the median voxelwise percentage differences across the whole white matter skeleton were (nonlinear least squares algorithm) 14.5% (8.2%-23.1%) for MD, 4.3% (1.4%-17.3%) for FA, -5.2% (-48.7% to -0.8%) for MO, 12.5% (6.4%-21.2%) for RD, and 16.1% (9.9%-25.6%) for AD (all ranges computed as 0.01 and 0.99 quantiles). All differences/trends were consistent between the discovery (HCP) and replication (local) datasets and between estimation algorithms. However, the relationships between such trends, estimated diffusion tensor invariants, and kurtosis estimates were impacted by the choice of fitting routine. Model-dependent differences in the estimation of conventional indexes of MD/FA/MO/RD/AD can be well beyond commonly seen disease-related alterations. While estimating diffusion tensor-derived indexes using the DKI model may be advantageous in terms of mitigating b-value dependence of diffusivity estimates, such estimates should not be referred to as conventional DTI-derived indexes in order to avoid confusion in interpretation as well as multicenter comparisons. In order to assess the potential and advantages of DKI with respect to DTI as well as to standardize diffusion-weighted imaging methods between centers, both conventional DTI-derived indexes and diffusion tensor invariants derived by fitting the non-Gaussian DKI model should be separately estimated and analyzed using the same combination of fitting routines.
Invited review: Genetics and claw health: Opportunities to enhance claw health by genetic selection.
Heringstad, B; Egger-Danner, C; Charfeddine, N; Pryce, J E; Stock, K F; Kofler, J; Sogstad, A M; Holzhauer, M; Fiedler, A; Müller, K; Nielsen, P; Thomas, G; Gengler, N; de Jong, G; Ødegård, C; Malchiodi, F; Miglior, F; Alsaaod, M; Cole, J B
2018-06-01
Routine recording of claw health status at claw trimming of dairy cattle has been established in several countries, providing valuable data for genetic evaluation. In this review, we examine issues related to genetic evaluation of claw health; discuss data sources, trait definitions, and data validation procedures; and present a review of genetic parameters, possible indicator traits, and status of genetic and genomic evaluations for claw disorders. Different sources of data and traits can be used to describe claw health. Severe cases of claw disorders can be identified by veterinary diagnoses. Data from lameness and locomotion scoring, activity information from sensors, and feet and leg conformation traits are used as auxiliary traits. The most reliable and comprehensive information is data from regular hoof trimming. In genetic evaluation, claw disorders are usually defined as binary traits, based on whether or not the claw disorder was present (recorded) at least once during a defined time period. The traits can be specific disorders, composite traits, or overall claw health. Data validation and editing criteria are needed to ensure reliable data at the trimmer, herd, animal, and record levels. Different strategies have been chosen, reflecting differences in herd sizes, data structures, management practices, and recording systems among countries. Heritabilities of the most commonly analyzed claw disorders based on data from routine claw trimming were generally low, with ranges of linear model estimates from 0.01 to 0.14, and threshold model estimates from 0.06 to 0.39. Estimated genetic correlations among claw disorders varied from -0.40 to 0.98. The strongest genetic correlations were found among sole hemorrhage (SH), sole ulcer (SU), and white line disease (WL), and between digital/interdigital dermatitis (DD/ID) and heel horn erosion (HHE). Genetic correlations between DD/ID and HHE on the one hand and SH, SU, or WL on the other hand were, in most cases, low. Although some of the studies were based on relatively few records and the estimated genetic parameters had large standard errors, there was, with some exceptions, consistency among studies. Several studies have evaluated the potential of various data sources for use in breeding. The use of hoof trimming data is recommended for maximization of genetic gain, although auxiliary traits, such as locomotion score and some conformation traits, may be valuable for increasing the reliability of genetic evaluations. Routine genetic evaluation of direct claw health has been implemented in the Netherlands (2010); Denmark, Finland, and Sweden (joint Nordic evaluation; 2011); and Norway (2014), and other countries plan to implement evaluations in the near future. © The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.
Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk
2017-03-01
Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorder and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers in circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring in estimating circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and the extended Kalman filter for estimating daily body temperature and the circadian indicators (mesor, amplitude, and acrophase) was evaluated. For daily body temperature estimation, mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root mean square error of approximately 0.40 °C in both techniques. For mesor estimation, regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on circadian body temperature rhythm.
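For context on the circadian indicators mentioned (mesor, amplitude, acrophase), the sketch below shows a standard single-component cosinor fit by linear least squares; it is illustrative only, not the regression or Kalman filter implementation used in the study, and the synthetic temperature series and acrophase sign convention are assumptions.

```python
# Cosinor fit: estimate mesor, amplitude and acrophase of a 24-h rhythm.
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    mesor = m
    amplitude = np.hypot(a, b)
    acrophase = np.arctan2(-b, a)   # phase of M + A*cos(w*t + phi); conventions differ
    return mesor, amplitude, acrophase

rng = np.random.default_rng(3)
t = np.arange(0.0, 48.0, 0.5)  # hours
temp = 36.8 + 0.35 * np.cos(2 * np.pi * (t - 17.0) / 24.0) + 0.05 * rng.standard_normal(t.size)
print(cosinor_fit(t, temp))
```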
Wang, Ling-jia; Kissler, Hermann J; Wang, Xiaojun; Cochet, Olivia; Krzystyniak, Adam; Misawa, Ryosuke; Golab, Karolina; Tibudan, Martin; Grzanka, Jakub; Savari, Omid; Grose, Randall; Kaufman, Dixon B; Millis, Michael; Witkowski, Piotr
2015-01-01
Pancreatic islet mass, represented by islet equivalent (IEQ), is the most important parameter in decision making for clinical islet transplantation. To obtain IEQ, the sample of islets is routinely counted manually under a microscope and discarded thereafter. Islet purity, another parameter in islet processing, is routinely acquired by estimation only. In this study, we validated our digital image analysis (DIA) system, developed using the Image Pro Plus software, for islet mass and purity assessment. Application of the DIA allows better compliance with current good manufacturing practice (cGMP) standards. Human islet samples were captured as calibrated digital images for the permanent record. Five trained technicians participated in the determination of IEQ and purity by the manual counting method and DIA. IEQ count showed statistically significant correlations between the manual method and DIA in all sample comparisons (r > 0.819 and p < 0.0001). A statistically significant difference in IEQ between the two methods was found only in the high-purity 100 μL sample group (p = 0.029). Regarding purity determination, statistically significant differences between manual assessment and DIA measurement were found in the high- and low-purity 100 μL samples (p < 0.005). In addition, islet particle number (IPN) and the IEQ/IPN ratio did not differ statistically between the manual counting method and DIA. In conclusion, the DIA used in this study is a reliable technique for determination of IEQ and purity. Islet samples preserved as digital images and results produced by DIA can be permanently stored for verification, technical training and islet information exchange between different islet centers. Therefore, DIA complies better with cGMP requirements than the manual counting method. We propose DIA as a quality control tool to supplement the established standard manual method for islet counting and purity estimation. PMID:24806436
Fottrell, Edward; Byass, Peter; Berhane, Yemane
2008-03-25
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
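The error-injection experiment described above can be sketched in a few lines; the variables, error mechanisms, and rates below are hypothetical stand-ins for the Butajira fields, intended only to illustrate how random errors propagate into summary estimates.

```python
# Inject random errors into key fields and compare summary estimates with the original.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 100_000
gold = pd.DataFrame({
    "sex": rng.choice(["m", "f"], n),
    "age": rng.integers(0, 90, n),
    "died": rng.random(n) < 0.01,
})

def corrupt(df, error_rate):
    noisy = df.copy()
    scramblers = {
        "sex": lambda v: np.where(v == "m", "f", "m"),   # flip the recorded sex
        "age": lambda v: rng.integers(0, 90, v.size),    # replace with a random age
        "died": lambda v: ~v,                            # flip the vital status
    }
    for col, scrambler in scramblers.items():
        hit = rng.random(len(noisy)) < error_rate
        noisy.loc[hit, col] = scrambler(noisy.loc[hit, col].to_numpy())
    return noisy

for rate in (0.05, 0.20):
    noisy = corrupt(gold, rate)
    print(f"error rate {rate:.0%}: gold mortality {gold['died'].mean():.4f}, "
          f"noisy mortality {noisy['died'].mean():.4f}")
```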
The statistical analysis of circadian phase and amplitude in constant-routine core-temperature data
NASA Technical Reports Server (NTRS)
Brown, E. N.; Czeisler, C. A.
1992-01-01
Accurate estimation of the phases and amplitude of the endogenous circadian pacemaker from constant-routine core-temperature series is crucial for making inferences about the properties of the human biological clock from data collected under this protocol. This paper presents a set of statistical methods based on a harmonic-regression-plus-correlated-noise model for estimating the phases and the amplitude of the endogenous circadian pacemaker from constant-routine core-temperature data. The methods include a Bayesian Monte Carlo procedure for computing the uncertainty in these circadian functions. We illustrate the techniques with a detailed study of a single subject's core-temperature series and describe their relationship to other statistical methods for circadian data analysis. In our laboratory, these methods have been successfully used to analyze more than 300 constant routines and provide a highly reliable means of extracting phase and amplitude information from core-temperature data.
Doyle, Jennifer L; Berry, Donagh P; Walsh, Siobhan W; Veerkamp, Roel F; Evans, Ross D; Carthy, Tara R
2018-05-04
Linear type traits describing the skeletal, muscular, and functional characteristics of an animal are routinely scored on live animals in both the dairy and beef cattle industries. Previous studies have demonstrated that genetic parameters for certain performance traits may differ between breeds; no study, however, has attempted to determine if differences exist in genetic parameters of linear type traits among breeds or sexes. Therefore, the objective of the present study was to determine if genetic covariance components for linear type traits differed among five contrasting cattle breeds, and to also investigate if these components differed by sex. A total of 18 linear type traits scored on 3,356 Angus (AA), 31,049 Charolais (CH), 3,004 Hereford (HE), 35,159 Limousin (LM), and 8,632 Simmental (SI) were used in the analysis. Data were analyzed using animal linear mixed models which included the fixed effects of sex of the animal (except in the investigation into the presence of sexual dimorphism), age at scoring, parity of the dam, and contemporary group of herd-date of scoring. Differences (P < 0.05) in heritability estimates, between at least two breeds, existed for 13 out of 18 linear type traits. Differences (P < 0.05) also existed between the pairwise within-breed genetic correlations among the linear type traits. Overall, the linear type traits in the continental breeds (i.e., CH, LM, SI) tended to have similar heritability estimates to each other as well as similar genetic correlations among the same pairwise traits, as did the traits in the British breeds (i.e., AA, HE). The correlation between a linear function of breeding values computed conditional on covariance parameters estimated from the CH breed and the corresponding linear function computed conditional on covariance parameters estimated from each of the other breeds was estimated. Replacing the genetic covariance components estimated in the CH breed with those of the LM had the least effect, but the impact was considerable when the genetic covariance components of the AA were used. Genetic correlations between the same linear type traits in the two sexes were all close to unity (≥0.90), suggesting little advantage in considering these as separate traits for males and females. Results from the present study indicate a potential increase in accuracy of estimated breeding value prediction from considering, at least, the British breed traits separately from the continental breed traits.
Micronuclei versus Chromosomal Aberrations Induced by X-Ray in Radiosensitive Mammalian Cells.
Plamadeala, Cristina; Wojcik, Andrzej; Creanga, Dorina
2015-03-01
An experimental study was carried out to compare methods for estimating ionizing radiation genotoxicity in mammalian cell cultures by means of two cytogenetic parameters, with a focus on aberrant cells characterized by multiple chromosomal damages. An in vitro study was carried out on the genotoxicity of low to medium doses of 190 kV X-rays absorbed in Chinese hamster ovary cell cultures. Micronuclei and ten types of chromosomal aberrations were identified with Giemsa staining and optical microscope screening. The first parameter, the relative frequency of micronuclei, led to a higher linear correlation coefficient than the second, the relative frequency of chromosomal aberrations. However, the latter parameter, estimated as the sum of all chromosomal aberrations, appeared to be more sensitive to increasing radiation dose in the studied range, from 0 to 3 Gy. The number of micronuclei occurring simultaneously in a single cell was not higher than 3, while the number of chromosomal aberrations observed in the same cell reached the value of 5 for doses over 1 Gy. Polynomial dose-response curves were found for cells with Ni micronuclei (i = 1-3), while non-monotonic curves were found through detailed analysis of aberrant cells with Ni chromosomal changes (i = 1-5), in concordance with in vitro studies from the literature. The investigation could be important for public health issues, where micronucleus screening is routinely applied, but also for research purposes, where various chromosomal aberrations could be of particular interest.
Micronuclei versus Chromosomal Aberrations Induced by X-Ray in Radiosensitive Mammalian Cells
Plamadeala, Cristina; Wojcik, Andrzej; Creanga, Dorina
2015-01-01
Background: An experimental study was carried out to compare methods for estimating ionizing radiation genotoxicity in mammalian cell cultures by means of two cytogenetic parameters, with a focus on aberrant cells characterized by multiple chromosomal damages. Methods: An in vitro study was carried out on the genotoxicity of low to medium doses of 190 kV X-rays absorbed in Chinese hamster ovary cell cultures. Micronuclei and ten types of chromosomal aberrations were identified with Giemsa staining and optical microscope screening. Results: The first parameter, the relative frequency of micronuclei, led to a higher linear correlation coefficient than the second, the relative frequency of chromosomal aberrations. However, the latter parameter, estimated as the sum of all chromosomal aberrations, appeared to be more sensitive to increasing radiation dose in the studied range, from 0 to 3 Gy. The number of micronuclei occurring simultaneously in a single cell was not higher than 3, while the number of chromosomal aberrations observed in the same cell reached the value of 5 for doses over 1 Gy. Conclusion: Polynomial dose-response curves were found for cells with Ni micronuclei (i = 1-3), while non-monotonic curves were found through detailed analysis of aberrant cells with Ni chromosomal changes (i = 1-5), in concordance with in vitro studies from the literature. The investigation could be important for public health issues, where micronucleus screening is routinely applied, but also for research purposes, where various chromosomal aberrations could be of particular interest. PMID:25905075
Transoptr — A second order beam transport design code with optimization and constraints
NASA Astrophysics Data System (ADS)
Heighway, E. A.; Hutcheon, R. M.
1981-08-01
This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.
Geist, Barbara K; Baltzer, Pascal; Fueger, Barbara; Hamboeck, Martina; Nakuz, Thomas; Papp, Laszlo; Rasul, Sazan; Sundar, Lalith Kumar Shiyam; Hacker, Marcus; Staudenherz, Anton
2018-05-09
A method was developed to assess the kidney parameters glomerular filtration rate (GFR) and effective renal plasma flow (ERPF) from 2-deoxy-2-[18F]fluoro-D-glucose (FDG) concentration behavior in the kidneys, measured with positron emission tomography (PET) scans. Twenty-four healthy adult subjects prospectively underwent dynamic simultaneous PET/magnetic resonance imaging (MRI) examination. Time activity curves (TACs) were obtained from the dynamic PET series, with the guidance of MR information. Patlak analysis was performed to determine the GFR, and based on integrals, ERPF was calculated. Results were compared to intra-individually obtained reference values determined from venous blood samples. Total kidney GFR and ERPF as estimated by dynamic PET/MRI were highly correlated with their reference values (r = 0.88/p < 0.0001 and r = 0.82/p < 0.0001, respectively), with no significant difference between their means. The study is a proof of concept that GFR and ERPF can be assessed with dynamic FDG PET/MRI scans in healthy kidneys. This has advantages for patients getting a routine scan, where additional examinations for kidney function estimation could be avoided. Further studies are required for transferring this PET/MRI method to PET/CT applications.
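As background on the Patlak analysis mentioned, the sketch below fits the late-time linear portion of a Patlak plot built from toy kidney and plasma time-activity curves; the curves, the start time of the linear fit, and the omission of any conversion from the uptake constant Ki to GFR are all simplifying assumptions.

```python
# Patlak plot: slope of C_tissue/C_plasma versus int(C_plasma dt)/C_plasma gives Ki.
import numpy as np

def patlak_slope(t, c_tissue, c_plasma, t_star=10.0):
    """Linear fit of the Patlak plot for t >= t_star (minutes)."""
    cum_input = np.array([np.trapz(c_plasma[: i + 1], t[: i + 1]) for i in range(len(t))])
    x = cum_input / c_plasma
    y = c_tissue / c_plasma
    late = t >= t_star
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope, intercept

# toy curves: plasma input decays, tissue accumulates irreversibly plus a blood fraction
t = np.linspace(0.5, 45.0, 90)                    # minutes
c_p = 100.0 * np.exp(-0.08 * t) + 5.0
ki_true = 0.02                                    # 1/min
cum = np.array([np.trapz(c_p[: i + 1], t[: i + 1]) for i in range(len(t))])
c_t = ki_true * cum + 0.3 * c_p
slope, _ = patlak_slope(t, c_t, c_p)
print("estimated Ki [1/min]:", round(slope, 4))
```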
Toffanin, V; Penasa, M; McParland, S; Berry, D P; Cassandro, M; De Marchi, M
2015-05-01
The aim of the present study was to estimate genetic parameters for calcium (Ca), phosphorus (P) and titratable acidity (TA) in bovine milk predicted by mid-IR spectroscopy (MIRS). Data consisted of 2458 Italian Holstein-Friesian cows sampled once in 220 farms. Information per sample on protein and fat percentage, pH and somatic cell count, as well as test-day milk yield, was also available. (Co)variance components were estimated using univariate and bivariate animal linear mixed models. Fixed effects considered in the analyses were herd of sampling, parity, lactation stage and a two-way interaction between parity and lactation stage; an additive genetic and residual term were included in the models as random effects. Estimates of heritability for Ca, P and TA were 0.10, 0.12 and 0.26, respectively. Positive moderate to strong phenotypic correlations (0.33 to 0.82) existed between Ca, P and TA, whereas phenotypic weak to moderate correlations (0.00 to 0.45) existed between these traits with both milk quality and yield. Moderate to strong genetic correlations (0.28 to 0.92) existed between Ca, P and TA, and between these predicted traits with both fat and protein percentage (0.35 to 0.91). The existence of heritable genetic variation for Ca, P and TA, coupled with the potential to predict these components for routine cow milk testing, imply that genetic gain in these traits is indeed possible.
NASA Astrophysics Data System (ADS)
Tobin, K. J.; Bennett, M. E.
2017-12-01
Over the last decade, autocalibration routines have become commonplace in watershed modeling. This approach is most often used to simulate streamflow at a basin's outlet. In alpine settings, spring and early-summer snowmelt is by far the dominant signal in this system. Therefore, there is great potential for a modeled watershed to underperform during other times of the year. This tendency has been noted in many prior studies. In this work, the Soil and Water Assessment Tool (SWAT) model was autocalibrated with the SUFI-2 routine. Two mountainous watersheds from Idaho and Utah were examined. The basins were calibrated against monthly satellite-based evapotranspiration (ET) from the MODIS 16A2 product; the gridded MODIS product is well suited to deriving an estimate of ET on a subbasin basis. Soil moisture data were derived by extrapolation from in situ sites of the SNOwpack TELemetry (SNOTEL) network. Previous work has indicated that in situ soil moisture can be used to derive an estimate at a significant distance (>30 km) away from the in situ site. Optimized ET and soil moisture parameter values were then applied to streamflow simulations. Preliminary results indicate improved streamflow performance during both the calibration (2005-2011) and validation (2012-2014) periods. Streamflow performance was evaluated not only with standard objective metrics (bias and the Nash-Sutcliffe efficiency) but also in terms of baseflow accuracy, which improved, demonstrating the utility of this approach in improving watershed modeling fidelity outside the main snowmelt season.
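The standard objective metrics mentioned can be computed in a few lines; the sketch below evaluates Nash-Sutcliffe efficiency and percent bias on made-up observed and simulated flows and is not tied to the SWAT or SUFI-2 implementations.

```python
# Nash-Sutcliffe efficiency (NSE) and percent bias for simulated vs observed streamflow.
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0, 9.0])   # illustrative discharges
sim = np.array([10.0, 28.0, 60.0, 38.0, 20.0, 8.0])
print(f"NSE = {nse(sim, obs):.2f}, bias = {pbias(sim, obs):+.1f}%")
```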
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the Wsa-Enlil+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; using the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
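A simple sketch of the kind of verification statistics quoted above (hit-within-range counts, mean absolute error and RMSE of the ensemble-mean arrival time); the event numbers are invented for illustration.

```python
# Verification of ensemble arrival-time predictions against observed arrivals.
import numpy as np

events = [  # (ensemble arrival-time predictions, observed arrival), hours -- made-up numbers
    (np.array([40.0, 44.0, 47.0, 52.0]), 45.0),
    (np.array([60.0, 63.0, 66.0]), 70.0),
    (np.array([30.0, 31.0, 35.0, 38.0, 41.0]), 33.0),
]

errors, hits_in_range = [], 0
for ensemble, observed in events:
    errors.append(ensemble.mean() - observed)                       # ensemble-mean error
    hits_in_range += ensemble.min() <= observed <= ensemble.max()   # observed inside spread?

errors = np.array(errors)
print("mean |error| [h]:", np.abs(errors).mean())
print("RMSE [h]:", np.sqrt((errors ** 2).mean()))
print("observed within ensemble range:", hits_in_range, "of", len(events))
```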
NASA Astrophysics Data System (ADS)
Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.
2008-12-01
Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as for the calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques, with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the northeastern China/Korean Peninsula region, where the average plane-layered structure is well known and relatively laterally homogeneous. Secondly, we will consider the Middle East, where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of the seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock returns data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction of the S&P500 index data over the usual normal model. PMID:20730043
Classical Molecular Dynamics with Mobile Protons.
Lazaridis, Themis; Hummer, Gerhard
2017-11-27
An important limitation of standard classical molecular dynamics simulations is the inability to make or break chemical bonds. This restricts severely our ability to study processes that involve even the simplest of chemical reactions, the transfer of a proton. Existing approaches for allowing proton transfer in the context of classical mechanics are rather cumbersome and have not achieved widespread use and routine status. Here we reconsider the combination of molecular dynamics with periodic stochastic proton hops. To ensure computational efficiency, we propose a non-Boltzmann acceptance criterion that is heuristically adjusted to maintain the correct or desirable thermodynamic equilibria between different protonation states and proton transfer rates. Parameters are proposed for hydronium, Asp, Glu, and His. The algorithm is implemented in the program CHARMM and tested on proton diffusion in bulk water and carbon nanotubes and on proton conductance in the gramicidin A channel. Using hopping parameters determined from proton diffusion in bulk water, the model reproduces the enhanced proton diffusivity in carbon nanotubes and gives a reasonable estimate of the proton conductance in gramicidin A.
Regional estimation of response routine parameters
NASA Astrophysics Data System (ADS)
Tøfte, Lena S.
2015-04-01
Reducing the number of calibration parameters is of considerable advantage when area-distributed hydrological models are to be calibrated, both because of equifinality and over-parameterization of the model in general, and because it makes the calibration process more efficient. A simple non-threshold response model for drainage in natural catchments, based among others on Kirchner's approach (WRR, 2009), is implemented in the gridded hydrological model in the ENKI framework. This response model takes only the hydrograph into account; it has one state and two parameters, and is adapted to catchments that are dominated by terrain drainage. In former analyses of natural discharge series from a large number of catchments in different regions of Norway, we found that these response model parameters can be calculated from known catchment characteristics, such as catchment area and lake percentage, available from maps or databases, meaning that the parameters can easily be obtained also for ungauged catchments. In the work presented here, from the EU project COMPLEX, a large region in mid-Norway containing 27 simulated catchments of different sizes and characteristics is calibrated. Results from two different calibration strategies are compared: 1) removing the response parameters from the calibration by calculating them in advance, based on the results from our former studies, and 2) including the response parameters in the calibration, both as maps with different values for each catchment, and as a constant value for the whole region. The resulting simulation performances are compared and discussed.
ENKI - An Open Source environmental modelling platform
NASA Astrophysics Data System (ADS)
Kolberg, S.; Bruland, O.
2012-04-01
The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user specified subroutines into a complete simulation model, and provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines, separately built as dynamic-link libraries (dlls). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, time series exist for input variables, states are initialised, GIS data sets exist for static map data, manually or automatically calibrated values for parameters etc. By using function calls and memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25000 grid cells, 3-4 time steps simulated per second should be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of API and user interface, making it possible to run ENKI from GIS programs and other software environments. ENKI currently compiles under Windows and Visual Studio only, but ambitions exist to remove the platform and compiler dependencies.
NASA Astrophysics Data System (ADS)
Cook, Grant O.; Sorensen, Carl D.
2013-12-01
Partial transient liquid-phase (PTLP) bonding is currently an esoteric joining process with limited applications. However, it has preferable advantages compared with typical joining techniques and is the best joining technique for certain applications. Specifically, it can bond hard-to-join materials as well as dissimilar material types, and bonding is performed at comparatively low temperatures. Part of the difficulty in applying PTLP bonding is finding suitable interlayer combinations (ICs). A novel interlayer selection procedure has been developed to facilitate the identification of ICs that will create successful PTLP bonds and is explained in a companion article. An integral part of the selection procedure is a filtering routine that identifies all possible ICs for a given application. This routine utilizes a set of customizable parameters that are based on key characteristics of PTLP bonding. These parameters include important design considerations such as bonding temperature, target remelting temperature, bond solid type, and interlayer thicknesses. The output from this routine provides a detailed view of each candidate IC along with a broad view of the entire candidate set, greatly facilitating the selection of ideal ICs. This routine provides a new perspective on the PTLP bonding process. In addition, the use of this routine, by way of the accompanying selection procedure, will expand PTLP bonding as a viable joining process.
Assessment of intake according to IDEAS guidance: case study.
Bitar, A; Maghrabi, M
2018-04-01
Estimation of radiation intake and internal dose can be carried out through direct or indirect measurements during a routine or special monitoring program. In the case of iodine-131 contamination, direct measurements, such as thyroid counting, are a fast and efficient way to obtain results. Generally, the calculation method uses suitable values for known parameters, whereas default values are used if no information is available. However, to avoid significant discrepancies, the IDEAS guidelines set out a comprehensive method for evaluating monitoring data from one or several types of monitoring. This article deals with a case of internal contamination of a worker who inhaled aerosols containing 131I during the production of radiopharmaceuticals. The interpretation of the data obtained was performed by following the IDEAS guidelines.
Evaluation of the Emergency Response Dose Assessment System (ERDAS)
NASA Technical Reports Server (NTRS)
Evans, Randolph J.; Lambert, Winifred C.; Manobianco, John T.; Taylor, Gregory E.; Wheeler, Mark M.; Yersavich, Ann M.
1996-01-01
The Emergency Response Dose Assessment System (ERDAS) is a prototype software and hardware system configured to produce routine mesoscale meteorological forecasts and enhanced dispersion estimates on an operational basis for the Kennedy Space Center (KSC)/Cape Canaveral Air Station (CCAS) region. ERDAS provides emergency response guidance to operations at KSC/CCAS in the case of an accidental hazardous material release or an aborted vehicle launch. This report describes the evaluation of ERDAS, including: evaluation of sea breeze predictions, comparison of launch plume location and concentration predictions, a case study of a toxic release, evaluation of model sensitivity to varying input parameters, evaluation of the user interface, assessment of ERDAS's operational capabilities, and a comparison of ERDAS models to the Ocean Breeze/Dry Gulch diffusion model.
Indonesian dengue burden estimates: review of evidence by an expert panel.
Wahyono, T Y M; Nealon, J; Beucher, S; Prayitno, A; Moureau, A; Nawawi, S; Thabrany, H; Nadjib, M
2017-08-01
Routine, passive surveillance systems tend to underestimate the burden of communicable diseases such as dengue. When empirical methods are unavailable, complementary opinion-based or extrapolative methods have been employed. Here, an expert Delphi panel estimated the proportion of dengue captured by the Indonesian surveillance system, and associated health system parameters. Following presentation of medical and epidemiological data and subsequent discussions, the panel made iterative estimates from which expansion factors (EF), the ratio of total to reported cases, were calculated. Panelists estimated that of all symptomatic Indonesian dengue episodes, 57·8% (95% confidence interval (CI) 46·6-59·8) enter healthcare facilities to seek treatment; 39·3% (95% CI 32·8-42·0) are diagnosed as dengue; and 20·3% (95% CI 16·1-24·3) are subsequently reported in the surveillance system. They estimated that most hospitalizations occur in the public sector, while ~55% of ambulatory episodes are seen privately. These estimates gave an overall EF of 5·00, a hospitalized EF of 1·66, and an ambulatory EF of 34·01, which, when combined with passive surveillance data, equate to an annual average (2006-2015) of 612 005 dengue cases and 183 297 hospitalizations. These estimates are lower than those published elsewhere, perhaps due to case definitions, local clinical perceptions and treatment-seeking behavior. These findings complement global burden estimates, support health economic analyses, and can be used to inform decision-making.
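The expansion-factor arithmetic implied by these panel estimates is straightforward; the sketch below reproduces the overall EF as the reciprocal of the reporting fraction (a back-of-envelope check, not the panel's exact calculation, which also partitioned hospitalized and ambulatory episodes).

```python
# Overall expansion factor from the panel's stage-by-stage proportions.
p_seek_care = 0.578   # symptomatic episodes entering healthcare facilities (context only)
p_diagnosed = 0.393   # symptomatic episodes diagnosed as dengue (context only)
p_reported  = 0.203   # symptomatic episodes captured by the surveillance system

overall_ef = 1.0 / p_reported                     # total cases per reported case
implied_reported = 612_005 / overall_ef           # rough consistency check vs quoted totals

print(f"overall expansion factor ~ {overall_ef:.2f}")          # close to the 5.00 quoted
print(f"implied reported cases per year ~ {implied_reported:,.0f}")
```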
NASA Astrophysics Data System (ADS)
Tumanov, Sergiu
A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was treated as an integer-valued quantity, which is acceptable if one properly chooses the unit of measurement (in this case μg m-3) and if account is taken of the limited accuracy of the measurements. As the results of the test were satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters (e.g. quantiles and the cumulative probabilities of threshold concentrations being exceeded) at the grid points of a network covering the area of interest. This requires only accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
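As an illustration of how such a discrete law could be used operationally, the sketch below parameterizes a negative binomial (Eggenberger-Polya) distribution from an assumed modelled mean and variance at one grid point and evaluates an exceedance probability and an upper quantile; the numbers and the threshold are placeholders.

```python
# Negative binomial (Eggenberger-Polya) law from a mean and variance, then
# exceedance probability and quantile for an integer concentration in ug/m3.
from scipy.stats import nbinom

mean, var = 38.0, 260.0        # illustrative modelled mean/variance at one grid point
p = mean / var                 # scipy parameterization: mean = n(1-p)/p, var = n(1-p)/p^2
n = mean * p / (1.0 - p)       # equivalently mean**2 / (var - mean)

threshold = 125                # placeholder hourly SO2 limit value, ug/m3
print("P(concentration > threshold):", nbinom.sf(threshold, n, p))
print("98th percentile:", nbinom.ppf(0.98, n, p))
```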
The physical and biological basis of quantitative parameters derived from diffusion MRI
2012-01-01
Diffusion magnetic resonance imaging is a quantitative imaging technique that measures the underlying molecular diffusion of protons. Diffusion-weighted imaging (DWI) quantifies the apparent diffusion coefficient (ADC) which was first used to detect early ischemic stroke. However this does not take account of the directional dependence of diffusion seen in biological systems (anisotropy). Diffusion tensor imaging (DTI) provides a mathematical model of diffusion anisotropy and is widely used. Parameters, including fractional anisotropy (FA), mean diffusivity (MD), parallel and perpendicular diffusivity can be derived to provide sensitive, but non-specific, measures of altered tissue structure. They are typically assessed in clinical studies by voxel-based or region-of-interest based analyses. The increasing recognition of the limitations of the diffusion tensor model has led to more complex multi-compartment models such as CHARMED, AxCaliber or NODDI being developed to estimate microstructural parameters including axonal diameter, axonal density and fiber orientations. However these are not yet in routine clinical use due to lengthy acquisition times. In this review, I discuss how molecular diffusion may be measured using diffusion MRI, the biological and physical bases for the parameters derived from DWI and DTI, how these are used in clinical studies and the prospect of more complex tissue models providing helpful micro-structural information. PMID:23289085
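For readers unfamiliar with how the tensor-derived indexes mentioned above are obtained, the sketch below computes MD, FA, and axial and radial diffusivity from the eigenvalues of an invented diffusion tensor; it is illustrative only and not tied to any particular fitting routine.

```python
# MD, FA, AD and RD from the eigenvalues of a (made-up) diffusion tensor.
import numpy as np

D = np.array([[1.7e-3, 0.1e-3, 0.0],
              [0.1e-3, 0.4e-3, 0.0],
              [0.0,    0.0,    0.3e-3]])   # symmetric tensor, mm^2/s

evals = np.linalg.eigvalsh(D)
md = evals.mean()                                                    # mean diffusivity
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))  # fractional anisotropy
ad = evals.max()                                                     # axial (parallel)
rd = np.sort(evals)[:2].mean()                                       # radial (perpendicular)
print(f"MD={md:.2e} mm^2/s, FA={fa:.2f}, AD={ad:.2e}, RD={rd:.2e}")
```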
Néant, Nadège; Gattacceca, Florence; Lê, Minh Patrick; Yazdanpanah, Yazdan; Dhiver, Catherine; Bregigeon, Sylvie; Mokhtari, Saadia; Peytavin, Gilles; Tamalet, Catherine; Descamps, Diane; Lacarelle, Bruno; Solas, Caroline
2018-04-01
Rilpivirine, prescribed for the treatment of HIV infection, exhibits substantial inter-individual pharmacokinetic variability. We aimed to determine population pharmacokinetic parameters of rilpivirine in adult HIV-infected patients and to quantify their inter-individual variability. We conducted a multicenter, retrospective, observational study in patients treated with the once-daily rilpivirine/tenofovir disoproxil fumarate/emtricitabine regimen. As part of routine therapeutic drug monitoring, rilpivirine concentrations were measured by UPLC-MS/MS. Population pharmacokinetic analysis was performed using the NONMEM software. Once the compartmental and random-effects models were selected, covariates were tested to explain the inter-individual variability in pharmacokinetic parameters. The final model qualification was performed by both statistical and graphical methods. We included 379 patients, resulting in the analysis of 779 rilpivirine plasma concentrations. Of the observed individual trough plasma concentrations, 24.4% were below the 50 ng/ml minimal effective concentration. A one-compartment model with first-order absorption best described the data. The estimated fixed effects for apparent plasma clearance and distribution volume were 9 L/h and 321 L, respectively, resulting in a half-life of 25.2 h. The common inter-individual variability for both parameters was 34.1% at both the first and the second occasions. The inter-individual variability of clearance was 30.3%. Our results showed a terminal half-life lower than previously reported and a high proportion of patients with suboptimal rilpivirine concentrations, which highlights the value of therapeutic drug monitoring in clinical practice. The population analysis, performed with data from "real-life" conditions, resulted in reliable post hoc estimates of pharmacokinetic parameters, suitable for individualization of the dosing regimen.
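A minimal sketch of a one-compartment model with first-order absorption using the population values quoted above (CL/F = 9 L/h, V/F = 321 L); the absorption rate constant, the dose, and the single-dose (no accumulation) assumption are illustrative, so the printed trough is not a prediction of the study model.

```python
# One-compartment, first-order absorption concentration-time profile.
import numpy as np

def conc_oral_1cmt(t_h, dose_mg, cl=9.0, v=321.0, ka=0.6):
    """cl, v from the abstract (apparent values); ka [1/h] is an assumed value."""
    ke = cl / v
    return (dose_mg * ka) / (v * (ka - ke)) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

t = np.linspace(0.0, 24.0, 25)                     # hours after a single 25 mg dose
c_mg_per_l = conc_oral_1cmt(t, dose_mg=25.0)
print("predicted 24-h concentration (ng/mL):", round(c_mg_per_l[-1] * 1000, 1))
```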
A Method for Improving Hotspot Directional Signatures in BRDF Models Used for MODIS
NASA Technical Reports Server (NTRS)
Jiao, Ziti; Schaaf, Crystal B.; Dong, Yadong; Roman, Miguel; Hill, Michael J.; Chen, Jing M.; Wang, Zhuosen; Zhang, Hu; Saenz, Edward; Poudyal, Rajesh;
2016-01-01
The semi-empirical, kernel-driven, linear RossThick-LiSparseReciprocal (RTLSR) Bidirectional Reflectance Distribution Function (BRDF) model is used to generate the routine MODIS BRDF/Albedo product due to its global applicability and underlying physics. A challenge for this model in regard to surface reflectance anisotropy effects comes from its underestimation of the directional reflectance signatures near the Sun illumination direction, also known as the hotspot effect. In this study, a method has been developed for improving the ability of the RTLSR model to simulate the magnitude and width of the hotspot effect. The method corrects the volumetric scattering component of the RTLSR model using an exponential approximation of a physical hotspot kernel, which recreates the hotspot magnitude and width using two free parameters (C1 and C2, respectively). The approach allows one to reconstruct, with reasonable accuracy, the hotspot effect by adjusting or using prior values of these two hotspot variables. Our results demonstrate that: (1) significant improvements in capturing the hotspot effect can be made with this method by using the inverted hotspot parameters; (2) the reciprocal nature allows this method to be more adaptive for simulating the hotspot height and width with high accuracy, especially in cases where hotspot signatures are available; and (3) while the new approach is consistent with the heritage RTLSR model inversion used to estimate intrinsic narrowband and broadband albedos, it presents some differences for vegetation clumping index (CI) retrievals. With the hotspot-related model parameters determined a priori, this method offers improved performance for various ecological remote sensing applications, including the estimation of canopy structure parameters.
Reference dosimetry study for a 3 MeV electron beam accelerator in Malaysia
NASA Astrophysics Data System (ADS)
Ali, Noriah Mod; Sunaga, Hiromi; Tanaka, Ryuichi
1995-09-01
An effective quality assurance programme has been initiated for the use of electron beams with energies up to 3 MeV. The key element of the programme is the establishment of a relationship between the standardised beam and the routine technique employed to verify the beam parameters. A total absorbing calorimeter was adopted as a suitable reference system which, when used in combination with an electron current density meter (ECD), makes it possible to determine the mean energy of electrons with energies between 1 and 3 MeV. An appropriate method of transferring the standard parameter is studied, and the work expected to optimise the accuracy attainable with routine checks of the irradiation parameters is presented.
Excel-Based Tool for Pharmacokinetically Guided Dose Adjustment of Paclitaxel.
Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich
2015-12-01
Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor of paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values have been estimated based on a published PK model of paclitaxel using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use alternative. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute the differential equations for the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations were simulated for 250 patients receiving 1 cycle of paclitaxel chemotherapy. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3% and an imprecision of <12%. The median relative deviation of the estimated Tc > 0.05 μmol/L values between the two programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision, and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. Its ease of use facilitates therapeutic drug monitoring in clinical routine.
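A small sketch of the threshold-time calculation itself, independent of the Excel/NONMEM tooling: given a concentration-time profile, interpolate between sampling points and sum the time spent above 0.05 μmol/L; the profile values below are invented, not model predictions.

```python
# Time above a concentration threshold by linear interpolation of the profile.
import numpy as np

def time_above_threshold(t_h, conc, threshold=0.05):
    t_fine = np.linspace(t_h[0], t_h[-1], 20_000)
    c_fine = np.interp(t_fine, t_h, conc)
    dt = t_fine[1] - t_fine[0]
    return np.sum(c_fine > threshold) * dt

t = np.array([0.0, 1.0, 3.0, 6.0, 12.0, 24.0, 48.0, 72.0])           # hours
c = np.array([0.0, 4.2, 1.1, 0.32, 0.11, 0.055, 0.02, 0.008])        # umol/L, illustrative
print("Tc>0.05 ~", round(time_above_threshold(t, c), 1), "h")
```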
Sanchez, M P; Ferrand, M; Gelé, M; Pourchet, D; Miranda, G; Martin, P; Brochard, M; Boichard, D
2017-08-01
Genetic parameters for the major milk proteins were estimated in the 3 main French dairy cattle breeds (i.e. Montbéliarde, Normande, and Holstein) as part of the PhénoFinlait program. The 6 major milk protein contents as well as the total protein content (PC) were estimated from mid-infrared spectrometry on 133,592 test-day milk samples from 20,434 cows in first lactation. Lactation means, expressed as a percentage of milk (protein contents) or of protein (protein fractions), were analyzed with an animal mixed model including fixed environmental effects (herd, year × month of calving, and spectrometer) and a random genetic effect. Genetic parameter estimates were very consistent across breeds. Heritability estimates (h2) were generally higher for protein fractions than for protein contents. They were moderate to high for αS1-casein, αS2-casein, β-casein, κ-casein, and α-lactalbumin (0.25 < h2 < 0.72). In each breed, β-lactoglobulin was the most heritable trait (0.61 < h2 < 0.86). Genetic correlations (rg) varied depending on how the percentage was expressed. The PC was strongly positively correlated with protein contents but almost genetically independent from protein fractions. Protein fractions were generally in opposition, except between κ-casein and α-lactalbumin (0.39 < rg < 0.46) and κ-casein and αS2-casein (0.36 < rg < 0.49). Between protein contents, rg estimates were positive, with highest values found between caseins (0.83 < rg < 0.98). In the 3 breeds, β-lactoglobulin was negatively correlated with caseins (-0.75 < rg < -0.08), in particular with κ-casein (-0.75 < rg < -0.55). These results, obtained from a large panel of cows of the 3 main French dairy cattle breeds, show that routinely collected mid-infrared spectra could be used to modify milk protein composition by selection. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Schwenke, Michael; Strehlow, Jan; Demedts, Daniel; Haase, Sabrina; Barrios Romero, Diego; Rothlübbers, Sven; von Dresky, Caroline; Zidowitz, Stephan; Georgii, Joachim; Mihcin, Senay; Bezzi, Mario; Tanner, Christine; Sat, Giora; Levy, Yoav; Jenne, Jürgen; Günther, Matthias; Melzer, Andreas; Preusser, Tobias
2017-01-01
Focused ultrasound (FUS) is entering clinical routine as a treatment option. Currently, no clinically available FUS treatment system features automated respiratory motion compensation. The required quality standards make developing such a system challenging. A novel FUS treatment system with motion compensation is described, developed with the goal of clinical use. The system comprises a clinically available MR device and FUS transducer system. The controller is very generic and could use any suitable MR or FUS device. MR image sequences (echo planar imaging) are acquired for both motion observation and thermometry. Based on anatomical feature tracking, motion predictions are estimated to compensate for processing delays. FUS control parameters are computed repeatedly and sent to the hardware to steer the focus to the (estimated) target position. All involved calculations produce individually known errors, yet their impact on therapy outcome is unclear. This is solved by defining an intuitive quality measure that compares the achieved temperature to the static scenario, resulting in an overall efficiency with respect to temperature rise. To allow for extensive testing of the system over wide ranges of parameters and algorithmic choices, we replace the actual MR and FUS devices by a virtual system. It emulates the hardware and, using numerical simulations of FUS during motion, predicts the local temperature rise in the tissue resulting from the controls it receives. With a clinically available monitoring image rate of 6.67 Hz and 20 FUS control updates per second, normal respiratory motion is estimated to be compensable with an estimated efficiency of 80%. This reduces to about 70% for motion scaled by 1.5. Extensive testing (6347 simulated sonications) over wide ranges of parameters shows that the main source of error is the temporal motion prediction. A history-based motion prediction method performs better than a simple linear extrapolator. The estimated efficiency of the new treatment system is already suited for clinical applications. The simulation-based in-silico testing as a first-stage validation reduces the efforts of real-world testing. Due to the extensible modular design, the described approach might lead to faster translations from research to clinical practice.
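To make the delay-compensation idea concrete, a minimal sketch of the simple linear extrapolator mentioned in the abstract is given below; the latency value and positions are made up, and the actual system's history-based predictor is more elaborate.

```python
import numpy as np

def linear_extrapolate(times_s, positions_mm, latency_s):
    """Predict the target position 'latency_s' seconds ahead from the last two
    tracked positions (simple linear extrapolation; illustrative only)."""
    v = (positions_mm[-1] - positions_mm[-2]) / (times_s[-1] - times_s[-2])
    return positions_mm[-1] + v * latency_s

# Example: 6.67 Hz imaging (150 ms frame spacing) and an assumed 150 ms latency
t = np.array([0.00, 0.15, 0.30])
y = np.array([1.0, 1.8, 2.5])          # tracked feature position [mm]
print(linear_extrapolate(t, y, latency_s=0.15))
```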
CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES
NASA Technical Reports Server (NTRS)
Szatmary, S. A.
1994-01-01
The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided when the maximum likelihood technique is used. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler using the VAX FORTRAN extensions and dynamic array allocation supported by this compiler for the IBM/MS-DOS or OS/2 operating systems. The dynamic array allocation routines allow the user to match the number of fracture sets and test specimens to the memory available. Machine requirements include IBM PC compatibles with optional math coprocessor. Program output is designed to fit 80-column format printers. Executables for both DOS and OS/2 are provided. CARES/PC is distributed on one 5.25 inch 360K MS-DOS format diskette in compressed format. The expansion tool PKUNZIP.EXE is supplied on the diskette. CARES/PC was developed in 1990. IBM PC and OS/2 are trademarks of International Business Machines. MS-DOS and MS OS/2 are trademarks of Microsoft Corporation. VAX is a trademark of Digital Equipment Corporation.
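CARES/PC itself is a FORTRAN program; purely as a minimal sketch of the underlying statistics (two-parameter Weibull maximum-likelihood estimation and a Kolmogorov-Smirnov goodness-of-fit check on fracture stresses), the following uses synthetic data and standard SciPy routines.

```python
import numpy as np
from scipy import stats

# Synthetic fracture stresses (MPa); CARES/PC recommends 30 or more specimens
rng = np.random.default_rng(1)
stresses = stats.weibull_min.rvs(c=10.0, scale=400.0, size=30, random_state=rng)

# Two-parameter Weibull fit by maximum likelihood (location fixed at zero)
m_hat, _, sigma0_hat = stats.weibull_min.fit(stresses, floc=0)
print(f"Weibull modulus m = {m_hat:.2f}, characteristic strength = {sigma0_hat:.1f} MPa")

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution
ks = stats.kstest(stresses, 'weibull_min', args=(m_hat, 0, sigma0_hat))
print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```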
Regional Earthquake Shaking and Loss Estimation
NASA Astrophysics Data System (ADS)
Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.
2009-04-01
This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the earthquake source using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory databases (Level 1). Level 0 analysis is similar to the PAGER system being developed by the USGS. For given basic source parameters the intensity distributions can be computed using: a) regional intensity attenuation relationships, b) intensity correlations with attenuation-relationship-based PGV, PGA and spectral amplitudes and, c) intensity correlations with a synthetic Fourier amplitude spectrum. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented and compared with the observed losses. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to the relevant agencies for the planning and coordination of post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation and related Monte-Carlo-type simulations.
Creating Masterpieces: How Course Structures and Routines Enable Student Performance
ERIC Educational Resources Information Center
Dean, Kathy Lund; Fornaciari, Charles J.
2014-01-01
Over a five-year period, we made a persistent observation: Course structures and routines, such as assignment parameters, student group process rules, and grading schemes were being consistently ignored. As a result, we got distracted by correcting these structural issues and were spending less time on student assignment performance. In this…
Technical Manual for the Geospatial Stream Flow Model (GeoSFM)
Asante, Kwabena O.; Artan, Guleid A.; Pervez, Md Shahriar; Bandaragoda, Christina; Verdin, James P.
2008-01-01
The monitoring of wide-area hydrologic events requires the use of geospatial and time series data available in near-real time. These data sets must be manipulated into information products that speak to the location and magnitude of the event. Scientists at the U.S. Geological Survey Earth Resources Observation and Science (USGS EROS) Center have implemented a hydrologic modeling system which consists of an operational data processing system and the Geospatial Stream Flow Model (GeoSFM). The data processing system generates daily forcing evapotranspiration and precipitation data from various remotely sensed and ground-based data sources. To allow for rapid implementation in data scarce environments, widely available terrain, soil, and land cover data sets are used for model setup and initial parameter estimation. GeoSFM performs geospatial preprocessing and postprocessing tasks as well as hydrologic modeling tasks within an ArcView GIS environment. The integration of GIS routines and time series processing routines is achieved seamlessly through the use of dynamically linked libraries (DLLs) embedded within Avenue scripts. GeoSFM is run operationally to identify and map wide-area streamflow anomalies. Daily model results including daily streamflow and soil water maps are disseminated through Internet map servers, flood hazard bulletins and other media.
Klick, B; Nishiura, H; Leung, G M; Cowling, B J
2014-04-01
Both case-ascertained household studies, in which households are recruited after an 'index case' is identified, and household cohort studies, where a household is enrolled before the start of the epidemic, may be used to test and estimate the protective effect of interventions used to prevent influenza transmission. A simulation approach parameterized with empirical data from household studies was used to evaluate and compare the statistical power of four study designs: a cohort study with routine virological testing of household contacts of an infected index case, a cohort study where only household contacts with acute respiratory illness (ARI) are sampled for virological testing, a case-ascertained study with routine virological testing of household contacts, and a case-ascertained study where only household contacts with ARI are sampled for virological testing. We found that a case-ascertained study with ARI-triggered testing would be the most powerful design, while a cohort design only testing household contacts with ARI would be the least powerful. Sensitivity analysis demonstrated that these conclusions varied by model parameters including the serial interval and the risk of influenza virus infection from outside the household.
Larsen, V H; Waldau, T; Gravesen, H; Siggaard-Andersen, O
1996-01-01
To describe a clinical case in which an extremely low erythrocyte 2,3-diphosphoglycerate concentration (2,3-DPG) was discovered by routine blood gas analysis supplemented by computer calculation of derived quantities. The finding of a low 2,3-DPG revealed a severe hypophosphatemia. Open uncontrolled study of a patient case. Intensive care observation during 41 days. A 44-year-old woman with an abdominal abscess. Surgical drainage, antibiotics and parenteral nutrition. Daily routine blood gas analyses with computer calculation of the hemoglobin oxygen affinity and estimation of the 2,3-DPG. An abrupt decline in 2,3-DPG was observed late in the course, coincident with a pronounced hypophosphatemia. The fall in 2,3-DPG was verified by enzymatic analysis. 2,3-DPG may be estimated by computer calculation of routine blood gas data. A low 2,3-DPG, which may be associated with hypophosphatemia, causes an unfavorable increase in hemoglobin oxygen affinity that reduces oxygen release to the tissues.
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail; Sinitsyn, Alexey
2017-04-01
Shortwave radiation is an important component of the surface heat budget over sea and land. To estimate it, accurate observations of cloud conditions are needed, including total cloud cover and spatial and temporal cloud structure. While cloud cover is widely observed visually, building accurate SW radiation parameterizations also requires that cloud structure be quantified using precise instrumental measurements. Several state-of-the-art land-based cloud cameras already satisfy researchers' needs, but their major disadvantage is the inaccuracy of all-sky image processing algorithms, which typically results in uncertainties of 2-4 octa in cloud cover estimates and a true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determines cloud types. We developed an approach for estimating cloud cover and structure that provides much more accurate estimates and also allows additional characteristics to be measured. The method is based on a synthetic controlling index, the "grayness rate index", that we introduced in 2014. Since then this index, used together with the "background sunburn effect suppression" technique to detect thin clouds, has demonstrated high efficiency. This made it possible to significantly increase the accuracy of total cloud cover estimation for various sky image states using this extension of the routine algorithm. Errors in the cloud cover estimates decreased significantly, resulting in a mean squared error of about 1.5 octa; the resulting true-scoring accuracy is more than 38%. The main source of uncertainty in this approach is errors in determining the state of the solar disk. Although a deep neural network lets us estimate the solar disk state with 94% accuracy, the final total cloud cover estimate is still not satisfactory. To solve this problem completely, we applied a set of machine learning algorithms directly to the problem of total cloud cover estimation. The accuracy of this approach varies with the choice of algorithm; deep neural networks demonstrated the best accuracy, of more than 96%. We will demonstrate some of these approaches and the most influential statistical features of all-sky images that let the algorithm reach such high accuracy. Using our new optical package, a set of over 480,000 samples was collected in several sea missions in 2014-2016, along with concurrent standard human-observed and instrumentally recorded meteorological parameters. We will demonstrate the results of the field measurements and discuss some remaining problems and the potential for further development of the machine learning approach.
Fernandez-Prado, Raul; Castillo-Rodriguez, Esmeralda; Velez-Arribas, Fernando Javier; Gracia-Iguacel, Carolina; Ortiz, Alberto
2016-12-01
Direct oral anticoagulants (DOACs) may require dose reduction or avoidance when glomerular filtration rate is low. However, glomerular filtration rate is not usually measured in routine clinical practice. Rather, equations that incorporate different variables use serum creatinine to estimate either creatinine clearance in mL/min or glomerular filtration rate in mL/min/1.73 m2. The Cockcroft-Gault equation estimates creatinine clearance and incorporates weight into the equation. By contrast, the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations estimate glomerular filtration rate and incorporate ethnicity but not weight. As a result, an individual patient may have very different renal function estimates, depending on the equation used. We now highlight these differences and discuss the impact on routine clinical care for anticoagulation to prevent embolization in atrial fibrillation. Pivotal DOAC clinical trials used creatinine clearance as a criterion for patient enrollment, and dose adjustment and Food and Drug Administration recommendations are based on creatinine clearance. However, clinical biochemistry laboratories provide CKD-EPI glomerular filtration rate estimations, resulting in discrepancies between clinical trial and routine use of the drugs. Copyright © 2016 Elsevier Inc. All rights reserved.
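The weight dependence of the Cockcroft-Gault estimate is easy to see from the standard formula, CrCl = (140 - age) x weight / (72 x SCr), multiplied by 0.85 for women; the sketch below uses illustrative patient values (the CKD-EPI equation, which is piecewise and more involved, is not reproduced here).

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Same age and serum creatinine, different body weight (illustrative values):
# the estimate nearly doubles, which can cross DOAC dose-adjustment cut-offs.
print(cockcroft_gault_crcl(80, 50, 1.2, female=True))   # about 30 mL/min
print(cockcroft_gault_crcl(80, 90, 1.2, female=True))   # about 53 mL/min
```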
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft-Windows environment facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft-Windows graphical users interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered as the first step in the fitting procedure upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.
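The abstract does not spell out how the roughly 2.4% figure was computed; one common convention, shown here as an assumption, is a root-mean-square of the point-wise relative differences between the two dose maps.

```python
import numpy as np

def rms_percent_difference(dose_a, dose_b):
    """Root-mean-square relative difference (%) between two dose distributions."""
    dose_a = np.asarray(dose_a, dtype=float)
    dose_b = np.asarray(dose_b, dtype=float)
    rel = (dose_a - dose_b) / dose_b
    return 100.0 * np.sqrt(np.mean(rel ** 2))

# Illustrative values: deterministic vs Monte Carlo doses at a few phantom positions
deterministic = [10.2, 10.5, 11.8, 12.1, 9.9]
monte_carlo   = [10.0, 10.8, 11.5, 12.4, 10.1]
print(f"{rms_percent_difference(deterministic, monte_carlo):.1f} %")
```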
In vitro chronic hepatic disease characterization with a multiparametric ultrasonic approach.
Meziri, M; Pereira, W C A; Abdelwahab, A; Degott, C; Laugier, P
2005-03-01
Although high-resolution, real-time ultrasonic (US) imaging is routinely available, image interpretation is based on grey level and texture, and quantitative evaluation is limited. Other potentially useful diagnostic information from US echoes may include modifications in tissue acoustic parameters (speed, attenuation and backscattering) resulting from disease development. Changes in acoustical parameters can be detected using time-of-flight and spectral analysis techniques. The objective of this study is to explore the potential of three parameters together (attenuation coefficient, US speed and integrated backscatter coefficient, IBC) to discriminate healthy and fibrosis subgroups in liver tissue. Echoes from 21 fresh in vitro samples of human liver and from a plane reflector were obtained using a 20-MHz central frequency transducer (6-30 MHz bandpass). The scan plane was parallel to the reflector placed beneath the liver. A 30 x 20 matrix of A-scans was obtained, with a 200-microm step. The samples were classified according to the Metavir scale in five different degrees of fibrosis. US speed, attenuation and IBC were estimated using standard methods described in the literature. Statistical tests were applied to the results of each parameter individually and indicated that it was not possible to identify all the fibrosis groups. A discriminant analysis was then performed for the three parameters together, resulting in a reasonable separation of the fibrotic groups. Although the number of tissue samples is limited, this study opens the possibility of enhancing the discriminant capability of ultrasonic parameters of liver tissue disease when they are combined together.
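As a hedged sketch of the multiparametric idea (not the study's data or exact discriminant procedure), the snippet below trains a linear discriminant on three synthetic acoustic features per sample and classifies a new sample; all numbers are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-ins for the three acoustic parameters per liver sample:
# [attenuation (dB/cm/MHz), speed (m/s), integrated backscatter (dB)]
rng = np.random.default_rng(0)
class_means = {0: [0.8, 1570.0, -52.0],   # "healthy" group (illustrative values)
               1: [1.3, 1590.0, -46.0],   # "moderate fibrosis"
               2: [1.8, 1610.0, -41.0]}   # "severe fibrosis"
X = np.vstack([rng.normal(m, [0.1, 3.0, 1.0], size=(20, 3)) for m in class_means.values()])
y = np.repeat(list(class_means.keys()), 20)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[1.35, 1592.0, -45.5]]))   # most likely the middle group
print(lda.score(X, y))                        # resubstitution accuracy
```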
The costs of future polio risk management policies.
Tebbens, Radboud J Duintjer; Sangrujee, Nalinee; Thompson, Kimberly M
2006-12-01
Decisionmakers need information about the anticipated future costs of maintaining polio eradication as a function of the policy options under consideration. Given the large portfolio of options, we reviewed and synthesized the existing cost data relevant to current policies to provide context for future policies. We model the expected future costs of different strategies for continued vaccination, surveillance, and other costs that require significant potential resource commitments. We estimate the costs of different potential policy portfolios for low-, middle-, and high-income countries to demonstrate the variability in these costs. We estimate that a global transition from routine immunization with oral poliovirus vaccine (OPV) to inactivated poliovirus vaccine (IPV) would increase the costs of managing polio globally, although routine IPV use remains less costly than routine OPV use with supplemental immunization activities. The costs of surveillance and a stockpile, while small compared to routine vaccination costs, represent important expenditures to ensure adequate response to potential outbreaks. The uncertainty and sensitivity analyses highlight important uncertainty in the aggregated costs and demonstrates that the discount rate and uncertainty in price and administration cost of IPV drives the expected incremental cost of routine IPV vs. OPV immunization.
ZWD time series analysis derived from NRT data processing. A regional study of PW in Greece.
NASA Astrophysics Data System (ADS)
Pikridas, Christos; Balidakis, Kyriakos; Katsougiannopoulos, Symeon
2015-04-01
ZWD (zenith wet/non-hydrostatic delay) estimates have been routinely derived in near real time at the newly established Analysis Center in the Department of Geodesy and Surveying of Aristotle University of Thessaloniki (DGS/AUT-AC), within the framework of E-GVAP (EUMETNET GNSS water vapour project), since October 2014. This process takes place on an hourly basis and yields, among other products, station coordinates and tropospheric parameter estimates for a network of 90+ permanent GNSS (Global Navigation Satellite System) stations distributed over the wider Hellenic region. In this study, the temporal and spatial variability of the ZWD estimates was examined, as well as its relation to coordinate series extracted from both float and fixed solutions of the initial phase ambiguities. For this investigation, the Bernese GNSS Software v5.2 was used to process a 6-month dataset from the aforementioned network. For the time series analysis we employed techniques such as the generalized Lomb-Scargle periodogram and Burg's maximum entropy method, because the Discrete Fourier Transform proved inefficient for the test dataset. The analysis yielded interesting results for further geophysical interpretation. In addition, the spatial and temporal distributions of precipitable water vapour (PW) obtained from both the ZWD estimates and ERA-Interim reanalysis grids were investigated.
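For an unevenly sampled series such as an hourly ZWD record with gaps, a Lomb-Scargle periodogram can be computed with SciPy as sketched below; the data here are synthetic stand-ins (the study used Bernese GNSS Software output), and the frequency grid is an arbitrary choice.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, gappy hourly ZWD series [m] with a diurnal cycle (stand-in data)
rng = np.random.default_rng(0)
t_hours = np.sort(rng.choice(np.arange(24 * 180), size=3000, replace=False)).astype(float)
zwd = 0.12 + 0.03 * np.sin(2 * np.pi * t_hours / 24.0) + 0.005 * rng.standard_normal(t_hours.size)

# Angular test frequencies for periods between ~6 hours and ~30 days
periods_h = np.linspace(6.0, 720.0, 2000)
omega = 2.0 * np.pi / periods_h

# lombscargle expects a zero-mean signal when the mean is not fitted
power = lombscargle(t_hours, zwd - zwd.mean(), omega)
print(f"Dominant period: {periods_h[np.argmax(power)]:.1f} h")
```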
Cost-Effectiveness of Routine Screening for Critical Congenital Heart Disease in US Newborns
Peterson, Cora; Grosse, Scott D.; Oster, Matthew E.; Olney, Richard S.; Cassell, Cynthia H.
2015-01-01
OBJECTIVES Clinical evidence indicates newborn critical congenital heart disease (CCHD) screening through pulse oximetry is lifesaving. In 2011, CCHD was added to the US Recommended Uniform Screening Panel for newborns. Several states have implemented or are considering screening mandates. This study aimed to estimate the cost-effectiveness of routine screening among US newborns unsuspected of having CCHD. METHODS We developed a cohort model with a time horizon of infancy to estimate the inpatient medical costs and health benefits of CCHD screening. Model inputs were derived from new estimates of hospital screening costs and inpatient care for infants with late-detected CCHD, defined as no diagnosis at the birth hospital. We estimated the number of newborns with CCHD detected at birth hospitals and life-years saved with routine screening compared with no screening. RESULTS Screening was estimated to incur an additional cost of $6.28 per newborn, with incremental costs of $20 862 per newborn with CCHD detected at birth hospitals and $40 385 per life-year gained (2011 US dollars). We estimated 1189 more newborns with CCHD would be identified at birth hospitals and 20 infant deaths averted annually with screening. Another 1975 false-positive results not associated with CCHD were estimated to occur, although these results had a minimal impact on total estimated costs. CONCLUSIONS This study provides the first US cost-effectiveness analysis of CCHD screening and suggests that screening in the United States could be reasonably cost-effective. We anticipate data from states that have recently approved or initiated CCHD screening will become available over the next few years to refine these projections. PMID:23918890
Comparison of two methods for calculating the P sorption capacity parameter in soils
USDA-ARS?s Scientific Manuscript database
Phosphorus (P) cycling in soils is an important process affecting P movement through the landscape. The P cycling routines in many computer models are based on the relationships developed for the EPIC model. An important parameter required for this model is the P sorption capacity parameter (PSP). I...
Urinary lithogenesis risk tests: comparison of a commercial kit and a laboratory prototype test.
Grases, Félix; Costa-Bauzá, Antonia; Prieto, Rafel M; Arrabal, Miguel; De Haro, Tomás; Lancina, Juan A; Barbuzano, Carmen; Colom, Sergi; Riera, Joaquín; Perelló, Joan; Isern, Bernat; Sanchis, Pilar; Conte, Antonio; Barragan, Fernando; Gomila, Isabel
2011-11-01
Renal stone formation is a multifactorial process depending in part on urine composition. Other parameters relate to structural or pathological features of the kidney. To date, routine laboratory estimation of urolithiasis risk has been based on determination of urinary composition. This process requires collection of at least two 24 h urine samples, which is tedious for patients. The most important feature of urinary lithogenic risk is the balance between various urinary parameters, although unknown factors may be involved. The objective of this study was to compare data obtained using a commercial kit with those of a laboratory prototype, using a multicentre approach, to validate the utility of these methods in routine clinical practice. A simple new commercial test (NefroPlus®; Sarstedt AG & Co., Nümbrecht, Germany) evaluating the capacity of urine to crystallize calcium salts, and thus permitting detection of patients at risk for stone development, was compared with a prototype test previously described by this group. Urine of 64 volunteers produced during the night was used in these comparisons. The commercial test was also used to evaluate urine samples of 83 subjects in one of three hospitals. Both methods were essentially in complete agreement (98%) with respect to test results. The multicentre data were: sensitivity 94.7%; specificity 76.9%; positive predictive value (lithogenic urine) 90.0%; negative predictive value (non-lithogenic urine) 87.0%; test efficacy 89.2%. The new commercial NefroPlus test offers fast and cheap evaluation of the overall risk of development of urinary calcium-containing calculi.
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the option to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. Russell's formula is used for GFR calculation by the plasma sampling method, and GFR calculations using serum creatinine are performed with the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. In summary, this is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters, and it is also a good system for storing the raw and processed data for future analysis.
Estimating ice-affected streamflow by extended Kalman filtering
Holtschlag, D.J.; Grewal, M.S.
1998-01-01
An extended Kalman filter was developed to automate the real-time estimation of ice-affected streamflow on the basis of routine measurements of stream stage and air temperature and on the relation between stage and streamflow during open-water (ice-free) conditions. The filter accommodates three dynamic modes of ice effects: sudden formation/ablation, stable ice conditions, and eventual elimination. The utility of the filter was evaluated by applying it to historical data from two long-term streamflow-gauging stations, St. John River at Dickey, Maine and Platte River at North Bend, Nebr. Results indicate that the filter was stable and that parameters converged for both stations, producing streamflow estimates that are highly correlated with published values. For the Maine station, logarithms of estimated streamflows are within 8% of the logarithms of published values 87.2% of the time during periods of ice effects and within 15% 96.6% of the time. Similarly, for the Nebraska station, logarithms of estimated streamflows are within 8% of the logarithms of published values 90.7% of the time and within 15% 97.7% of the time. In addition, the correlation between temporal updates and published streamflows on days of direct measurements at the Maine station was 0.777 and 0.998 for ice-affected and open-water periods, respectively; for the Nebraska station, corresponding correlations were 0.864 and 0.997.
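The published filter is an extended Kalman filter with several ice-effect modes; the heavily simplified scalar sketch below only illustrates the general idea of nudging an open-water rating estimate toward occasional direct measurements, with the state being a slowly varying log-space offset (all variable names and noise values are assumptions).

```python
import numpy as np

def kalman_ice_correction(log_q_open, log_q_obs, q_process=0.01, r_meas=0.05):
    """Track a slowly varying ice-effect offset b so that log(Q) = log(Q_open) + b.

    log_q_open : daily open-water-rating estimates of log streamflow
    log_q_obs  : occasional direct measurements of log streamflow (np.nan if none)
    """
    b, p = 0.0, 1.0                      # offset estimate and its variance
    corrected = []
    for lqo, lqm in zip(log_q_open, log_q_obs):
        p += q_process                   # time update: random-walk offset
        if not np.isnan(lqm):            # measurement update when a gauging exists
            gain = p / (p + r_meas)
            b += gain * (lqm - (lqo + b))
            p *= (1.0 - gain)
        corrected.append(lqo + b)
        # (air temperature could switch q_process between formation/stable/ablation
        #  modes, analogous to the three dynamic modes described in the abstract)
    return np.array(corrected)

log_q_open = np.log([30.0, 29.0, 28.0, 27.0, 26.0, 25.5])
log_q_obs  = np.array([np.nan, np.nan, np.log(22.0), np.nan, np.nan, np.nan])
print(np.exp(kalman_ice_correction(log_q_open, log_q_obs)))
```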
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers, and these cannot be reliably estimated from only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
Koen, Joshua D; Barrett, Frederick S; Harlow, Iain M; Yonelinas, Andrew P
2017-08-01
Signal-detection theory, and the analysis of receiver-operating characteristics (ROCs), has played a critical role in the development of theories of episodic memory and perception. The purpose of the current paper is to present the ROC Toolbox. This toolbox is a set of functions written in the Matlab programming language that can be used to fit various common signal detection models to ROC data obtained from confidence rating experiments. The goals for developing the ROC Toolbox were to create a tool (1) that is easy to use and easy for researchers to implement with their own data, (2) that can flexibly define models based on varying study parameters, such as the number of response options (e.g., confidence ratings) and experimental conditions, and (3) that provides optimal routines (e.g., maximum likelihood estimation) to obtain parameter estimates and numerous goodness-of-fit measures. The ROC Toolbox allows for various confidence scales and currently includes the models commonly used in recognition memory and perception: (1) the unequal variance signal detection (UVSD) model, (2) the dual process signal detection (DPSD) model, and (3) the mixture signal detection (MSD) model. For each model fit to a given data set the ROC Toolbox plots summary information about the best-fitting model parameters and various goodness-of-fit measures. Here, we present an overview of the ROC Toolbox, illustrate how it can be used to input and analyse real data, and finish with a brief discussion of features that can be added to the toolbox.
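The toolbox itself is written in Matlab; as a language-agnostic illustration of the kind of fit it performs, the sketch below estimates an unequal-variance signal detection (UVSD) model from 6-point confidence-rating counts by maximum likelihood (the counts are invented, and this is not the toolbox's own code).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative confidence-rating counts (1 = sure "new" ... 6 = sure "old")
old_counts = np.array([ 8, 12, 20, 30, 50, 80])   # responses to targets
new_counts = np.array([60, 55, 40, 25, 15,  5])   # responses to lures

def neg_log_like(params):
    d, log_sigma = params[0], params[1]
    criteria = np.sort(params[2:])                 # 5 criteria for 6 rating bins
    edges = np.concatenate(([-np.inf], criteria, [np.inf]))
    p_new = np.diff(norm.cdf(edges))                                  # lures ~ N(0, 1)
    p_old = np.diff(norm.cdf(edges, loc=d, scale=np.exp(log_sigma)))  # targets
    return -(old_counts @ np.log(p_old) + new_counts @ np.log(p_new))

x0 = np.concatenate(([1.0, 0.0], np.linspace(-1.0, 1.0, 5)))
fit = minimize(neg_log_like, x0, method="Nelder-Mead", options={"maxiter": 20000})
print(f"UVSD fit: d = {fit.x[0]:.2f}, sigma_old = {np.exp(fit.x[1]):.2f}")
```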
Koeck, A; Jamrozik, J; Schenkel, F S; Moore, R K; Lefebvre, D M; Kelton, D F; Miglior, F
2014-11-01
The aim of this study was to estimate genetic parameters for milk β-hydroxybutyrate (BHBA) in early first lactation of Canadian Holstein cows and to examine its genetic association with indicators of energy balance (fat-to-protein ratio and body condition score) and metabolic diseases (clinical ketosis and displaced abomasum). Data for milk BHBA recorded between 5 and 100 d in milk was obtained from Valacta (Sainte-Anne-de-Bellevue, Québec, Canada), the Canadian Dairy Herd Improvement organization responsible for Québec and Atlantic provinces. Test-day milk samples were analyzed by mid-infrared spectrometry using previously developed calibration equations for milk BHBA. Test-day records of fat-to-protein ratio were obtained from the routine milk recording scheme. Body condition score records were available from the routine type classification system. Data on clinical ketosis and displaced abomasum recorded by producers were available from the national dairy cattle health system in Canada. Data were analyzed using linear animal models. Heritability estimates for milk BHBA at different stages of early lactation were between 0.14 and 0.29. Genetic correlations between milk BHBA were higher between adjacent lactation intervals and decreased as intervals were further apart. Correlations between breeding values for milk BHBA and routinely evaluated traits revealed that selection for lower milk BHBA in early lactation would lead to an improvement of several health and fertility traits, including SCS, calving to first service, number of services, first service to conception, and days open. Also, lower milk BHBA was associated with a longer herd life, better conformation, and better feet and legs. A higher genetic merit for milk yield was associated with higher milk BHBA, and, therefore, a greater susceptibility to hyperketonemia. Milk BHBA at the first test-day was moderately genetically correlated with fat-to-protein ratio (0.49), body condition score (-0.35), and clinical ketosis (0.48), whereas the genetic correlation with displaced abomasum was near zero (0.07). Milk BHBA can be routinely analyzed in milk samples at test days, and, therefore, provides a practical tool for breeding cows less susceptible to hyperketonemia. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
SU-E-T-473: A Patient-Specific QC Paradigm Based On Trajectory Log Files and DICOM Plan Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeMarco, J; McCloskey, S; Low, D
Purpose: To evaluate a remote QC tool for monitoring treatment machine parameters and treatment workflow. Methods: The Varian TrueBeam™ linear accelerator is a digital machine that records machine axis parameters and MLC leaf positions as a function of delivered monitor unit or control point. This information is saved to a binary trajectory log file for every treatment or imaging field in the patient treatment session. A MATLAB analysis routine was developed to parse the trajectory log files for a given patient, compare the expected versus actual machine and MLC positions, and perform a cross-comparison with the DICOM-RT plan file exported from the treatment planning system. The parsing routine sorts the trajectory log files based on the time and date stamp and generates a sequential report file listing treatment parameters and provides a match relative to the DICOM-RT plan file. Results: The trajectory log parsing routine was compared against a standard record and verify listing for patients undergoing initial IMRT dosimetry verification and weekly and final chart QC. The complete treatment course was independently verified for 10 patients of varying treatment site and a total of 1267 treatment fields were evaluated, including pre-treatment imaging fields where applicable. In the context of IMRT plan verification, eight prostate SBRT plans with 4 arcs per plan were evaluated based on expected versus actual machine axis parameters. The average value for the maximum RMS MLC error was 0.067±0.001 mm and 0.066±0.002 mm for leaf banks A and B, respectively. Conclusion: A real-time QC analysis program was tested using trajectory log files and DICOM-RT plan files. The parsing routine is efficient and able to evaluate all relevant machine axis parameters during a patient treatment course, including MLC leaf positions and table positions at the time of image acquisition and during treatment.
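A minimal sketch of the per-leaf RMS comparison reported above is given below; the array layout (control points by leaves) and the synthetic values are assumptions for illustration, not the authors' MATLAB routine.

```python
import numpy as np

def max_rms_leaf_error(expected_mm, actual_mm):
    """Maximum over MLC leaves of the RMS position error across control points.

    expected_mm, actual_mm : arrays of shape (n_control_points, n_leaves),
    e.g. parsed from the DICOM-RT plan and the trajectory log, respectively.
    """
    diff = np.asarray(actual_mm, float) - np.asarray(expected_mm, float)
    rms_per_leaf = np.sqrt(np.mean(diff ** 2, axis=0))
    return rms_per_leaf.max()

expected = np.zeros((200, 60))                                   # synthetic plan
actual = expected + 0.05 * np.random.default_rng(0).standard_normal((200, 60))
print(f"max RMS leaf error = {max_rms_leaf_error(expected, actual):.3f} mm")
```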
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues, including analysis of land use and climate change impacts, water allocation, systems design and operation, and waste load control and allocation. These models are divided into two categories, simulation and optimization models, whose calibration has been addressed in the literature; considerable effort in recent decades has led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that combine the capabilities of both types, such as SUFI-2, have also been developed, this paper proposes a new auto-calibration algorithm that is capable of both finding optimal parameter values with respect to multiple objectives, like the optimization-based algorithms, and providing interval estimates of the parameters, like the uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 provides a routine to find the best point and interval estimates of the parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, the final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst has to select the point and interval estimates of the parameters that are non-dominated with respect to both uncertainty measures. Based on these properties of SUFI-2, two important questions arise, and answering them motivated this research: given that the final selection in SUFI-2 is based on the two measures, or objectives, while SUFI-2 contains no multi-objective optimization mechanism, are the final estimates Pareto-optimal? And can systematic methods be applied to select the final estimates? To deal with these questions, a new auto-calibration algorithm is proposed in which the uncertainty measures are treated as two objectives and non-dominated interval estimates of the parameters are found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of the water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software to analyze the impacts of different water resources management strategies, including dam construction, increasing the cultivated area, using more efficient irrigation technologies, and changing the crop pattern. Comparing the Pareto frontier obtained from the proposed auto-calibration algorithm with the SUFI-2 results revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimate with respect to the Pareto frontier.
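For concreteness, the two uncertainty measures can be computed from a 95PPU band as sketched below; the p-factor follows the definition given in the abstract, while the r-factor formula (mean band width divided by the standard deviation of the observations) is the convention commonly used with SUFI-2 and is an assumption here.

```python
import numpy as np

def p_and_r_factor(obs, lower_95ppu, upper_95ppu):
    """p-factor: fraction of observations bracketed by the 95PPU band.
    r-factor: mean width of the 95PPU band divided by the std. of the observations
    (assumed convention)."""
    obs = np.asarray(obs, float)
    lo = np.asarray(lower_95ppu, float)
    hi = np.asarray(upper_95ppu, float)
    p_factor = np.mean((obs >= lo) & (obs <= hi))
    r_factor = np.mean(hi - lo) / np.std(obs)
    return p_factor, r_factor

obs = np.array([12.0, 15.0, 9.0, 20.0, 18.0])
lo  = np.array([10.0, 13.0, 10.0, 17.0, 15.0])
hi  = np.array([14.0, 18.0, 13.0, 23.0, 21.0])
print(p_and_r_factor(obs, lo, hi))   # -> (0.8, ...)
```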
Hierarchical modeling of population stability and species group attributes from survey data
Sauer, J.R.; Link, W.A.
2002-01-01
Many ecological studies require analysis of collections of estimates. For example, population change is routinely estimated for many species from surveys such as the North American Breeding Bird Survey (BBS), and the species are grouped and used in comparative analyses. We developed a hierarchical model for estimation of group attributes from a collection of estimates of population trend. The model uses information from predefined groups of species to provide a context and to supplement data for individual species; summaries of group attributes are improved by statistical methods that simultaneously analyze collections of trend estimates. The model is Bayesian; trends are treated as random variables rather than fixed parameters. We use Markov Chain Monte Carlo (MCMC) methods to fit the model. Standard assessments of population stability cannot distinguish magnitude of trend and statistical significance of trend estimates, but the hierarchical model allows us to legitimately describe the probability that a trend is within given bounds. Thus we define population stability in terms of the probability that the magnitude of population change for a species is less than or equal to a predefined threshold. We applied the model to estimates of trend for 399 species from the BBS to estimate the proportion of species with increasing populations and to identify species with unstable populations. Analyses are presented for the collection of all species and for 12 species groups commonly used in BBS summaries. Overall, we estimated that 49% of species in the BBS have positive trends and 33 species have unstable populations. However, the proportion of species with increasing trends differs among habitat groups, with grassland birds having only 19% of species with positive trend estimates and wetland birds having 68% of species with positive trend estimates.
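Because trends are treated as random variables, the probability statement about stability reduces to a simple summary of posterior samples; a minimal sketch (the threshold and the samples are illustrative, not the paper's values) is shown below.

```python
import numpy as np

def prob_stable(trend_samples, threshold=0.02):
    """Posterior probability that the magnitude of a species' trend is within a
    stability threshold, given MCMC samples of the trend (illustrative threshold)."""
    return float(np.mean(np.abs(np.asarray(trend_samples, float)) <= threshold))

# e.g. posterior samples of the trend for one species from the hierarchical model
samples = np.random.default_rng(2).normal(loc=0.015, scale=0.01, size=5000)
print(f"P(|trend| <= threshold) = {prob_stable(samples):.2f}")
```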
Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT.
Bedayat, Arash; Rybicki, Frank J; Kumamaru, Kanako; Powers, Sara L; Signorelli, Jason; Steigner, Michael L; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T
2012-02-01
The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. Mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at the center plus four peripheral locations in the water phantom. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points within the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24% with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing supports its routine clinical use.
Intrathoracic airway wall detection using graph search and scanner PSF information
NASA Astrophysics Data System (ADS)
Reinhardt, Joseph M.; Park, Wonkyu; Hoffman, Eric A.; Sonka, Milan
1997-05-01
Measurements of the in vivo bronchial tree can be used to assess regional airway physiology. High-resolution CT (HRCT) provides detailed images of the lungs and has been used to evaluate bronchial airway geometry. Such measurements have been used to assess diseases affecting the airways, such as asthma and cystic fibrosis, to measure airway response to external stimuli, and to evaluate the mechanics of airway collapse in sleep apnea. To routinely use CT imaging in a clinical setting to evaluate the in vivo airway tree, there is a need for an objective, automatic technique for identifying the airway tree in the CT images and measuring airway geometry parameters. Manual or semi-automatic segmentation and measurement of the airway tree from a 3D data set may require several man-hours of work, and the manual approaches suffer from inter-observer and intra-observer variabilities. This paper describes a method for automatic airway tree analysis that combines accurate airway wall location estimation with a technique for optimal airway border smoothing. A fuzzy logic, rule-based system is used to identify the branches of the 3D airway tree in thin-slice HRCT images. Raycasting is combined with a model-based parameter estimation technique to identify the approximate inner and outer airway wall borders in 2D cross-sections through the image data set. Finally, a 2D graph search is used to optimize the estimated airway wall locations and obtain accurate airway borders. We demonstrate this technique using CT images of a plexiglass tube phantom.
Teachers' Estimates of Candidates' Grades: Curriculum 2000 Advanced Level Qualifications
ERIC Educational Resources Information Center
Dhillon, Debra
2005-01-01
In the UK, estimated grades have long been provided to higher education establishments as part of their entry procedures. Since 1994 they have also been routinely collected by awarding bodies to facilitate the grade-awarding process. Analyses of required estimates to a British awarding body revealed that teachers' estimates of candidates'…
[Multiparametric 3T MRI in the routine staging of prostate cancer].
Largeron, J P; Galonnier, F; Védrine, N; Alfidja, A; Boyer, L; Pereira, B; Boiteux, J P; Kemeny, J L; Guy, L
2014-03-01
To analyse the detection ability of multiparametric 3T MRI with a phased-array coil in comparison with the pathological data provided by the prostatectomy specimens. Prospective study of 30 months, including 74 patients for whom a diagnosis of prostate cancer had been made on randomized prostate biopsies, all eligible for radical prostatectomy. They all underwent multiparametric 3T MRI with a pelvic phased-array coil, including T2-weighted imaging (T2W), dynamic contrast-enhanced (DCE) and diffusion-weighted imaging (DWI) with ADC mapping. Each gland was divided into octants. Three specific criteria were assessed (detection ability, capsular contact [CC] and extracapsular extension [ECE]), in comparison with the pathological data provided by the prostatectomy specimens. Five hundred and ninety-two octants were considered, with 124 significant tumors (volume ≥ 0.1 cm(3)). Overall tumor detection had a sensitivity, specificity, PPV and NPV of 72.3%, 87.4%, 83.2% and 78.5%, respectively. The estimates of CC and ECE had a high negative predictive power, with specificities and NPVs of 96.4% and 95.4% for CC, and 97.5% and 97.7% for ECE, respectively. Multiparametric 3T MRI with a pelvic phased-array coil appeared to be a reliable imaging technique in routine clinical practice for the detection of localized prostate cancer. Estimation of CC and millimetric ECE remains to be clarified, even though the negative predictive power of these parameters seems encouraging. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Mineral dust transport in the Arctic modelled with FLEXPART
NASA Astrophysics Data System (ADS)
Groot Zwaaftink, Christine; Grythe, Henrik; Stohl, Andreas
2016-04-01
Aeolian transport of mineral dust is suggested to play an important role in many processes. For instance, mineral aerosols affect the radiation balance of the atmosphere, and mineral deposits influence ice sheet mass balances and terrestrial and ocean ecosystems. While much effort has been devoted to modelling global dust transport, relatively little attention has been given to mineral dust in the Arctic. Even though this region is more remote from the world's major dust sources and dust concentrations may be lower than elsewhere, effects of mineral dust on, for instance, the radiation balance can be highly relevant. Furthermore, there are substantial local sources of dust in or close to the Arctic (e.g., in Iceland), whose impact on Arctic dust concentrations has not been studied in detail. We therefore aim to estimate contributions of different source regions to mineral dust in the Arctic. We have developed a dust mobilization routine in combination with the Lagrangian dispersion model FLEXPART to make such estimates. The lack of details on soil properties in many areas requires a simple routine for global simulations. However, we have paid special attention to the dust sources on Iceland. The mobilization routine accounts for topography, snow cover and soil moisture effects, in addition to meteorological parameters. FLEXPART, driven with operational meteorological data from the European Centre for Medium-Range Weather Forecasts, was used to perform a three-year global dust simulation for the years 2010 to 2012. We assess the model performance in terms of surface concentration and deposition at several locations spread over the globe. We will discuss how deposition and dust load patterns in the Arctic change throughout the seasons based on the source of the dust. Important source regions for mineral dust found in the Arctic are not only the major desert areas, such as the Sahara, but also local bare-soil regions. From our model results, it appears that the total dust load in the Arctic atmosphere is dominated by dust from Africa and Asia. However, in the lower atmosphere, local sources also contribute strongly to dust concentrations. Especially from Iceland, significant amounts of dust are mobilized. These local sources with relatively shallow transport of dust also affect the spatial distribution of dust deposition. For instance, model estimates show that in autumn and winter most of the deposited dust in Greenland originates from sources north of 60 degrees latitude.
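To make the dependence described above concrete, the following is a minimal sketch of a threshold-friction-velocity dust mobilization scheme in which emission is suppressed by snow cover and soil moisture; it illustrates that style of parameterization, not FLEXPART's actual routine, and all constants are assumptions.

```python
import numpy as np

def dust_flux(u_star, u_star_t=0.25, snow_cover=0.0, soil_moisture=0.1,
              moisture_limit=0.2, c=1.0e-5):
    """Vertical dust flux (kg m-2 s-1) from a simple threshold scheme.

    u_star        : friction velocity (m/s)
    u_star_t      : dry threshold friction velocity (m/s), soil dependent
    snow_cover    : snow-covered fraction of the grid cell (0-1)
    soil_moisture : volumetric soil moisture (m3/m3)
    moisture_limit: soil moisture above which emission is suppressed
    c             : dimensional tuning constant (assumed value)
    """
    if snow_cover >= 1.0 or soil_moisture >= moisture_limit:
        return 0.0  # fully snow-covered or wet soil does not emit
    # wet soils raise the effective mobilization threshold
    u_t = u_star_t * (1.0 + soil_moisture / moisture_limit)
    if u_star <= u_t:
        return 0.0
    # emission grows with the cube of friction velocity above threshold
    flux = c * u_star**3 * (1.0 - (u_t / u_star)**2)
    return (1.0 - snow_cover) * flux

print(dust_flux(u_star=0.6))                   # dry, bare soil: emits
print(dust_flux(u_star=0.6, snow_cover=0.8))   # partly snow covered: reduced
```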
Fundamental Parameters of Nearby Young Stars
NASA Astrophysics Data System (ADS)
McCarthy, Kyle; Wilhelm, R. J.
2013-06-01
We present high resolution (R ~ 60,000) spectroscopic data of F and G members of the nearby, young associations AB Doradus and β Pictoris obtained with the Cross-Dispersed Echelle Spectrograph on the 2.7 meter telescope at the McDonald Observatory. Effective temperatures, log(g), [Fe/H], and microturbulent velocities are first estimated using the TGVIT code, then finely tuned using MOOG. Equivalent width (EW) measurements were made using TAME alongside a self-produced IDL routine to constrain EW accuracy and improve the computed fundamental parameters. MOOG is also used to derive the chemical abundance of several elements, including Mn, which is known to be overabundant in planet-hosting stars. Vsin(i) values are also computed using a χ2 comparison of our observed data to Atlas9 model atmospheres passed through the SPECTRUM spectral synthesis code, using lines which do not depend strongly on surface gravity. Due to the limited number of Fe II lines which govern the surface gravity fit in both TGVIT and MOOG, we implement another χ2 analysis of strongly log(g)-dependent lines to ensure the values are correct. Coupling the surface gravities and temperatures derived in this study with the luminosities found in the Tycho-2 catalog, we estimate masses for each star and compare these masses to several evolutionary models to begin the process of constraining pre-main sequence evolutionary models.
Near Real-Time Earthquake Exposure and Damage Assessment: An Example from Turkey
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Çomoǧlu, Mustafa; Erdik, Mustafa
2014-05-01
Confined by the infamous strike-slip North Anatolian Fault to the north and by the Hellenic subduction trench to the south, Turkey is one of the most seismically active countries in Europe. Due to this increased exposure and the fragility of the building stock, Turkey is among the top countries exposed to earthquake hazard in terms of mortality and economic losses. In this study we focus on recent and ongoing efforts to mitigate the earthquake risk in near real-time. We present actual results of recent earthquakes, such as the M6 event offshore Antalya, which occurred on 28 December 2013. Starting at the moment of detection, we obtain a preliminary ground motion intensity distribution based on epicenter and magnitude. Our real-time application is further enhanced by the integration of the SeisComp3 ground motion parameter estimation tool with the Earthquake Loss Estimation Routine (ELER). SeisComp3 provides the online station parameters, which are then automatically incorporated into the ShakeMaps produced by ELER. The resulting ground motion distributions are used together with the building inventory to calculate the expected number of buildings in various damage states. All these analyses are conducted in an automated fashion and are communicated within a few minutes of a triggering event. In our efforts to disseminate earthquake information to the general public we make extensive use of social networks such as Twitter and collaborate with mobile phone operators.
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Characterizing measles transmission in India: a dynamic modeling study using verbal autopsy data.
Verguet, Stéphane; Jones, Edward O; Johri, Mira; Morris, Shaun K; Suraweera, Wilson; Gauvreau, Cindy L; Jha, Prabhat; Jit, Mark
2017-08-10
Decreasing trends in measles mortality have been reported in recent years. However, such estimates of measles mortality have depended heavily on assumed regional measles case fatality risks (CFRs) and made little use of mortality data from low- and middle-income countries in general and India, the country with the highest measles burden globally, in particular. We constructed a dynamic model of measles transmission in India with parameters that were empirically inferred using spectral analysis from a time series of measles mortality extracted from the Million Death Study, an ongoing longitudinal study recording deaths across 2.4 million Indian households and attributing causes of death using verbal autopsy. The model was then used to estimate the measles CFR, the number of measles deaths, and the impact of vaccination in 2000-2015 among under-five children in India and in the states of Bihar and Uttar Pradesh (UP), two states with large populations and the highest numbers of measles deaths in India. We obtained the following estimated CFRs among under-five children for the year 2005: 0.63% (95% confidence interval (CI): 0.40-1.00%) for India as a whole, 0.62% (0.38-1.00%) for Bihar, and 1.19% (0.80-1.75%) for UP. During 2000-2015, we estimated that 607,000 (95% CI: 383,000-958,000) under-five deaths attributed to measles occurred in India as a whole. If no routine vaccination or supplemental immunization activities had occurred from 2000 to 2015, an additional 1.6 (1.0-2.6) million deaths for under-five children would have occurred across India. We developed a data- and model-driven estimation of the historical measles dynamics, CFR, and vaccination impact in India, extracting the periodicity of epidemics using spectral and coherence analysis, which allowed us to infer key parameters driving measles transmission dynamics and mortality.
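As an illustration of the spectral step described above, the sketch below extracts dominant periodicities from a synthetic monthly mortality series with a simple periodogram; the data and window choices are invented for the example and are not the Million Death Study series.

```python
import numpy as np

# Synthetic monthly measles mortality series with an annual and a ~2-year cycle
months = np.arange(15 * 12)
series = (50 + 20 * np.sin(2 * np.pi * months / 12)
             + 30 * np.sin(2 * np.pi * months / 26)
             + np.random.default_rng(0).normal(0, 5, months.size))

# Periodogram: detrend, transform, and locate the dominant periodicities
detrended = series - series.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(months.size, d=1.0)          # cycles per month
dominant = freqs[np.argsort(power[1:])[::-1][:2] + 1]  # skip the zero frequency
print("dominant periods (months):", np.round(1.0 / dominant, 1))
```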
Breast dosimetry in clinical mammography
NASA Astrophysics Data System (ADS)
Benevides, Luis Alberto Do Rego
The objective of this study was to show that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. In the study, AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for a homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The protocol proposes the use of a fiber-optic coupled dosimeter (FOCD) or a Metal Oxide Semiconductor Field Effect Transistor (MOSFET) dosimeter to measure the entrance skin exposure at the time of the mammogram without interfering with the diagnostic information of the mammogram. The study showed that the FOCD had less than 7% energy dependence in sensitivity, was linear across all tube current-time product stations, and was reproducible within 2%. The FOCD was superior to the MOSFET dosimeter in sensitivity, reusability, and reproducibility. The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 tissues. The patient fibroglandular content and compressed breast thickness, along with unit parameters and spectrum half-value layer, were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. In addition, the study population anthropometric measurements enabled the development of analytical equations to calculate the whole breast area, estimate the skin layer thickness, and determine the optimal location for the automatic exposure control ionization chamber. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
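A minimal sketch of the dose bookkeeping described above, assuming AGD is obtained by multiplying the measured entrance skin exposure by a DgN conversion factor and the phantom, volume and anatomical modifying factors; the numerical values are illustrative, not the study's calibration.

```python
def average_glandular_dose(entrance_exposure_R, dgn_mGy_per_R,
                           phantom_factor=1.0, volume_factor=1.0,
                           anatomical_factor=1.0):
    """Average glandular dose (mGy) for one mammographic view.

    entrance_exposure_R : measured entrance skin exposure (roentgen)
    dgn_mGy_per_R       : DgN conversion factor for the patient's
                          fibroglandular content, compressed breast
                          thickness and beam quality (HVL)
    *_factor            : modifying factors for the homogeneous-phantom
                          assumption, breast lateral dimensions and
                          anatomical features (values below are illustrative)
    """
    return (entrance_exposure_R * dgn_mGy_per_R
            * phantom_factor * volume_factor * anatomical_factor)

# Example: 0.8 R entrance exposure, DgN = 1.9 mGy/R, with assumed modifiers
print(average_glandular_dose(0.8, 1.9, phantom_factor=1.05,
                             volume_factor=0.97, anatomical_factor=1.01))
```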
Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valencia, Jayson F.; Dirks, James A.
2008-08-29
EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation depending on the size of the building. To manually create these files is a time-consuming process that would not be practical when trying to create input files for the thousands of buildings needed to simulate national building energy performance. To streamline the process needed to create the input files for EnergyPlus, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine while the second method carries out all of the preprocessing on the Linux cluster by using an in-house utility called Generalized Parametrics (GPARM). A comma delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using a Linux utility called “make”, the idf files can then be automatically run through the Linux cluster and the desired data from each building can be aggregated into one table to be analyzed. Creating a large number of EnergyPlus input files results in the ability to batch simulate building energy performance and scale the result to national energy consumption estimates.
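A minimal sketch of the batching idea, assuming each CSV row of high-level parameters is expanded into an idf file through a small text template; the column names and template fields are hypothetical and do not reproduce the NREL Preprocessor's actual input format.

```python
import csv
from pathlib import Path

# Illustrative template only; a real idf needs many more objects
IDF_TEMPLATE = """Building,
  {name},          !- Name
  0.0,             !- North Axis {{deg}}
  ...;
! floor area (m2): {floor_area}
! window-to-wall ratio: {wwr}
"""

def batch_generate(csv_path, out_dir):
    """Expand one idf per row of a high-level parameter CSV (illustrative)."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            idf = IDF_TEMPLATE.format(name=row["name"],
                                      floor_area=row["floor_area"],
                                      wwr=row["wwr"])
            (out / f"{row['name']}.idf").write_text(idf)

# buildings.csv columns assumed: name,floor_area,wwr
# batch_generate("buildings.csv", "idf_runs")
```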
Kilometer-Spaced GNSS Array for Ionospheric Irregularity Monitoring
NASA Astrophysics Data System (ADS)
Su, Yang
This dissertation presents automated, systematic data collection, processing, and analysis methods for studying the spatial-temporal properties of Global Navigation Satellite Systems (GNSS) scintillations produced by ionospheric irregularities at high latitudes using a closely spaced multi-receiver array deployed in the northern auroral zone. The main contributions include 1) automated scintillation monitoring, 2) estimation of drift and anisotropy of the irregularities, 3) error analysis of the drift estimates, and 4) multi-instrument study of the ionosphere. A radio wave propagating through the ionosphere, consisting of ionized plasma, may suffer from rapid signal amplitude and/or phase fluctuations known as scintillation. Caused by non-uniform structures in the ionosphere, intense scintillation can lead to GNSS navigation and high-frequency (HF) communication failures. With specialized GNSS receivers, scintillation can be studied to better understand the structure and dynamics of the ionospheric irregularities, which can be parameterized by altitude, drift motion, anisotropy of the shape, horizontal spatial extent and their time evolution. To study the structuring and motion of ionospheric irregularities at the sub-kilometer scale sizes that produce L-band scintillations, a closely-spaced GNSS array has been established in the auroral zone at Poker Flat Research Range, Alaska to investigate high latitude scintillation and irregularities. In addition to routinely collected low-rate scintillation statistics, the array database provides 100 Hz power and phase data for each channel at the L1/L2C frequencies. In this work, a survey of the seasonal and hourly dependence of L1 scintillation events over the course of a year is discussed. To efficiently and systematically study scintillation events, an automated low-rate scintillation detection routine is established and performed for each day by screening the phase scintillation index. The spaced-receiver technique is applied to cross-correlated phase and power measurements from GNSS receivers. Results of horizontal drift velocities and anisotropy ellipses derived from the parameters are shown for several detected events. Results show the possibility of routinely quantifying ionospheric irregularities by drifts and anisotropy. Error analysis on estimated properties is performed to further evaluate the estimation quality. Uncertainties are quantified by ensemble simulation of noise on the phase signals carried through to the observations of the spaced-receiver linear system. These covariances are then propagated through to uncertainties on drifts. A case study of a single scintillating satellite observed by the array is used to demonstrate the uncertainty estimation process. The distributed array is used in coordination with other measuring techniques such as incoherent scatter radar and optical all-sky imagers. These scintillations are correlated with auroral activity, based on all-sky camera images. Measurements and uncertainty estimates over a 30-minute period are compared to those from a collocated incoherent scatter radar, and show good agreement in horizontal drift speed and direction during periods of scintillation for cases when the characteristic velocity is less than the drift velocity. The methods demonstrated are extensible to other zones and other GNSS arrays of varying size, number, ground distribution, and transmitter frequency.
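A minimal sketch of the screening step described above: compute a phase scintillation index (the standard deviation of detrended carrier phase over short windows) from 100 Hz data and flag windows exceeding a threshold. The moving-average detrend and the threshold value are assumptions for illustration, not the dissertation's exact routine.

```python
import numpy as np

def sigma_phi(phase_rad, fs=100.0, window_s=60.0, detrend_s=10.0, threshold=0.3):
    """Flag scintillation from high-rate carrier-phase data.

    Returns (window start indices, sigma_phi per window, boolean flags).
    Detrending uses moving-average removal; the threshold is illustrative.
    """
    n_win = int(window_s * fs)
    n_det = int(detrend_s * fs)
    kernel = np.ones(n_det) / n_det
    trend = np.convolve(phase_rad, kernel, mode="same")
    detrended = phase_rad - trend
    starts = np.arange(0, detrended.size - n_win + 1, n_win)
    sig = np.array([detrended[s:s + n_win].std() for s in starts])
    return starts, sig, sig > threshold

# Synthetic example: quiet phase with a one-minute burst of phase noise
rng = np.random.default_rng(1)
phase = rng.normal(0, 0.02, 100 * 600)           # 10 minutes at 100 Hz
phase[30000:36000] += rng.normal(0, 0.5, 6000)   # simulated scintillation event
starts, sig, flags = sigma_phi(phase)
print(np.round(sig, 2), flags)
```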
UV-VIS absorption spectroscopy: Lambert-Beer reloaded
NASA Astrophysics Data System (ADS)
Mäntele, Werner; Deniz, Erhan
2017-02-01
UV-VIS absorption spectroscopy is used in almost every spectroscopy laboratory for routine analysis or research. All spectroscopists rely on the Lambert-Beer Law but many of them are less aware of its limitations. This tutorial discusses typical problems in routine spectroscopy that come along with technical limitations or careless selection of experimental parameters. Simple rules are provided to avoid these problems.
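Since the tutorial centers on the Lambert-Beer law, a short numerical illustration may help: absorbance A = epsilon * c * l, plus a simple stray-light term that shows one of the practical limitations at high absorbance. The stray-light fraction is an assumed value.

```python
import numpy as np

def absorbance(epsilon, c, l=1.0, stray_light=0.0):
    """Lambert-Beer absorbance A = eps * c * l, optionally with stray light.

    epsilon     : molar absorptivity (L mol-1 cm-1)
    c           : concentration (mol/L)
    l           : path length (cm)
    stray_light : fraction of detected light bypassing the sample; a common
                  practical limitation of the law at high absorbance
    """
    T_true = 10.0 ** (-epsilon * c * l)            # ideal transmittance
    T_meas = (T_true + stray_light) / (1.0 + stray_light)
    return -np.log10(T_meas)

for conc in [1e-5, 1e-4, 1e-3]:
    ideal = absorbance(5e4, conc)
    real = absorbance(5e4, conc, stray_light=1e-3)
    print(f"c={conc:.0e}  A_ideal={ideal:.2f}  A_with_stray_light={real:.2f}")
```

The example shows how even a small stray-light fraction makes measured absorbance saturate well below the ideal value once A exceeds roughly 2-3, one of the careless-parameter-selection pitfalls the tutorial warns about.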
NASA Astrophysics Data System (ADS)
Reeve, A. S.; Martin, D.; Smith, S. M.
2013-12-01
Surface waters within the Sebago Lake watershed (southern Maine, USA) provide a variety of economically and intrinsically valuable recreational, commercial and environmental services. Different stakeholder groups for the 118 km2 Sebago Lake and surrounding watershed advocate for different lake and watershed management strategies, focusing on the operation of a dam at the outflow from Sebago Lake. While lake level in Sebago Lake has been monitored for over a century, limited data are available on the hydrologic processes that drive lake level and therefore impact how dam operation (and other changes to the region) will influence the hydroperiod of the lake. To fill this information gap several tasks were undertaken, including: 1) deploying data logging pressure transducers to continuously monitor stream stage in nine tributaries, 2) measuring stream discharge at these sites to create rating curves for the nine tributaries, and using the resulting continuous discharge records to 3) calibrate lumped parameter computer models based on the GR4J model, modified to include a degree-day snowmelt routine. These lumped parameter models have been integrated with a simple lake water-balance model to estimate lake level and its response to different scenarios including dam management strategies. To date, about three years of stream stage data have been used to estimate stream discharge in all monitored tributaries (data collection is ongoing). Baseflow separation indices (BFI) for 2010 and 2011 using the USGS software PART and the Eckhart digital filter in WHAT range from 0.80-0.86 in the Crooked River and Richmill Outlet, followed by the Northwest (0.75) and Muddy (0.53-0.56) Rivers, with the lowest BFI measured in the Sticky River (0.41-0.56). The BFI values indicate most streams have significant groundwater (or other storage) inputs. The lumped parameter watershed model has been calibrated for four streams (Nash-Sutcliffe = 0.4 to 0.9), with the other major tributaries containing hydraulic structures that are not included in the lumped parameter model. Calibrated watershed models tend to substantially underestimate the highest streamflows while overestimating low flows. An early June 2012 event caused extremely high flows with discharge in the Crooked River (the most significant tributary) peaking at about 85 m3/day. The lumped parameter model dramatically underestimated this important and anomalous event, but provided a reasonable prediction of flows throughout the rest of 2012. Ongoing work includes incorporating hydraulic structures in the lumped parameter model and using the available data to drive the lake water-balance model that has been prepared.
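A minimal sketch of a degree-day snowmelt routine of the kind grafted onto the lumped model above: precipitation is partitioned into rain and snow by temperature, and melt is proportional to degrees above a threshold, limited by the snowpack. Parameter values are illustrative, not the calibrated ones.

```python
def degree_day_snow(precip_mm, temp_c, ddf=3.0, t_melt=0.0, t_snow=0.0):
    """Partition precipitation into rain/snow and melt the pack (mm/day).

    ddf    : degree-day factor (mm per degC per day), illustrative value
    t_melt : temperature above which the pack melts
    t_snow : temperature below which precipitation falls as snow
    Returns the daily liquid water input (rain + melt) fed to the runoff model.
    """
    swe = 0.0            # snow water equivalent stored in the pack
    liquid = []
    for p, t in zip(precip_mm, temp_c):
        snow = p if t <= t_snow else 0.0
        rain = p - snow
        swe += snow
        melt = min(swe, ddf * max(t - t_melt, 0.0))
        swe -= melt
        liquid.append(rain + melt)
    return liquid

# Cold, snowy days followed by a warm spell releasing the stored snow
print(degree_day_snow([10, 5, 0, 0, 8], [-3, -1, 2, 4, 5]))
```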
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
NASA Astrophysics Data System (ADS)
Yu, M.; Wu, B.
2017-12-01
As an important part of coupled eco-hydrological processes, evaporation is the link for the exchange of energy and heat between the surface and the atmosphere. However, the estimation of evaporation remains a challenge compared with other main hydrological factors in the water cycle. The complementary relationship proposed by Bouchet (1963) has laid the foundation for various approaches to estimate evaporation from land surfaces; the essence of the principle is a relationship between three types of evaporation in the environment. It can be implemented simply with routine meteorological data, without the need for resistance parameters of the vegetation and bare land, which are difficult to observe and complicated to estimate in most surface flux models. On this basis the generalized nonlinear formulation was proposed by Brutsaert (2015). The daily evaporation can be estimated once the potential evaporation (Epo) and apparent potential evaporation (Epa) are known. The new formulation has a strong physical basis and can be expected to perform better under natural water stress conditions; nevertheless, the model has not been widely validated over different climate types and underlying surface patterns. In this study, we applied the generalized nonlinear complementary relationship in North China; three flux stations, Guantao, Miyun and Huailai, were used to test the universality and accuracy of this model against observed evaporation over different vegetation types. The Guantao site has a double-cropping system with rotations of summer maize and winter wheat; the other two sites are dominated by spring maize. Detailed measurements of meteorological factors at certain heights above the ground surface from automatic weather stations provided the parameters needed for daily evaporation estimation. Using the Bowen ratio, the surface energy measured by the eddy covariance systems at the flux stations is adjusted on a daily scale to satisfy surface energy closure. After calibration, the estimated daily evaporation is in good agreement with the EC-measured flux data, with a mean correlation coefficient in excess of 0.85. The results indicate that the generalized nonlinear complementary relationship can be applied in both the growing and non-growing seasons in North China.
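For reference, a minimal sketch of the generalized nonlinear complementary relationship, assuming the polynomial form E = Epa (2 - x) x^2 with x = Epo/Epa attributed to Brutsaert (2015); computing Epo and Epa from meteorological data (e.g., Priestley-Taylor and Penman) is left out here.

```python
def actual_evaporation(e_pot, e_app_pot):
    """Generalized nonlinear complementary relationship (after Brutsaert, 2015).

    e_pot     : potential evaporation Epo (wet-environment estimate)
    e_app_pot : apparent potential evaporation Epa, with Epa >= Epo
    Returns actual evaporation E = Epa * (2 - x) * x**2, where x = Epo / Epa.
    """
    x = min(max(e_pot / e_app_pot, 0.0), 1.0)
    return e_app_pot * (2.0 - x) * x ** 2

# Under no water stress Epo == Epa and E == Epo; with increasing aridity Epa
# grows while Epo stays moderate, and E falls below Epo:
print(actual_evaporation(4.0, 4.0))   # 4.0 mm/day, energy-limited case (x = 1)
print(actual_evaporation(4.0, 8.0))   # 3.0 mm/day, water-limited case (x = 0.5)
```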
Regionalization of response routine parameters
NASA Astrophysics Data System (ADS)
Tøfte, Lena S.; Sultan, Yisak A.
2013-04-01
When area-distributed hydrological models are to be calibrated or updated, having fewer calibration parameters is a considerable advantage. Building on the work of Kirchner, among others, we have developed a simple non-threshold response model for drainage in natural catchments, to be used in the gridded hydrological model ENKI. The new response model takes only the hydrograph into account; it has one state and two parameters, and is adapted to catchments that are dominated by terrain drainage. The method is based on the assumption that in catchments where precipitation, evaporation and snowmelt are negligible, the discharge is entirely determined by the amount of stored water. The catchment can then be characterized as a simple first-order nonlinear dynamical system, whose governing equations can be found directly from measured streamflow fluctuations. This means that the response in the catchment can be modelled by using hydrograph data from which all periods with rain, snowmelt or evaporation are left out, and fitting these series to a two- or three-parameter equation. A large number of discharge series from catchments in different regions in Norway are analyzed, and parameters are found for all the series. By combining the computed parameters and known catchment characteristics, we try to regionalize the parameters. The parameters of the response routine can then easily be found for ungauged catchments as well, from maps or databases.
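A minimal sketch of the recession-analysis idea described above: during periods when rain, snowmelt and evaporation are negligible, -dQ/dt depends on Q alone, so a two-parameter power law -dQ/dt = a * Q^b can be fitted directly to the recession-only hydrograph. The synthetic data and parameter values are illustrative.

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q**b to recession-only discharge data.

    q : discharge series (e.g. mm/day) from periods with negligible
        precipitation, snowmelt and evaporation.
    Returns (a, b) from a log-log least-squares fit.
    """
    q = np.asarray(q, dtype=float)
    dqdt = np.diff(q) / dt
    qm = 0.5 * (q[1:] + q[:-1])          # mid-interval discharge
    mask = dqdt < 0                      # keep genuinely receding steps
    loga, b = np.polyfit(np.log(qm[mask]), np.log(-dqdt[mask]), 1)[::-1]
    return np.exp(loga), b

# Synthetic recession generated with a = 0.05, b = 1.5 (analytic solution)
t = np.arange(60.0)
q_true = (2.0 ** (-0.5) + 0.5 * 0.05 * t) ** (-2.0)
print(fit_recession(q_true))    # recovers roughly (0.05, 1.5)
```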
NASA Astrophysics Data System (ADS)
Mercier, Lény; Panfili, Jacques; Paillon, Christelle; N'diaye, Awa; Mouillot, David; Darnaude, Audrey M.
2011-05-01
Accurate knowledge of fish age and growth is crucial for species conservation and management of exploited marine stocks. In exploited species, age estimation based on otolith reading is routinely used for building growth curves that are used to implement fishery management models. However, the universal fit of the von Bertalanffy growth function (VBGF) to data from commercial landings can lead to uncertainty in growth parameter inference, preventing accurate comparison of growth-based life-history traits between fish populations. In the present paper, we used a comprehensive annual sample of wild gilthead seabream (Sparus aurata L.) in the Gulf of Lions (France, NW Mediterranean) to test a methodology improving growth modelling for exploited fish populations. After validating the timing of otolith annual increment formation for all life stages, a comprehensive set of growth models (including the VBGF) was fitted to the obtained age-length data, used as a whole or sub-divided between group 0 individuals and those coming from commercial landings (ages 1-6). Comparisons of growth model accuracy based on the Akaike Information Criterion allowed assessment of the best model for each dataset; when no single model correctly fitted the data, multi-model inference (MMI) based on model averaging was carried out. The results provided evidence that growth parameters inferred with the VBGF must be used with great caution. Indeed, the VBGF turned out to be among the least accurate models for growth prediction irrespective of the dataset, and its fits to the whole population, the juvenile dataset and the adult dataset provided different growth parameters. The best models for growth prediction were the Tanaka model, for group 0 juveniles, and the MMI, for the older fish, confirming that growth differs substantially between juveniles and adults. All asymptotic models failed to correctly describe the growth of adult S. aurata, probably because of the poor representation of old individuals in the dataset. Multi-model inference combined with separate analysis of juvenile and adult fish is therefore advised to obtain objective estimates of growth parameters when sampling cannot be corrected towards older fish.
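A minimal sketch of the model-comparison step, assuming a candidate set of only two growth curves (the VBGF and a Gompertz curve standing in for the full set used above) fitted by nonlinear least squares and ranked by AIC on synthetic age-length data.

```python
import numpy as np
from scipy.optimize import curve_fit

def vbgf(age, L_inf, k, t0):
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

def gompertz(age, L_inf, k, ti):
    return L_inf * np.exp(-np.exp(-k * (age - ti)))

def aic(residuals, n_params):
    n = residuals.size
    return n * np.log(np.sum(residuals ** 2) / n) + 2 * n_params

# Synthetic age-length data (ages 0-6, lengths in cm); values are illustrative
rng = np.random.default_rng(2)
age = np.repeat(np.arange(7), 20).astype(float)
length = vbgf(age, 35.0, 0.4, -0.5) + rng.normal(0, 1.5, age.size)

models = {"VBGF": (vbgf, (30.0, 0.3, 0.0)), "Gompertz": (gompertz, (30.0, 0.5, 1.0))}
for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, age, length, p0=p0, maxfev=10000)
    print(name, "AIC =", round(aic(length - f(age, *popt), len(popt)), 1))
```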
Predicting mesoscale microstructural evolution in electron beam welding
Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena; ...
2016-03-16
Using the kinetic Monte Carlo simulator, Stochastic Parallel PARticle Kinetic Simulator, from Sandia National Laboratories, a user routine has been developed to simulate mesoscale predictions of a grain structure near a moving heat source. Here, we demonstrate the use of this user routine to produce voxelized, synthetic, three-dimensional microstructures for electron-beam welding by comparing them with experimentally produced microstructures. When simulation input parameters are matched to experimental process parameters, qualitative and quantitative agreement for both grain size and grain morphology are achieved. The method is capable of simulating both single- and multipass welds. As a result, the simulations provide an opportunity for not only accelerated design but also the integration of simulation and experiments in design, such that simulations can receive parameter bounds from experiments and, in turn, provide predictions of a resultant microstructure.
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2015-12-01
Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy and even global scale. These methods, including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI), can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. The way to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI) based front-end to allow researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently gathered leaf-level data.
NASA Astrophysics Data System (ADS)
Ma, N.; Zhang, Y.; Szilagyi, J.; Xu, C. Y.
2015-12-01
While the land surface latent and sensible heat release in the Tibetan Plateau (TP) can greatly influence the Asian monsoon circulation, knowledge of actual evapotranspiration (ETa) in the TP has been largely limited by its extremely sparse ground observation network. The complementary relationship (CR) theory therefore holds great potential for estimating ETa, since it relies solely on routine meteorological observations. Using the in-situ energy/water flux observations over the highest semiarid alpine steppe in the TP, modifications of specific components within the CR were first implemented. We found that the symmetry of the CR could be achieved for dry regions of the TP when (i) the Priestley-Taylor coefficient, (ii) the slope of the saturation vapor pressure curve and (iii) the wind function were locally calibrated by using the ETa observations on wet days, an estimate of the wet surface temperature and the Monin-Obukhov Similarity (MOS) theory, respectively. In this way, the error of the ETa simulated by the symmetric AA model could be decreased to a large extent. In addition, the asymmetric CR was confirmed in the TP when the D20 above-ground and/or E601B sunken pan evaporation (Epan) was used as a proxy for ETp. Thus daily ETa could also be estimated by coupling the D20 above-ground and/or E601B sunken pans through the CR. Additionally, to avoid modifying specific components of the CR, we also evaluated the Nonlinear-CR model and Morton's CRAE model. The former does not need pre-determination of the asymmetry of the CR, while the latter does not require wind speed data as input. We found that both models are also able to simulate the daily ETa well provided their parameter values have been locally calibrated. The sensitivity analysis shows that, if measured ETa data are not available to calibrate the models' parameter values, the Nonlinear-CR model may be a particularly good choice for estimating ETa because of its mild sensitivity to the parameter values, which makes it possible to employ published parameter values derived under similar climatic and land cover conditions. The CRAE model should also be highlighted for the TP, since the special topography causes wind speed data to suffer large uncertainties when advanced geo-statistical methods are used to spatially interpolate the point-based meteorological records.
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail; Sinitsyn, Alexey; Gulev, Sergey
2014-05-01
Cloud fraction is a critical parameter for the accurate estimation of short-wave and long-wave radiation, one of the most important surface fluxes over sea and land. Massive estimates of the total cloud cover as well as the cloud amount for different layers of clouds are available from visual observations, satellite measurements and reanalyses. However, these data are subject to different uncertainties and need continuous validation against highly accurate in-situ measurements. Sky imaging with a high-resolution fish-eye camera provides an excellent opportunity for collecting cloud cover data supplemented with additional characteristics hardly available from routine visual observations (e.g. the structure of cloud cover under broken-cloud conditions, parameters of the distribution of cloud dimensions). We present an operational automatic observational package based on a fish-eye camera taking sky images at high temporal resolution (up to 1 Hz) and a spatial resolution of 968x648 px. This spatial resolution has been justified as optimal by several sensitivity experiments. For use of the package on a research vessel, where horizontal positioning becomes critical, a special extension of the hardware and software has been developed. These modules provide explicit detection of the optimal moment for shooting. For the post-processing of sky images we developed software implementing an algorithm that filters the sunburn effect in cases of small and moderate cloud cover and broken-cloud conditions. The same algorithm accurately quantifies the cloud fraction by analyzing the color mixture for each point and introducing the so-called "grayness rate index" for every pixel. The accuracy of the algorithm has been tested using data collected during several campaigns in 2005-2011 in the North Atlantic Ocean. The collection included more than 3000 images for different cloud conditions, supplemented with observations of standard parameters. The system is fully autonomous and has a block for digital data collection on a hard disk. The system has been tested for a wide range of open ocean cloud conditions and we will demonstrate some pilot results of data processing and physical interpretation of fractional cloud cover estimation.
VLBI Analysis with the Multi-Technique Software GEOSAT
NASA Technical Reports Server (NTRS)
Kierulf, Halfdan Pascal; Andersen, Per-Helge; Boeckmann, Sarah; Kristiansen, Oddgeir
2010-01-01
GEOSAT is a multi-technique geodetic analysis software developed at Forsvarets Forsknings Institutt (Norwegian defense research establishment). The Norwegian Mapping Authority has now installed the software and has, together with Forsvarets Forsknings Institutt, adapted the software to deliver datum-free normal equation systems in SINEX format. The goal is to be accepted as an IVS Associate Analysis Center and to provide contributions to the IVS EOP combination on a routine basis. GEOSAT is based on an upper diagonal factorized Kalman filter which allows estimation of time variable parameters like the troposphere and clocks as stochastic parameters. The tropospheric delays in various directions are mapped to tropospheric zenith delay using ray-tracing. Meteorological data from ECMWF with a resolution of six hours is used to perform the ray-tracing which depends both on elevation and azimuth. Other models are following the IERS and IVS conventions. The Norwegian Mapping Authority has submitted test SINEX files produced with GEOSAT to IVS. The results have been compared with the existing IVS combined products. In this paper the outcome of these comparisons is presented.
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM, while significantly reducing its uncertainty.
Examining the cost of delivering routine immunization in Honduras.
Janusz, Cara Bess; Castañeda-Orjuela, Carlos; Molina Aguilera, Ida Berenice; Felix Garcia, Ana Gabriela; Mendoza, Lourdes; Díaz, Iris Yolanda; Resch, Stephen C
2015-05-07
Many countries have introduced new vaccines and expanded their immunization programs to protect additional risk groups, thus raising the cost of routine immunization delivery. Honduras recently adopted two new vaccines, and the country continues to broaden the reach of its program to adolescents and adults. In this article, we estimate and examine the economic cost of the Honduran routine immunization program for the year 2011. The data were gathered from a probability sample of 71 health facilities delivering routine immunization, as well as 8 regional and 1 central office of the national immunization program. Data were collected on vaccinations delivered, staff time dedicated to the program, cold chain equipment and upkeep, vehicle use, infrastructure, and other recurrent and capital costs at each health facility and administrative office. Annualized economic costs were estimated from a modified societal perspective and reported in 2011 US dollars. With the addition of rotavirus and pneumococcal conjugate vaccines, the total cost for routine immunization delivery in Honduras for 2011 was US$ 32.5 million. Vaccines and related supplies accounted for 23% of the costs. Labor, cold chain, and vehicles represented 54%, 4%, and 1%, respectively. At the facility level, the non-vaccine system costs per dose ranged widely, from US$ 25.55 in facilities delivering fewer than 500 doses per year to US$ 2.84 in facilities with volume exceeding 10,000 doses per year. Cost per dose was higher in rural facilities despite somewhat lower wage rates for health workers in these settings; this appears to be driven by lower demand for services per health worker in sparsely populated areas, rather than increased cost of outreach. These more-precise estimates of the operational costs to deliver routine immunizations provide program managers with important information for mobilizing resources to help sustain the program and for improving annual planning and budgeting as well as longer-term resource allocation decisions. Copyright © 2015. Published by Elsevier Ltd.
Brenzel, Logan
2015-05-07
Immunization is one of the most cost-effective health interventions, but as countries introduce new vaccines and scale-up immunization coverage, costs will likely increase. This paper updates estimates of immunization costs and financing based on information from comprehensive multi-year plans (cMYPs) from GAVI-eligible countries during a period when countries planned to introduce a range of new vaccines (2008-2016). The analysis database included information from baseline and 5-year projection years for each country cMYP, resulting in a total sample size of 243 observations. Two-thirds were from African countries. Cost data included personnel, vaccine, injection, transport, training, maintenance, cold chain and other capital investments. Financing from government and external sources was evaluated. All estimates were converted to 2010 US Dollars. Statistical analysis was performed using STATA, and results were population-weighted. Results pertain to country planning estimates. Average annual routine immunization cost was $62 million. Vaccines continued to be the major cost driver (51%) followed by immunization-specific personnel costs (22%). Non-vaccine delivery costs accounted for almost half of routine program costs (44%). Routine delivery cost per dose averaged $0.61 and the delivery cost per infant was $10. The cost per DTP3 vaccinated child was $27. Routine program costs increased with each new vaccine introduced. Costs accounted for 5% of government health expenditures. Governments accounted for 67% of financing. Total and average costs of routine immunization programs are rising as coverage rates increase and new vaccines are introduced. The cost of delivering vaccines is nearly equivalent to the cost of vaccines. Governments are financing greater proportions of the immunization program but there may be limits in resource scarce countries. Price reductions for new vaccines will help reduce costs and the burden of financing. Strategies to improve efficiency in service delivery should be pursued. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.
Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Zubair, Mohammad
1993-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with non-optimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup occurs because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself exhibits less-than-ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
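A minimal sketch of the feedback idea on a toy problem: a plain intensity-threshold segmentation is run repeatedly, scored against an abstract ground truth (an expected object count), and the threshold giving the best score is kept. Both the segmentation and the score are stand-ins for the pipelines evaluated in the paper.

```python
import numpy as np
from scipy import ndimage

def segment(image, threshold):
    """Plain intensity threshold followed by connected-component labelling."""
    labels, n_objects = ndimage.label(image > threshold)
    return labels, n_objects

def adapt_threshold(image, expected_count, thresholds=np.linspace(0.1, 0.9, 17)):
    """Feedback loop: pick the threshold whose object count best matches
    the abstract ground truth (expected number of objects)."""
    scores = [abs(segment(image, t)[1] - expected_count) for t in thresholds]
    best = thresholds[int(np.argmin(scores))]
    return best, segment(image, best)[1]

# Synthetic image: three bright blobs on a noisy background
rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (100, 100))
for cx, cy in [(20, 20), (50, 70), (80, 40)]:
    img[cx - 5:cx + 5, cy - 5:cy + 5] += 0.6

print(adapt_threshold(img, expected_count=3))
```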
First results of DORIS data analysis at Geodetic Observatory Pecný
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Hugentobler, Urs; Le Bail, Karine
2006-11-01
In a cooperation between the Astronomical Institute, University of Bern (AIUB), the Geodetic Observatory Pecný (GOPE), and the Institut Géographique National (IGN), DORIS data analysis capabilities were implemented into a development version of the Bernese GPS software. The DORIS Doppler observables are reformulated such that they are similar to global navigation satellite system (GNSS) carrier-phase observations, allowing the use of the same observation models and algorithms as for GNSS carrier-phase data analysis with only minor software modifications. As such, the same algorithms may be used to process DORIS carrier-phase observations. First results from the analysis of 3 weeks of DORIS data (September 2004, five DORIS-equipped satellites) at GOPE are promising and are presented here. They include the comparison of station coordinates with coordinate estimates derived by the Laboratoire d’Etudes en Géophysique et Océanographie Spatiale/Collecte Localisation Satellites analysis centre (LCA) and the Institut Géographique National/Jet Propulsion Laboratory (IGN/JPL), and the comparison of Earth orientation parameters (EOPs) with the International Earth Rotation and Reference Frames Service (IERS) C04 model. The modified Bernese results are of a slightly lower, but comparable, quality than corresponding solutions routinely computed within the IDS (International DORIS Service). The weekly coordinate repeatability RMS is of the order of 2-3 cm for each 3D station coordinate. Comparison with corresponding estimates of station coordinates from current IDS analysis centers demonstrates similar precision. Daily pole component estimates show a mean difference from IERS-C04 of 0.6 mas in Xp and -0.5 mas in Yp, and an RMS of 0.8 mas in Xp and 0.9 mas in Yp (mean removed). An automatic analysis procedure is under development at GOPE, and routine DORIS data processing will be implemented in the near future.
Stochastic determination of matrix determinants
NASA Astrophysics Data System (ADS)
Dorn, Sebastian; Enßlin, Torsten A.
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
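A minimal sketch of the probing idea, using the identity log det(A) = Tr(log A) and Rademacher probing vectors; for clarity the matrix logarithm is formed explicitly with scipy here, whereas the implicit-operator setting discussed above would build log(A) z from matrix-vector products alone.

```python
import numpy as np
from scipy.linalg import logm

def stochastic_logdet(A, n_probes=200, rng=None):
    """Hutchinson-type probing estimate of log det(A) = Tr(log A).

    For illustration, log(A) is formed explicitly; the implicit-operator
    setting would replace logm(A) @ z by an expression built solely from
    matrix-vector multiplications with A.
    """
    rng = np.random.default_rng(rng)
    L = np.real(logm(A))            # discard negligible imaginary round-off
    n = A.shape[0]
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probing vector
        total += z @ (L @ z)
    return total / n_probes

# Symmetric positive definite test matrix
rng = np.random.default_rng(4)
B = rng.normal(size=(50, 50))
A = B @ B.T + 50 * np.eye(50)
print(stochastic_logdet(A), np.linalg.slogdet(A)[1])
```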
NASA Technical Reports Server (NTRS)
Feldman, U.
1984-01-01
Knowledge, in near real time, of the surface drag coefficient for drifting pack ice is vital for predicting its motion. Since this is not routinely available from measurements, it must be replaced by estimates. Hence, a method for estimating this variable, as well as the drag coefficient at the water/ice interface and the ice thickness, for drifting open pack ice was developed. These estimates were derived from three-day sequences of LANDSAT-1 MSS images and surface weather charts and from the observed minima and maxima of these variables. The method was tested with four data sets in the southeastern Beaufort Sea. Acceptable results were obtained for three data sets. Routine application of the method depends on the availability of data from an all-weather airborne or spaceborne remote sensing system producing images with high geometric fidelity and high resolution.
Detecting Potential Water Quality Issues by Mapping Trophic Status Using Google Earth Engine
NASA Astrophysics Data System (ADS)
Nguy-Robertson, A. L.; Harvey, K.; Huening, V.; Robinson, H.
2017-12-01
The identification, timing, and spatial distribution of recurrent algal blooms and aquatic vegetation can help water managers and policy makers make better water resource decisions. In many parts of the world there is little monitoring or reporting of water quality due to the costs and effort required to collect and process water samples. We propose to use Google Earth Engine to quickly identify the recurrence of trophic states in global inland water systems. Utilizing Landsat and Sentinel multispectral imagery, inland water quality parameters (e.g., chlorophyll a concentration) can be estimated and waters can be classified by trophic state: oligotrophic, mesotrophic, eutrophic, and hypereutrophic. The recurrence of eutrophic and hypereutrophic observations can highlight potentially problematic locations where algal blooms or aquatic vegetation occur routinely. Eutrophic and hypereutrophic waters commonly include many harmful algal blooms and waters prone to fish die-offs from hypoxia. While these maps may be limited by the accuracy of the algorithms used to estimate chlorophyll a, relative comparisons at a local scale can help water managers focus limited resources.
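A minimal sketch of the per-observation classification step is given below: an estimated chlorophyll-a concentration is mapped to a trophic class, and the recurrence of problem states is counted over a time series. The threshold values are illustrative assumptions, not the thresholds used by the authors.

```python
# Hypothetical sketch of the classification step: map an estimated
# chlorophyll-a concentration (ug/L) to a trophic state. Thresholds are
# illustrative assumptions only.
def trophic_state(chl_a_ug_per_l: float) -> str:
    if chl_a_ug_per_l < 2.6:
        return "oligotrophic"
    elif chl_a_ug_per_l < 7.3:
        return "mesotrophic"
    elif chl_a_ug_per_l < 56.0:
        return "eutrophic"
    else:
        return "hypereutrophic"

# Recurrence of problem states over a time series of estimates for one location
observations = [1.8, 9.5, 60.2, 12.0, 75.3]
flags = [trophic_state(c) in ("eutrophic", "hypereutrophic") for c in observations]
print(f"eutrophic-or-worse in {sum(flags)}/{len(flags)} observations")
```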
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob
2013-01-01
This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for verification of the MBEC over a wide range of operating points. The on-board model is a piecewise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations are presented in which the MBEC provides a stall margin limit for the controller protection logic, which could offer benefits over the simple acceleration schedule currently used in traditional engine control architectures.
Integrated Process Modeling-A Process Validation Life Cycle Companion.
Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-17
During the regulatory-requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
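The sketch below illustrates the integrated-process-model idea in its simplest form: random variation in process parameters is propagated through two stacked, hypothetical unit-operation response functions to estimate the probability of an out-of-specification event. The unit-operation models, parameter distributions, and specification limit are illustrative assumptions, not the CMO's actual process models.

```python
# Minimal Monte Carlo sketch of the IPM idea: propagate parameter variation
# through two hypothetical, stacked unit operations and estimate P(OOS).
import numpy as np

rng = np.random.default_rng(42)
n_runs = 100_000

# Hypothetical process parameters with assumed run-to-run variation
ph = rng.normal(7.0, 0.1, n_runs)          # fermentation pH
load = rng.normal(25.0, 2.0, n_runs)       # chromatography load (g/L resin)

# Unit operation 1 (fermentation): impurity level entering purification
impurity_in = 5.0 + 8.0 * np.abs(ph - 7.0) + rng.normal(0, 0.3, n_runs)

# Unit operation 2 (chromatography): clearance degrades with column load
clearance = np.clip(0.95 - 0.004 * (load - 25.0), 0.0, 1.0)
impurity_out = impurity_in * (1.0 - clearance) + rng.normal(0, 0.05, n_runs)

spec_limit = 0.5                            # assumed CQA specification
p_oos = np.mean(impurity_out > spec_limit)
print(f"estimated out-of-specification probability: {p_oos:.3%}")
```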
Limitations in estimating phosphorus sorption capacity from soil properties
USDA-ARS?s Scientific Manuscript database
An important component of all P loss models is how P cycling in soils is described. The P cycling routines in most models are based on the routines developed for the EPIC model over 30 years ago. EPIC was developed so that it could be parameterized with easily obtainable soil data and thus, by neces...
Multiple trait genetic evaluation of clinical mastitis in three dairy cattle breeds.
Govignon-Gion, A; Dassonneville, R; Baloche, G; Ducrocq, V
2016-04-01
In 2010, a routine genetic evaluation on occurrence of clinical mastitis in three main dairy cattle breeds, Montbéliarde (MO), Normande (NO) and Holstein (HO), was implemented in France. Records were clinical mastitis events reported by farmers to milk recording technicians, and the analyzed trait was the binary variable describing the occurrence of a mastitis case within the first 150 days of the first three lactations. Genetic parameters of clinical mastitis were estimated for the three breeds. Low heritability estimates were found: between 2% and 4% depending on the breed. Despite its low heritability, the trait exhibits genetic variation, so efficient genetic improvement is possible. Genetic correlations with other traits were estimated, showing large correlations (often > 0.50 in absolute value) between clinical mastitis and somatic cell score (SCS), longevity and some udder traits. Correlation with milk yield was moderate and unfavorable (ρ = 0.26 to 0.30). High milking speed was genetically associated with less mastitis in MO (ρ = -0.14) but with more mastitis in HO (ρ = 0.18). A two-step approach was implemented for routine evaluation: first, a univariate evaluation based on a linear animal model with permanent environment effect led to pre-adjusted records (defined as records corrected for all non-genetic effects) and associated weights. These data were then combined with similar pre-adjusted records for other traits in a multiple-trait BLUP animal model. The combined breeding values for clinical mastitis obtained are the official (published) ones. Mastitis estimated breeding values (EBV) were then combined with SCS EBV into an udder health index, which receives a weight of 14.5% to 18.5% in the French total merit index (ISU) of the three breeds. Interbull genetic correlations for mastitis occurrence were very high (ρ = 0.94) with Nordic countries, where much stricter recording systems exist, reflecting a satisfactory quality of phenotypes as reported by the farmers. They were lower (around 0.80) with countries supplying SCS as a proxy for the international evaluation on clinical mastitis.
Choi, Soo An; Yun, Hwi-yeol; Lee, Eun Sook; Shin, Wan Gyoon
2014-03-01
Safe and effective use of digoxin in hospitalized populations requires information about the drug's pharmacokinetics and the influence of various factors on drug disposition. However, no attempts have been made to link an individual's digoxin requirements with nutritional status. The main goal of this study was to estimate the population pharmacokinetics of digoxin and to identify the nutritional status that explains pharmacokinetic variability in hospitalized Korean patients. Routine therapeutic drug-monitoring data from 106 patients who received oral digoxin at Seoul National University Bundang Hospital were retrospectively collected. The pharmacokinetics of digoxin were analyzed with a 1-compartment open pharmacokinetic model by using a nonlinear mixed-effects modeling tool (NONMEM) and a multiple trough screening approach. The effects of demographic characteristics and biochemical and nutritional indices were explored. Estimates generated by using NONMEM indicated that the CL/F of digoxin was influenced by renal function, serum potassium, age, and percentage of ideal body weight (PIBW). These influences could be modeled by the following equation: CL/F (L/h) = 1.36 × (creatinine clearance/50)^1.580 × K^0.835 × 0.055 × (age/65) × (PIBW/100)^0.403. The interindividual %CV for CL/F was 34.3%, and the residual variability (SD) between observed and predicted concentrations was 0.225 μg/L. The median estimates from a bootstrap procedure were comparable and within 5% of the estimates from NONMEM. Correlation analysis with the validation group showed a linear correlation between observed and predicted values. The use of this model in routine therapeutic drug monitoring requires that certain conditions be met that are consistent with the conditions of the subpopulations in the present study. Therefore, further studies are needed to clarify the effects of nutritional status on digoxin pharmacokinetics. The present study established important sources of variability in digoxin pharmacokinetics and highlighted the relationship between pharmacokinetic parameters and nutritional status in hospitalized Korean patients. Copyright © 2014 Elsevier HS Journals, Inc. All rights reserved.
GASPLOT - A computer graphics program that draws a variety of thermophysical property charts
NASA Technical Reports Server (NTRS)
Trivisonno, R. J.; Hendricks, R. C.
1977-01-01
A FORTRAN V computer program, written for the UNIVAC 1100 series, is used to draw a variety of precision thermophysical property charts on the Calcomp plotter. In addition to the program (GASPLOT), which requires (15 160)_10 storage locations, a thermophysical properties routine is needed to produce plots. The program is designed so that any two of the state variables, the derived variables, or the transport variables may be plotted as the ordinate-abscissa pair with as many as five parametric variables. The parameters may be temperature, pressure, density, enthalpy, and entropy. Each parameter may have as many as 49 values, and the range of the variables is limited only by the thermophysical properties routine.
NASA Astrophysics Data System (ADS)
Papanikolaou, Xanthos; Anastasiou, Demitris; Marinou, Aggeliki; Zacharis, Vangelis; Paradissis, Demitris
2015-04-01
The Dionysos Satellite Observatory and Higher Geodesy Laboratory of the National Technical University of Athens have developed an automated processing scheme to accommodate the daily analysis of all available continuous GNSS stations in Greece. At present, a total of approximately 150 regional stations are processed, divided into 4 subnetworks. GNSS data are processed routinely on a daily basis via the Bernese GNSS Software v5.0, developed by AIUB. Each network is solved twice within a period of 20 days, first using ultra-rapid products (with a latency of ~10 hours) and then using final products (with a latency of ~20 days). Observations are processed using carrier phase, modelled as double differences in the ionosphere-free linear combination. Analysis results include coordinate estimates, ionospheric corrections (TEC maps) and hourly tropospheric parameters (zenith delay). This processing scheme has proved helpful in investigating abrupt geophysical phenomena in near real time, such as the 2011 Santorini inflation episode and the 2014 Kephalonia earthquakes. All analysis results and products are made available via a dedicated webpage. Additionally, most of the GNSS data are hosted in a GSAC web platform, available to all interested parties. Data and results are made available through the laboratory's dedicated website: http://dionysos.survey.ntua.gr/.
Automatic detection of kidney in 3D pediatric ultrasound images using deep neural networks
NASA Astrophysics Data System (ADS)
Tabrizi, Pooneh R.; Mansoor, Awais; Biggs, Elijah; Jago, James; Linguraru, Marius George
2018-02-01
Ultrasound (US) imaging is the routine and safe diagnostic modality for detecting pediatric urology problems, such as hydronephrosis in the kidney. Hydronephrosis is the swelling of one or both kidneys because of the build-up of urine. Early detection of hydronephrosis can lead to a substantial improvement in kidney health outcomes. Generally, US imaging is a challenging modality for the evaluation of pediatric kidneys with differing shape, size, and texture characteristics. The aim of this study is to present an automatic detection method to help kidney analysis in pediatric 3D US images. The method localizes the kidney based on its minimum-volume oriented bounding box using deep neural networks. Separate deep neural networks are trained to estimate the kidney position, orientation, and scale, making the method computationally efficient by avoiding full parameter training. The performance of the method was evaluated using a dataset of 45 kidneys (18 normal and 27 diseased kidneys diagnosed with hydronephrosis) through leave-one-out cross-validation. Quantitative results show the proposed detection method could extract the kidney position, orientation, and scale ratio with root mean square values of 1.3 +/- 0.9 mm, 6.34 +/- 4.32 degrees, and 1.73 +/- 0.04, respectively. This method could be helpful in automating kidney segmentation for routine clinical evaluation.
A Gendered Lifestyle-Routine Activity Approach to Explaining Stalking Victimization in Canada.
Reyns, Bradford W; Henson, Billy; Fisher, Bonnie S; Fox, Kathleen A; Nobles, Matt R
2016-05-01
Research into stalking victimization has proliferated over the last two decades, but several research questions related to victimization risk remain unanswered. Accordingly, the present study utilized a lifestyle-routine activity theoretical perspective to identify risk factors for victimization. Gender-based theoretical models also were estimated to assess the possible moderating effects of gender on the relationship between lifestyle-routine activity concepts and victimization risk. Based on an analysis of a representative sample of more than 15,000 residents of Canada from the Canadian General Social Survey (GSS), results suggested conditional support for lifestyle-routine activity theory and for the hypothesis that predictors of stalking victimization may be gender based. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
Impact of operator on determining functional parameters of nuclear medicine procedures.
Mohammed, A M; Naddaf, S Y; Mahdi, F S; Al-Mutawa, Q I; Al-Dossary, H A; Elgazzar, A H
2006-01-01
The study was designed to assess the significance of interoperator variability in the estimation of functional parameters for four nuclear medicine procedures. Three nuclear medicine technologists with varying years of experience processed 20 randomly selected cases, covering a diverse range of function, for each of the following study types: renography, renal cortical scans, myocardial perfusion gated single-photon emission computed tomography (MP-GSPECT) and gated blood pool ventriculography (GBPV). The technologists used the same standard processing routines and were blinded to each other's results. The means of the values and the means of differences calculated case by case were statistically analyzed by one-way ANOVA. The values were further analyzed using Pearson correlation. The ranges of the mean values +/- standard deviation obtained by the three technologists were 50.65 +/- 3.9 to 50.92 +/- 4.4% (relative renal function) for renography, 51.43 +/- 8.4 to 51.55 +/- 8.8% (relative renal function) for renal cortical scans, 57.40 +/- 14.3 to 58.30 +/- 14.9% (left ventricular ejection fraction) for MP-GSPECT and 54.80 +/- 12.8 to 55.10 +/- 13.1% for GBPV. The difference was not statistically significant, p > 0.9. The values showed a high correlation of more than 0.95. Calculated case by case, the mean of differences +/- SD was found to range from 0.42 +/- 0.36% in renal cortical scans to 1.35 +/- 0.87% in MP-GSPECT with a maximum difference of 4.00%. The difference was not statistically significant, p > 0.19. The estimated functional parameters were reproducible and operator-independent as long as the standard processing instructions were followed. Copyright 2006 S. Karger AG, Basel.
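The operator-comparison statistics described above can be reproduced in a few lines; the sketch below runs a one-way ANOVA across three hypothetical technologists and a pairwise Pearson correlation on made-up values, purely to illustrate the analysis, not the study's data.

```python
# Sketch of the operator-comparison analysis on made-up data: one-way ANOVA
# across three technologists plus a pairwise Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_function = rng.normal(51.0, 8.0, 20)            # 20 cases, e.g. split renal function (%)
tech_a = true_function + rng.normal(0, 0.5, 20)       # small operator-dependent noise
tech_b = true_function + rng.normal(0, 0.5, 20)
tech_c = true_function + rng.normal(0, 0.5, 20)

f_stat, p_value = stats.f_oneway(tech_a, tech_b, tech_c)
r_ab, _ = stats.pearsonr(tech_a, tech_b)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}; r(A,B) = {r_ab:.3f}")
```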
Sawadogo, Souleymane; Makumbi, Boniface; Purfield, Anne; Ndjavera, Christophine; Mutandi, Gram; Maher, Andrew; Kaindjee-Tjituka, Francina; Kaplan, Jonathan E; Park, Benjamin J; Lowrance, David W
2016-01-01
Cryptococcal meningitis is common and associated with high mortality among HIV-infected persons. The World Health Organization recommends that routine cryptococcal antigen (CrAg) screening in ART-naïve adults with a CD4+ count <100 cells/μL, followed by pre-emptive antifungal therapy for CrAg-positive patients, be considered where CrAg prevalence is ≥3%. The prevalence of CrAg among HIV-infected adults in Namibia is unknown. We estimated CrAg prevalence among HIV-infected adults receiving care in Namibia for the purpose of informing routine screening strategies. The study design was cross-sectional. De-identified plasma specimens collected for routine CD4+ testing from HIV-infected adults enrolled in HIV care at 181 public health facilities from November 2013 to January 2014 were identified at the national reference laboratory. Remnant plasma from specimens with CD4+ counts <200 cells/μL were sampled and tested for CrAg using the IMMY® Lateral Flow Assay. CrAg prevalence was estimated and assessed for associations with age, sex, and CD4+ count. A total of 825 specimens were tested for CrAg. The median (IQR) age of patients from whom specimens were collected was 38 (32-46) years, 45.9% were female and 62.9% of the specimens had CD4+ counts <100 cells/μL. CrAg prevalence was 3.3% overall, and 3.9% and 2.3% among samples with CD4+ counts <100 cells/μL and 100-200 cells/μL, respectively. CrAg positivity was significantly higher among patients with CD4+ counts <50 cells/μL (7.2%, P = 0.001) relative to those with CD4+ counts of 50-200 cells/μL (2.2%). This is the first study to estimate CrAg prevalence among HIV-infected patients in Namibia. CrAg prevalence of ≥3.0% among patients with CD4+ counts <100 cells/μL justifies routine CrAg screening and preemptive treatment among HIV-infected adults in Namibia in line with WHO recommendations. Patients with CD4+ counts <100 cells/μL have a significantly greater risk for CrAg positivity. Revised guidelines for ART in Namibia now recommend routine screening for CrAg.
Chabiniok, Radomir; Wang, Vicky Y; Hadjicharalambous, Myrianthi; Asner, Liya; Lee, Jack; Sermesant, Maxime; Kuhl, Ellen; Young, Alistair A; Moireau, Philippe; Nash, Martyn P; Chapelle, Dominique; Nordsletten, David A
2016-04-06
With heart and cardiovascular diseases continually challenging healthcare systems worldwide, translating basic research on cardiac (patho)physiology into clinical care is essential. Exacerbating this already extensive challenge is the complexity of the heart, relying on its hierarchical structure and function to maintain cardiovascular flow. Computational modelling has been proposed and actively pursued as a tool for accelerating research and translation. Allowing exploration of the relationships between physics, multiscale mechanisms and function, computational modelling provides a platform for improving our understanding of the heart. Further integration of experimental and clinical data through data assimilation and parameter estimation techniques is bringing computational models closer to use in routine clinical practice. This article reviews developments in computational cardiac modelling and how their integration with medical imaging data is providing new pathways for translational cardiac modelling.
User manual for Blossom statistical package for R
Talbert, Marian; Cade, Brian S.
2005-01-01
Blossom is an R package with functions for making statistical comparisons with distance-function based permutation tests developed by P.W. Mielke, Jr. and colleagues at Colorado State University (Mielke and Berry, 2001) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U.S. Geological Survey. This manual is intended to provide identical documentation of the statistical methods and interpretations as the manual by Cade and Richards (2005) does for the original Fortran program, but with changes made with respect to command inputs and outputs to reflect the new implementation as a package for R (R Development Core Team, 2012). This implementation in R has allowed for numerous improvements not supported by the Cade and Richards (2005) Fortran implementation, including use of categorical predictor variables in most routines.
Three dimensional calculation of thermonuclear ignition conditions for magnetized targets
NASA Astrophysics Data System (ADS)
Cortez, Ross; Cassibry, Jason; Lapointe, Michael; Adams, Robert
2017-10-01
Fusion power balance calculations, often performed using analytic methods, are used to estimate the design space for ignition conditions. In this paper, fusion power balance is calculated utilizing a 3-D smoothed particle hydrodynamics code (SPFMax) incorporating recent stopping power routines. Effects of thermal conduction, multigroup radiation emission and nonlocal absorption, ion/electron thermal equilibration, and compressional work are studied as a function of target and liner parameters and geometry for D-T, D-D, and 6Li-D fuels to identify the potential ignition design space. Here, ignition is defined as the condition when fusion particle deposition equals or exceeds the losses from heat conduction and radiation. The simulations are in support of ongoing research with NASA to develop advanced propulsion systems for rapid interplanetary space travel. Supported by NASA Innovative Advanced Concepts and NASA Marshall Space Flight Center.
NASA Astrophysics Data System (ADS)
Smith, Shawn; Bourassa, Mark
2014-05-01
The development of a new surface flux dataset based on underway meteorological observations from research vessels will be presented. The research vessel data center at the Florida State University routinely acquires, quality controls, and distributes underway surface meteorological and oceanographic observations from over 30 oceanographic vessels. These activities are coordinated by the Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative in partnership with the Rolling Deck to Repository (R2R) project. Recently, the SAMOS data center has used these underway observations to produce bulk flux estimates for each vessel along individual cruise tracks. A description of this new flux product, along with the underlying data quality control procedures applied to SAMOS observations, will be provided. Research vessels provide underway observations at high temporal frequency (1 min. sampling interval) that include navigational (position, course, heading, and speed), meteorological (air temperature, humidity, wind, surface pressure, radiation, rainfall), and oceanographic (surface sea temperature and salinity) samples. Vessels recruited to the SAMOS initiative collect a high concentration of data within the U.S. continental shelf and also frequently operate well outside routine shipping lanes, capturing observations in extreme ocean environments (Southern, Arctic, South Atlantic, and South Pacific oceans). These observations are atypical for their spatial and temporal sampling, making them very useful for many applications including validation of numerical models and satellite retrievals, as well as local assessments of natural variability. Individual SAMOS observations undergo routine automated quality control and select vessels receive detailed visual data quality inspection. The result is a quality-flagged data set that is ideal for calculating turbulent flux estimates. We will describe the bulk flux algorithms that have been applied to the observations and the choices of constants that are used. Analysis of the preliminary SAMOS flux products will be presented, including spatial and temporal coverage for each derived parameter. The unique quality and sampling locations of research vessel observations and their independence from many models and products makes them ideal for validation studies. The strengths and limitations of research observations for flux validation studies will be discussed. The authors welcome a discussion with the flux community regarding expansion of the SAMOS program to include additional international vessels, thus facilitating an expansion of this research vessel-based flux product.
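For orientation, the sketch below evaluates simplified neutral-stability bulk formulas for wind stress and the turbulent heat fluxes from the kinds of underway variables listed above. The operational SAMOS product relies on a full bulk algorithm with stability and measurement-height corrections; the constant transfer coefficients here are assumptions for illustration only.

```python
# Simplified neutral-stability bulk flux formulas (illustrative only; the
# operational algorithm includes stability and height corrections).
RHO_AIR = 1.22      # kg m-3, air density
CP_AIR = 1004.0     # J kg-1 K-1, specific heat of air
LV = 2.5e6          # J kg-1, latent heat of vaporization
CD, CH, CE = 1.2e-3, 1.0e-3, 1.2e-3   # assumed drag/heat/moisture coefficients

def bulk_fluxes(wind_speed, t_sea, t_air, q_sea, q_air):
    """Wind stress (N m-2), sensible and latent heat fluxes (W m-2)."""
    tau = RHO_AIR * CD * wind_speed ** 2
    sensible = RHO_AIR * CP_AIR * CH * wind_speed * (t_sea - t_air)
    latent = RHO_AIR * LV * CE * wind_speed * (q_sea - q_air)
    return tau, sensible, latent

print(bulk_fluxes(wind_speed=8.0, t_sea=28.0, t_air=26.5, q_sea=0.022, q_air=0.017))
```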
Mookiah, M R K; Rohrmeier, A; Dieckmeyer, M; Mei, K; Kopp, F K; Noel, P B; Kirschke, J S; Baum, T; Subburaj, K
2018-04-01
This study investigated the feasibility of opportunistic osteoporosis screening in routine contrast-enhanced MDCT exams using texture analysis. The results showed an acceptable reproducibility of texture features, and these features could discriminate between the healthy and osteoporotic fracture cohorts with an accuracy of 83%. The aim of this study is to investigate the feasibility of opportunistic osteoporosis screening in routine contrast-enhanced MDCT exams using texture analysis. We performed texture analysis at the spine in routine MDCT exams and investigated the effect of intravenous contrast medium (IVCM) (n = 7), slice thickness (n = 7), the long-term reproducibility (n = 9), and the ability to differentiate between healthy and osteoporotic fracture cohorts (n = 9 age- and gender-matched pairs). Eight texture features were extracted using the gray-level co-occurrence matrix (GLCM). The independent-sample t test was used to rank the features of the healthy/fracture cohorts, and classification was performed using a support vector machine (SVM). The results revealed significant correlations between texture parameters derived from MDCT scans with and without IVCM (r up to 0.91), slice thickness of 1 mm versus 2 and 3 mm (r up to 0.96), and scan-rescan (r up to 0.59). The performance of the SVM classifier was evaluated using 10-fold cross-validation and revealed an average classification accuracy of 83%. Opportunistic osteoporosis screening at the spine using specific texture parameters (energy, entropy, and homogeneity) and SVM can be performed in routine contrast-enhanced MDCT exams.
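A compact sketch of this kind of pipeline is shown below: GLCM features are computed from image patches and fed to an SVM with 10-fold cross-validation. The synthetic patches, the particular feature set, and the classifier settings are illustrative assumptions, not the study's protocol.

```python
# Sketch of a GLCM-texture + SVM pipeline on synthetic patches (illustrative).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def glcm_features(patch_uint8):
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy from the normalized GLCM
    return [graycoprops(glcm, "energy")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0],
            graycoprops(glcm, "contrast")[0, 0],
            entropy]

def synthetic_patch(noise):
    base = np.clip(rng.normal(128, noise, (32, 32)), 0, 255)
    return base.astype(np.uint8)

# Toy contrast: "healthy" patches smoother, "fracture cohort" patches noisier
X = [glcm_features(synthetic_patch(10)) for _ in range(30)] + \
    [glcm_features(synthetic_patch(40)) for _ in range(30)]
y = [0] * 30 + [1] * 30

scores = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2f}")
```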
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonfrate, A; Farah, J; Sayah, R
2015-06-15
Purpose: Development of a parametric equation suitable for daily use in routine clinical practice to provide estimates of stray neutron doses in proton therapy. Methods: Monte Carlo (MC) calculations using the UF-NCI 1-year-old phantom were exercised to determine the variation of stray neutron doses as a function of irradiation parameters while performing intracranial treatments. This was done by individually changing the proton beam energy, modulation width, collimator aperture and thickness, compensator thickness and the air gap size, while their impact on neutron doses was put into a single equation. The variation of neutron doses with distance from the target volume was also included in it. A first step consisted of establishing the fitting coefficients using 221 learning data points (neutron absorbed doses obtained with MC simulations), while a second step consisted of validating the final equation. Results: The variation of stray neutron doses with irradiation parameters was fitted with linear, polynomial, etc. models, while a power-law model was used to fit the variation of stray neutron doses with the distance from the target volume. The parametric equation fitted the MC simulations well while establishing the fitting coefficients, as the discrepancies in the estimates of neutron absorbed doses were within 10%. The discrepancy can reach ∼25% for the bladder, the farthest organ from the target volume. Finally, the validation showed results in compliance with MC calculations, since the discrepancies were also within 10% for head-and-neck and thoracic organs, while they can reach ∼25%, again for pelvic organs. Conclusion: The parametric equation presents promising results and will be validated for other target sites as well as other facilities to go towards a universal method.
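To illustrate only the distance-dependence part of such a model, the sketch below fits a power law D(d) = a · d^(-b) to hypothetical stray-dose values at increasing distance from the target volume; the data points and units are made up, and the full published model also includes the irradiation-parameter terms.

```python
# Hypothetical sketch: fit a power law to stray neutron dose vs. distance.
import numpy as np
from scipy.optimize import curve_fit

def power_law(distance_cm, a, b):
    return a * distance_cm ** (-b)

distance = np.array([5.0, 10.0, 20.0, 30.0, 50.0, 70.0])            # cm from target (assumed)
dose = np.array([2.1e-3, 9.0e-4, 3.5e-4, 2.0e-4, 9.5e-5, 6.0e-5])    # mGy per Gy (assumed)

params, _ = curve_fit(power_law, distance, dose, p0=(1e-2, 1.0))
a_fit, b_fit = params
print(f"D(d) ~ {a_fit:.2e} * d^(-{b_fit:.2f})")
```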
Prediction of the area affected by earthquake-induced landsliding based on seismological parameters
NASA Astrophysics Data System (ADS)
Marc, Odin; Meunier, Patrick; Hovius, Niels
2017-07-01
We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.
NASA Astrophysics Data System (ADS)
Bo, Zhang; Li, Jin-Ling; Wang, Guan-Gli
2002-01-01
We checked the dependence of the parameter estimates on the choice of piecewise interval in the continuous piecewise linear modeling of the residual clock and atmosphere effects through individual analysis of 27 VLBI experiments involving Shanghai station (Seshan 25m). The following are tentatively shown: (1) Different choices of the piecewise interval lead to differences in the estimation of station coordinates and in the weighted root mean squares (wrms) of the delay residuals, which can be of the order of centimeters or tens of picoseconds, respectively. The choice of piecewise interval should therefore not be arbitrary. (2) The piecewise interval should not be too long; otherwise the short-term variations in the residual clock and atmospheric effects cannot be properly modeled. In order to maintain enough degrees of freedom in parameter estimation, however, the interval cannot be too short; otherwise the normal equation may become nearly or exactly singular and the noise cannot be constrained as well. Therefore the choice of the interval should be within some reasonable range. (3) Since the conditions of clock and atmosphere differ from experiment to experiment and from station to station, the reasonable range of the piecewise interval should be tested and chosen separately for each experiment as well as for each station by real data analysis. This is arduous work in routine data analysis. (4) Generally speaking, with the default interval for clock as 60 min, the reasonable range of the piecewise interval for residual atmospheric effect modeling is between 10 min and 40 min, while with the default interval for atmosphere as 20 min, that for residual clock behavior is between 20 min and 100 min.
Transmission dynamics and economics of rabies control in dogs and humans in an African city.
Zinsstag, J; Dürr, S; Penny, M A; Mindekem, R; Roth, F; Menendez Gonzalez, S; Naissengar, S; Hattendorf, J
2009-09-01
Human rabies in developing countries can be prevented through interventions directed at dogs. Potential cost-savings for the public health sector of interventions aimed at animal-host reservoirs should be assessed. Available deterministic models of rabies transmission between dogs were extended to include dog-to-human rabies transmission. Model parameters were fitted to routine weekly rabid-dog and exposed-human cases reported in N'Djaména, the capital of Chad. The estimated transmission rates were 0.0807 km²/(dogs × week) between dogs (β_d) and 0.0002 km²/(dogs × week) between dogs and humans (β_dh). The effective reproductive ratio (R_e) at the onset of our observations was estimated at 1.01, indicating low-level endemic stability of rabies transmission. Human rabies incidence depended critically on dog-related transmission parameters. We simulated the effects of mass dog vaccination and the culling of a percentage of the dog population on human rabies incidence. A single parenteral dog rabies mass-vaccination campaign achieving a coverage of at least 70% appears to be sufficient to interrupt transmission of rabies to humans for at least 6 years. The cost-effectiveness of mass dog vaccination was compared to postexposure prophylaxis (PEP), which is the current practice in Chad. PEP does not reduce future human exposure. Its cost-effectiveness is estimated at US $46 per disability-adjusted life-year averted. Cost-effectiveness for PEP, together with a dog-vaccination campaign, breaks even with cost-effectiveness of PEP alone after almost 5 years. Beyond a time-frame of 7 years, it appears to be more cost-effective to combine parenteral dog-vaccination campaigns with human PEP compared to human PEP alone.
Pingali, Sai Ravi; Jewell, Sarah W; Havlat, Luiza; Bast, Martin A; Thompson, Jonathan R; Eastwood, Daniel C; Bartlett, Nancy L; Armitage, James O; Wagner-Johnston, Nina D; Vose, Julie M; Fenske, Timothy S
2014-07-15
The objective of this study was to compare the outcomes of patients with classical Hodgkin lymphoma (cHL) who achieved complete remission with frontline therapy and then underwent either clinical surveillance or routine surveillance imaging. In total, 241 patients who were newly diagnosed with cHL between January 2000 and December 2010 at 3 participating tertiary care centers and achieved complete remission after first-line therapy were retrospectively analyzed. Of these, there were 174 patients in the routine surveillance imaging group and 67 patients in the clinical surveillance group, based on the intended mode of surveillance. In the routine surveillance imaging group, the intended plan of surveillance included computed tomography and/or positron emission tomography scans; whereas, in the clinical surveillance group, the intended plan of surveillance was clinical examination and laboratory studies, and scans were obtained only to evaluate concerning signs or symptoms. Baseline patient characteristics, prognostic features, treatment records, and outcomes were collected. The primary objective was to compare overall survival for patients in both groups. For secondary objectives, we compared the success of second-line therapy and estimated the costs of imaging for each group. After 5 years of follow-up, the overall survival rate was 97% (95% confidence interval, 92%-99%) in the routine surveillance imaging group and 96% (95% confidence interval, 87%-99%) in the clinical surveillance group (P = .41). There were few relapses in each group, and all patients who relapsed in both groups achieved complete remission with second-line therapy. The charges associated with routine surveillance imaging were significantly higher than those for the clinical surveillance strategy, with no apparent clinical benefit. Clinical surveillance was not inferior to routine surveillance imaging in patients with cHL who achieved complete remission with frontline therapy. Routine surveillance imaging was associated with significantly increased estimated imaging charges. © 2014 American Cancer Society.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
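The Monte Carlo approach described above can be illustrated with a toy calculation: an incidence estimate built from several uncertain, non-sampled inputs, each given a judgment-based distribution. All distributions and values below are illustrative assumptions, not the foodborne-illness inputs used in the paper.

```python
# Minimal Monte Carlo sketch of propagating non-sampling uncertainty through
# a simple calculation (illustrative values only).
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

reported_cases = rng.normal(40_000, 5_000, n)        # surveillance count, with assumed error
underreporting = rng.uniform(10, 40, n)              # true cases per reported case (assumed)
misclassification = rng.triangular(0.8, 0.9, 1.0, n) # fraction truly attributable (assumed)

total_cases = reported_cases * underreporting * misclassification
low, mid, high = np.percentile(total_cases, [2.5, 50, 97.5])
print(f"median {mid:,.0f}, 95% uncertainty interval {low:,.0f} - {high:,.0f}")
```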
Vukicevic, Arso M; Jovicic, Gordana R; Jovicic, Milos N; Milicevic, Vladimir L; Filipovic, Nenad D
2018-02-01
Bone injures (BI) represents one of the major health problems, together with cancer and cardiovascular diseases. Assessment of the risks associated with BI is nontrivial since fragility of human cortical bone is varying with age. Due to restrictions for performing experiments on humans, only a limited number of fracture resistance curves (R-curves) for particular ages have been reported in the literature. This study proposes a novel decision support system for the assessment of bone fracture resistance by fusing various artificial intelligence algorithms. The aim was to estimate the R-curve slope, toughness threshold and stress intensity factor using the two input parameters commonly available during a routine clinical examination: patients age and crack length. Using the data from the literature, the evolutionary assembled Artificial Neural Network was developed and used for the derivation of Linear regression (LR) models of R-curves for arbitrary age. Finally, by using the patient (age)-specific LR models and diagnosed crack size one could estimate the risk of bone fracture under given physiological conditions. Compared to the literature, we demonstrated improved performances for estimating nonlinear changes of R-curve slope (R 2 = 0.82 vs. R 2 = 0.76) and Toughness threshold with ageing (R 2 = 0.73 vs. R 2 = 0.66).
UV-VIS absorption spectroscopy: Lambert-Beer reloaded.
Mäntele, Werner; Deniz, Erhan
2017-02-15
UV-VIS absorption spectroscopy is used in almost every spectroscopy laboratory for routine analysis or research. All spectroscopists rely on the Lambert-Beer Law but many of them are less aware of its limitations. This tutorial discusses typical problems in routine spectroscopy that come along with technical limitations or careless selection of experimental parameters. Simple rules are provided to avoid these problems. Copyright © 2016 Elsevier B.V. All rights reserved.
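A worked Beer-Lambert example makes the routine calculation concrete: with A = ε·c·l, the concentration follows as c = A/(ε·l). The molar absorptivity below is an assumed illustrative value (a commonly quoted figure for NADH near 340 nm), and keeping the measured absorbance roughly within the 0.1-1 range is one simple guard against the linearity limitations discussed in the tutorial.

```python
# Worked Beer-Lambert example: A = epsilon * c * l, so c = A / (epsilon * l).
epsilon = 6220.0   # L mol-1 cm-1, assumed value (commonly quoted for NADH near 340 nm)
path_length = 1.0  # cm, standard cuvette
absorbance = 0.45  # measured A; stay roughly within 0.1-1 for the linear range

concentration = absorbance / (epsilon * path_length)
print(f"c = {concentration:.2e} mol/L")
```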
Rial-Crestelo, M; Martinez-Portilla, R J; Cancemi, A; Caradeux, J; Fernandez, L; Peguero, A; Gratacos, E; Figueras, Francesc
2018-03-04
The objective of this study is to determine the added value of the cerebroplacental ratio (CPR) and uterine Doppler velocimetry at the third trimester scan in an unselected obstetric population to predict smallness and growth restriction. We conducted a prospective cohort study of women with singleton pregnancies attended for routine third trimester screening (32+0 to 34+6 weeks). Fetal biometry and fetal-maternal Doppler ultrasound examinations were performed by certified sonographers. The CPR was calculated as the ratio of the middle cerebral artery to the umbilical artery pulsatility indices. Both attending professionals and patients were blinded to the results, except in cases of estimated fetal weight below the 10th percentile. The association between third trimester Doppler parameters and small for gestational age (SGA) (birth weight <10th centile) and fetal growth restriction (FGR) (birth weight below the third centile) was assessed by logistic regression, where the basal comparison was a model comprising maternal characteristics and estimated fetal weight (EFW). A total of 1030 pregnancies were included. The mean gestational age at scan was 33 weeks (SD 0.6). The addition of CPR and uterine Doppler to maternal characteristics plus EFW improved the explained uncertainty of the prediction models for SGA (15 versus 10%, p < .001) and FGR (12 versus 8%, p = .03). However, the addition of CPR and uterine Doppler to maternal characteristics plus EFW only marginally improved the detection rates for SGA (38 versus 34% for a 10% false-positive rate) and did not change the predictive performance for FGR. The added value of CPR and uterine Doppler at 33 weeks of gestation for detecting defective growth is poor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, M; Whitlow, C; Jung, Y
Purpose: To demonstrate the feasibility of a novel Arterial Spin Labeling (ASL) method for simultaneously measuring cerebral blood flow (CBF), arterial transit time (ATT), and arterial cerebral blood volume (aCBV) without the use of a contrast agent. Methods: A series of multi-TI ASL images were acquired from one healthy subject on a 3T Siemens Skyra, with the following parameters: PCASL labeling with variable TI [300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3500, 4000] ms, labeling bolus 1400 ms when TI allows, otherwise 100 ms less than TI, TR was minimized for each TI, two sinc-shaped pre-saturation pulses were applied in the imaging plane immediately before 2D EPI acquisition. 64×64×24 voxels, 5 mm slice thickness, 1 mm gap, full brain coverage, 6 averages per TI, no crusher gradients, 11 ms TE, scan time of 4:56. The perfusion-weighted time-series was created for each voxel and fit to a novel model. The model has two components: 1) the traditional model developed by Buxton et al., accounting for CBF and ATT, and 2) a box-car function characterizing the width of the labeling bolus, with variable timing and height in proportion to the aCBV. All three parameters were fit using a nonlinear fitting routine that constrained all parameters to be positive. The main purpose of the high-temporal-resolution TI sampling for the first second of data acquisition was to precisely estimate the blood volume component for better detection of arrival time and magnitude of signal. Results: Whole brain maps of CBF, ATT, and aCBV were produced, and all three parameter maps are consistent with similar maps described in the literature. Conclusion: Simultaneous mapping of CBF, ATT, and aCBV is feasible with a clinically tractable scan time (under 5 minutes).
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural-logarithm of hydraulic conductivity (ln K) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single-collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured the mean bacteria transport behavior and calculated an envelope of uncertainty that bracketed the observations in most simulation cases. © 2007 American Chemical Society.
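For context, the sketch below shows how a local-scale attachment rate follows from clean-bed colloid filtration theory, k_att = 3(1 − θ)/(2 d_c) · α · η₀ · v. Here the single-collector contact efficiency η₀ is supplied directly as an assumed value rather than computed from the Tufenkji-Elimelech or Rajagopalan-Tien correlations, and all numbers are illustrative.

```python
# Sketch of a clean-bed colloid-filtration attachment rate (illustrative values).
def attachment_rate(porosity, grain_diameter_m, alpha, eta0, velocity_m_per_s):
    """First-order attachment rate coefficient (1/s) from filtration theory."""
    return 1.5 * (1.0 - porosity) / grain_diameter_m * alpha * eta0 * velocity_m_per_s

k_att = attachment_rate(porosity=0.39, grain_diameter_m=5.0e-4,
                        alpha=0.01, eta0=5.0e-3, velocity_m_per_s=4.0e-6)
print(f"k_att = {k_att:.2e} 1/s")
```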
Sadhu, Abhishek; Bhadra, Sreetama; Bandyopadhyay, Maumita
2016-01-01
Background and Aims Cytological parameters such as chromosome numbers and genome sizes of plants are used routinely for studying evolutionary aspects of polyploid plants. Members of Zingiberaceae show a wide range of inter- and intrageneric variation in their reproductive habits and ploidy levels. Conventional cytological study in this group of plants is severely hampered by the presence of diverse secondary metabolites, which also affect their genome size estimation using flow cytometry. None of the several nuclei isolation buffers used in flow cytometry could be used very successfully for members of Zingiberaceae to isolate good quality nuclei from both shoot and root tissues. Methods The competency of eight nuclei isolation buffers was compared with a newly formulated buffer, MB01, in six different genera of Zingiberaceae based on the fluorescence intensity of propidium iodide-stained nuclei using flow cytometric parameters, namely coefficient of variation of the G0/G1 peak, debris factor and nuclei yield factor. Isolated nuclei were studied using fluorescence microscopy and bio-scanning electron microscopy to analyse stain–nuclei interaction and nuclei topology, respectively. Genome contents of 21 species belonging to these six genera were determined using MB01. Key Results Flow cytometric parameters showed significant differences among the analysed buffers. MB01 exhibited the best combination of analysed parameters; photomicrographs obtained from fluorescence and electron microscopy supported the superiority of MB01 buffer over other buffers. Among the 21 species studied, nuclear DNA contents of 14 species are reported for the first time. Conclusions Results of the present study substantiate the enhanced efficacy of MB01, compared to other buffers tested, in the generation of acceptable cytograms from all species of Zingiberaceae studied. Our study facilitates new ways of sample preparation for further flow cytometric analysis of genome size of other members belonging to this highly complex polyploid family. PMID:27594649
Sadhu, Abhishek; Bhadra, Sreetama; Bandyopadhyay, Maumita
2016-11-01
Cytological parameters such as chromosome numbers and genome sizes of plants are used routinely for studying evolutionary aspects of polyploid plants. Members of Zingiberaceae show a wide range of inter- and intrageneric variation in their reproductive habits and ploidy levels. Conventional cytological study in this group of plants is severely hampered by the presence of diverse secondary metabolites, which also affect their genome size estimation using flow cytometry. None of the several nuclei isolation buffers used in flow cytometry could be used very successfully for members of Zingiberaceae to isolate good quality nuclei from both shoot and root tissues. The competency of eight nuclei isolation buffers was compared with a newly formulated buffer, MB01, in six different genera of Zingiberaceae based on the fluorescence intensity of propidium iodide-stained nuclei using flow cytometric parameters, namely coefficient of variation of the G0/G1 peak, debris factor and nuclei yield factor. Isolated nuclei were studied using fluorescence microscopy and bio-scanning electron microscopy to analyse stain-nuclei interaction and nuclei topology, respectively. Genome contents of 21 species belonging to these six genera were determined using MB01. Flow cytometric parameters showed significant differences among the analysed buffers. MB01 exhibited the best combination of analysed parameters; photomicrographs obtained from fluorescence and electron microscopy supported the superiority of MB01 buffer over other buffers. Among the 21 species studied, nuclear DNA contents of 14 species are reported for the first time. Results of the present study substantiate the enhanced efficacy of MB01, compared to other buffers tested, in the generation of acceptable cytograms from all species of Zingiberaceae studied. Our study facilitates new ways of sample preparation for further flow cytometric analysis of genome size of other members belonging to this highly complex polyploid family. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
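To make the curve-fitting step concrete, the sketch below fits a double-logistic incidence curve to hypothetical counts of newly reported HIV cases. The parameterization (a logistic rise multiplied by a logistic decline) is one common form and is not necessarily the exact functional form used in Spectrum's FPD tool; the data are made up.

```python
# Illustrative double-logistic fit to hypothetical new-diagnosis counts.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, peak, k_rise, t_rise, k_fall, t_fall):
    rise = 1.0 / (1.0 + np.exp(-k_rise * (t - t_rise)))
    fall = 1.0 / (1.0 + np.exp(k_fall * (t - t_fall)))
    return peak * rise * fall

years = np.arange(1990, 2016)
cases = np.array([120, 180, 260, 380, 520, 700, 880, 1050, 1180, 1250,
                  1280, 1260, 1200, 1120, 1030, 950, 880, 820, 770, 730,
                  700, 680, 660, 650, 640, 635], dtype=float)

p0 = (1300.0, 0.5, 1995.0, 0.3, 2005.0)   # rough starting values
params, _ = curve_fit(double_logistic, years, cases, p0=p0, maxfev=20000)
print("fitted parameters:", np.round(params, 2))
```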
Benchmarking routine psychological services: a discussion of challenges and methods.
Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick
2014-01-01
Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
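The benchmarking computation can be sketched as follows: an uncontrolled pre-post effect size for one service is computed and compared against the published benchmarks quoted above. Standardizing on the baseline SD is one common convention and an assumption here, and the PHQ-9 scores are made up.

```python
# Sketch of a service-level pre-post effect size compared to benchmarks.
import numpy as np

rng = np.random.default_rng(3)
phq9_pre = np.clip(rng.normal(16.0, 5.5, 400), 0, 27)            # intake PHQ-9 scores
phq9_post = np.clip(phq9_pre - rng.normal(5.0, 4.0, 400), 0, 27)  # post-treatment scores

es = (phq9_pre.mean() - phq9_post.mean()) / phq9_pre.std(ddof=1)
benchmark = {"high": 0.91, "average": 0.73, "poor": 0.46}          # depression (PHQ-9) benchmarks
print(f"service ES = {es:.2f}; benchmarks = {benchmark}")
```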
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. OSS experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameters values are not accurate.
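The core mechanism, simultaneous state and parameter estimation by state augmentation, can be illustrated on a scalar toy model. The sketch below uses a perturbed-observation ensemble Kalman update, not the EnSRF or the supercell simulation, and all model, parameter, and noise values are illustrative assumptions.

```python
# Toy state-augmentation ensemble Kalman filter: estimate a model parameter
# 'a' alongside the state of a scalar linear model (illustrative only).
import numpy as np

rng = np.random.default_rng(11)
a_true, obs_err = 0.9, 0.5
n_ens, n_steps = 50, 60

# Truth run and synthetic observations
x_true = 1.0
obs = []
for _ in range(n_steps):
    x_true = a_true * x_true + 1.0
    obs.append(x_true + rng.normal(0, obs_err))

# Ensemble of augmented states [x, a]; the parameter starts biased
ens_x = rng.normal(1.0, 1.0, n_ens)
ens_a = rng.normal(0.6, 0.2, n_ens)

for y in obs:
    ens_x = ens_a * ens_x + 1.0                      # forecast (each member uses its own parameter)
    cov_xa = np.cov(ens_x, ens_a)                    # 2x2 augmented sample covariance
    k_x = cov_xa[0, 0] / (cov_xa[0, 0] + obs_err**2) # gain for the observed state
    k_a = cov_xa[0, 1] / (cov_xa[0, 0] + obs_err**2) # gain for the parameter (via cross-covariance)
    y_pert = y + rng.normal(0, obs_err, n_ens)       # perturbed observations
    innov = y_pert - ens_x
    ens_x = ens_x + k_x * innov
    ens_a = ens_a + k_a * innov

print(f"estimated a = {ens_a.mean():.3f} (truth {a_true})")
```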
Carlsson, Kristin Cecilie; Hoem, Nils Ove; Glauser, Tracy; Vinks, Alexander A
2005-05-01
Population models can be important extensions of therapeutic drug monitoring (TDM), as they allow estimation of individual pharmacokinetic parameters based on a small number of measured drug concentrations. This study used a Bayesian approach to explore the utility of routinely collected and sparse TDM data (1 sample per patient) for carbamazepine (CBZ) monotherapy in developing a population pharmacokinetic (PPK) model for CBZ in pediatric patients that would allow prediction of CBZ concentrations for both immediate- and controlled-release formulations. Patient and TDM data were obtained from a pediatric neurology outpatient database. Data were analyzed using an iterative 2-stage Bayesian algorithm and a nonparametric adaptive grid algorithm. Models were compared by final log likelihood, mean error (ME) as a measure of bias, and root mean squared error (RMSE) as a measure of precision. Fifty-seven entries with data on CBZ monotherapy were identified from the database and used in the analysis (36 from males, 21 from females; mean [SD] age, 9.1 [4.4] years [range, 2-21 years]). Preliminary models estimating clearance (Cl) or the elimination rate constant (K(el)) gave good prediction of serum concentrations compared with measured serum concentrations, but estimates of Cl and K(el) were highly correlated with estimates of volume of distribution (V(d)). Different covariate models were then tested. The selected model had zero-order input and had age and body weight as covariates. Cl (L/h) was calculated as K(el) . V(d), where K(el) = [K(i) - (K(s) . age)] and V(d) = [V(i) + (V(s) . body weight)]. Median parameter estimates were V(i) (intercept) = 11.5 L (fixed); V(s) (slope) = 0.3957 L/kg (range, 0.01200-1.5730); K(i) (intercept) = 0.173 h(-1) (fixed); and K(s) (slope) = 0.004487 h(-1) . y(-1) (range, 0.0001800-0.02969). The fit was good for estimates of steady-state serum concentrations based on prior values (population median estimates) (R = 0.468; R(2) = 0.219) but was even better for predictions based on individual Bayesian posterior values (R(2) = 0.991), with little bias (ME = -0.079) and good precision (RMSE = 0.055). Based on the findings of this study, sparse TDM data can be used for PPK modeling of CBZ clearance in children with epilepsy, and these models can be used to predict Cl at steady state in pediatric patients. However, to estimate additional pharmacokinetic model parameters (eg, the absorption rate constant and V(d)), it would be necessary to combine sparse TDM data with additional well-timed samples. This would allow development of more informative PPK models that could be used as part of Bayesian dose-individualization strategies.
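The covariate model quoted above can be applied directly; the short sketch below computes carbamazepine clearance for a hypothetical child from the published median estimates and derives an average steady-state concentration from the generic rate-in/clearance relation. The example patient, dose, and the steady-state formula are illustrative assumptions rather than part of the source analysis.

```python
V_I = 11.5        # L, volume intercept (fixed)
V_S = 0.3957      # L/kg, volume slope on body weight
K_I = 0.173       # 1/h, elimination-rate intercept (fixed)
K_S = 0.004487    # 1/h per year of age, elimination-rate slope

def cbz_clearance(age_yr, weight_kg):
    k_el = K_I - K_S * age_yr        # elimination rate constant, 1/h
    v_d = V_I + V_S * weight_kg      # volume of distribution, L
    return k_el * v_d                # clearance, L/h

age, weight = 9.0, 30.0                               # hypothetical patient
daily_dose_mg, bioavailability = 400.0, 1.0           # assumed regimen
cl = cbz_clearance(age, weight)
css = bioavailability * daily_dose_mg / 24.0 / cl     # average steady-state concentration, mg/L
print(f"Cl = {cl:.2f} L/h, predicted average Css = {css:.1f} mg/L")
```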
García-Pérez, Miguel A.; Alcalá-Quintana, Rocío
2017-01-01
Psychophysical data from dual-presentation tasks are often collected with the two-alternative forced-choice (2AFC) response format, asking observers to guess when uncertain. For an analytical description of performance, psychometric functions are then fitted to data aggregated across the two orders/positions in which stimuli were presented. Yet, order effects make aggregated data uninterpretable, and the bias with which observers guess when uncertain precludes separating sensory from decisional components of performance. A ternary response format in which observers are also allowed to report indecision should fix these problems, but a comparative analysis with the 2AFC format has never been conducted. In addition, fitting ternary data separated by presentation order poses serious challenges. To address these issues, we extended the indecision model of psychophysical performance to accommodate the ternary, 2AFC, and same–different response formats in detection and discrimination tasks. Relevant issues for parameter estimation are also discussed along with simulation results that document the superiority of the ternary format. These advantages are demonstrated by fitting the indecision model to published detection and discrimination data collected with the ternary, 2AFC, or same–different formats, which had been analyzed differently in the sources. These examples also show that 2AFC data are unsuitable for testing certain types of hypotheses. matlab and R routines written for our purposes are available as Supplementary Material, which should help spread the use of the ternary format for dependable collection and interpretation of psychophysical data. PMID:28747893
Baehr, Arthur L.; Corapcioglu, M. Yavuz
1987-01-01
In this paper we develop a numerical solution to equations developed in part 1 (M. Y. Corapcioglu and A. L. Baehr, this issue) to predict the fate of an immiscible organic contaminant such as gasoline in the unsaturated zone subsequent to plume establishment. This solution, obtained by using a finite difference scheme and a method of forward projection to evaluate nonlinear coefficients, provides estimates of the flux of solubilized hydrocarbon constituents to groundwater from the portion of a spill which remains trapped in a soil after routine remedial efforts to recover the product have ceased. The procedure was used to solve the one-dimensional (vertical) form of the system of nonlinear partial differential equations defining the transport for each constituent of the product. Additionally, a homogeneous, isothermal soil with constant water content was assumed. An equilibrium assumption partitions the constituents between air, water, adsorbed, and immiscible phases. Free oxygen transport in the soil was also simulated to provide an upper bound estimate of aerobic biodegradation rates. Results are presented for a hypothetical gasoline consisting of eight groups of hydrocarbon constituents. Rates at which hydrocarbon mass is removed from the soil, entering either the atmosphere or groundwater, or is biodegraded are presented. A significant sensitivity to model parameters, particularly the parameters characterizing diffusive vapor transport, was discovered. We conclude that hydrocarbon solute composition in groundwater beneath a gasoline contaminated soil would be heavily weighted toward aromatic constituents like benzene, toluene, and xylene.
Surface-Source Downhole Seismic Analysis in R
Thompson, Eric M.
2007-01-01
This report discusses a method for interpreting a layered slowness or velocity model from surface-source downhole seismic data originally presented by Boore (2003). I have implemented this method in the statistical computing language R (R Development Core Team, 2007), so that it is freely and easily available to researchers and practitioners who may find it useful. I originally applied an early version of these routines to seismic cone penetration test data (SCPT) to analyze the horizontal variability of shear-wave velocity within the sediments in the San Francisco Bay area (Thompson et al., 2006). A more recent version of these codes was used to analyze the influence of interface-selection and model assumptions on velocity/slowness estimates and the resulting differences in site amplification (Boore and Thompson, 2007). The R environment has many benefits for scientific and statistical computation; I have chosen R to disseminate these routines because it is versatile enough to program specialized routines, is highly interactive, which aids in the analysis of data, and is freely and conveniently available to install on a wide variety of computer platforms. These scripts are useful for the interpretation of layered velocity models from surface-source downhole seismic data such as deep boreholes and SCPT data. The inputs are the travel-time data and the offset of the source at the surface. The travel-time arrivals for the P- and S-waves must already be picked from the original data. An option in the inversion is to include estimates of the standard deviation of the travel-time picks for a weighted inversion of the velocity profile. The standard deviation of each travel-time pick is defined relative to the standard deviation of the best pick in a profile and is based on the accuracy with which the travel-time measurement could be determined from the seismogram. The analysis of the travel-time data consists of two parts: the identification of layer-interfaces, and the inversion for the velocity of each layer. The analyst usually picks layer-interfaces by visual inspection of the travel-time data. I have also developed an algorithm that automatically finds boundaries, which can save a significant amount of time when analyzing a large number of sites. The results of the automatic routines should be reviewed to check that they are reasonable. The interactivity of these scripts allows the user to add and to remove layers quickly, thus allowing rapid feedback on how the residuals are affected by each additional parameter in the inversion. In addition, the script allows many models to be compared at the same time.
Extracting Maximum Total Water Levels from Video "Brightest" Images
NASA Astrophysics Data System (ADS)
Brown, J. A.; Holman, R. A.; Stockdon, H. F.; Plant, N. G.; Long, J.; Brodie, K.
2016-02-01
An important parameter for predicting storm-induced coastal change is the maximum total water level (TWL). Most studies estimate the TWL as the sum of slowly varying water levels, including tides and storm surge, and the extreme runup parameter R2%, which includes wave setup and swash motions over minutes to seconds. Typically, R2% is measured using video remote sensing data, where cross-shore timestacks of pixel intensity are digitized to extract the horizontal runup timeseries. However, this technique must be repeated at multiple alongshore locations to resolve alongshore variability, and can be tedious and time consuming. We seek an efficient, video-based approach that yields a synoptic estimate of TWL that accounts for alongshore variability and can be applied during storms. In this work, the use of a video product termed the "brightest" image is tested; this represents the highest intensity of each pixel captured during a 10-minute collection period. Image filtering and edge detection techniques are applied to automatically determine the shoreward edge of the brightest region (i.e., the swash zone) at each alongshore pixel. The edge represents the horizontal position of the maximum TWL along the beach during the collection period, and is converted to vertical elevations using measured beach topography. This technique is evaluated using video and topographic data collected every half-hour at Duck, NC, during differing hydrodynamic conditions. Relationships between the maximum TWL estimates from the brightest images and various runup statistics computed using concurrent runup timestacks are examined, and errors associated with mapping the horizontal results to elevations are discussed. This technique is invaluable, as it can be used to routinely estimate maximum TWLs along a coastline from a single brightest image product, and provides a means for examining alongshore variability of TWLs at high alongshore resolution. These advantages will be useful in validating numerical hydrodynamic models and improving coastal change predictions.
Ray Next-Event Estimator Transport of Primary and Secondary Gamma Rays
2011-03-01
…time-energy bins. Any performance enhancements (maybe parallel searching?) to the search routines decrease estimator computational time
Optimization of Equation of State and Burn Model Parameters for Explosives
NASA Astrophysics Data System (ADS)
Bergh, Magnus; Wedberg, Rasmus; Lundgren, Jonas
2017-06-01
A reactive burn model implemented in a multi-dimensional hydrocode can be a powerful tool for predicting non-ideal effects as well as initiation phenomena in explosives. Calibration against experiment is, however, critical and non-trivial. Here, a procedure is presented for calibrating the Ignition and Growth Model utilizing hydrocode simulation in conjunction with the optimization program LS-OPT. The model is applied to the explosive PBXN-109. First, a cylinder expansion test is presented together with a new automatic routine for product equation of state calibration. Secondly, rate stick tests and instrumented gap tests are presented. Data from these experiments are used to calibrate burn model parameters. Finally, we discuss the applicability and development of this optimization routine.
Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects
NASA Astrophysics Data System (ADS)
Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.
2013-08-01
In atmospheric modelling applications the aerosol particle size distribution is commonly represented by a modal approach, in which particles in different size ranges are described with log-normal modes confined to predetermined size ranges. Such a method involves numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction (ACI) coefficient and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines, and a high-resolution sectional model that was considered the reference model. We analysed the relative differences of the parameters in different experiments that were designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes and the subsequent numerical remapping of the distribution by reallocation lead on average to underestimation of cloud droplet number concentration (up to 100%) and overestimation of light extinction (up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause, on average, around 10% overestimation of the ACI parameter. It is thus shown that the reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.
Lipemia interferences in routine clinical biochemical tests.
Calmarza, Pilar; Cordero, José
2011-01-01
Lipemic specimens are a common but as yet unresolved problem in clinical chemistry, and may produce significant interferences in the analytical results of different biochemical parameters. The aim of this study was to examine the effect of lipid removal by ultracentrifugation of lipemic samples on some routine biochemistry parameters. Among all the samples obtained daily in our laboratory, those that were visibly turbid were selected and subjected to ultracentrifugation; a variety of biochemical tests were determined before and after ultracentrifugation. A total of 110 samples were studied. We found significant differences in all the parameters studied except for total bilirubin, glucose, gamma-glutamyl transferase (GGT) and aspartate aminotransferase (AST). The greatest differences in the parameters analyzed were found in the concentration of alanine aminotransferase (ALT) (7.36%) and the smallest ones in the concentration of glucose (0.014%). Clinically significant interferences were found for phosphorus, creatinine, total protein and calcium. Lipemia causes clinically significant interferences for phosphorus, creatinine, total protein and calcium measurement, and those interferences could be effectively removed by ultracentrifugation.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
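For readers unfamiliar with the q-method mentioned above, the sketch below shows Davenport's eigenvalue solution of Wahba's problem for fixed parameter values: the optimal quaternion is the eigenvector of the K matrix with the largest eigenvalue. The vector observations and weights are invented for the check, and the paper's extension to simultaneous bias estimation is not reproduced.

```python
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    # Davenport's q-method: build the K matrix from weighted vector pairs
    B = sum(w * np.outer(b, r) for b, r, w in zip(b_vecs, r_vecs, weights))
    S = B + B.T
    sigma = np.trace(B)
    z = sum(w * np.cross(b, r) for b, r, w in zip(b_vecs, r_vecs, weights))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, -1]                          # quaternion [vector part; scalar part], largest eigenvalue
    rho, q4 = q[:3], q[3]
    # Attitude matrix mapping reference-frame vectors into the body frame
    A = (q4**2 - rho @ rho) * np.eye(3) + 2 * np.outer(rho, rho) \
        - 2 * q4 * np.array([[0, -rho[2], rho[1]],
                             [rho[2], 0, -rho[0]],
                             [-rho[1], rho[0], 0]])
    return q, A

# Synthetic check: two reference directions rotated by a known attitude
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
c, s = np.cos(0.3), np.sin(0.3)
A_true = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
q, A_est = q_method([A_true @ r1, A_true @ r2], [r1, r2], [1.0, 1.0])
print(np.allclose(A_est, A_true, atol=1e-10))    # True: the exact attitude is recovered
```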
An examination of sources of sensitivity of consumer surplus estimates in travel cost models.
Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E
2015-03-15
We examine sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative but not a statistically significantly different estimate of consumer surplus in both Poisson and negative binomial models. It also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
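The comparison described above can be sketched in a few lines: a Poisson and a negative binomial trip-count model with an income by travel-cost interaction, and per-trip consumer surplus derived from the travel-cost coefficient. The synthetic data, the variable names, and the simple "trips minus one" adjustment used here to stand in for the Englin correction are illustrative assumptions, not a reproduction of the paper's dataset or specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
travel_cost = rng.uniform(5, 150, n)            # dollars per trip
income = rng.uniform(20, 120, n)                # thousands of dollars
lin = 1.8 - 0.02 * travel_cost + 0.004 * income - 0.0001 * travel_cost * income
trips = rng.poisson(np.exp(lin)) + 1            # on-site sample: every respondent took at least one trip

X = sm.add_constant(np.column_stack([travel_cost, income, travel_cost * income]))
y = trips - 1                                   # simple adjustment for endogenous stratification (illustrative)

poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

# Per-trip consumer surplus = -1 / (marginal effect of travel cost), evaluated at mean income
for name, res in [("Poisson", poisson_fit), ("NegBin", negbin_fit)]:
    b_tc, b_int = res.params[1], res.params[3]
    cs = -1.0 / (b_tc + b_int * income.mean())
    print(f"{name}: consumer surplus per trip is roughly ${cs:.0f}")
```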
McGavran, P D; Rood, A S; Till, J E
1999-01-01
Beryllium was released into the air from routine operations and three accidental fires at the Rocky Flats Plant (RFP) in Colorado from 1958 to 1989. We evaluated environmental monitoring data and developed estimates of airborne concentrations and their uncertainties and calculated lifetime cancer risks and risks of chronic beryllium disease to hypothetical receptors. This article discusses exposure-response relationships for lung cancer and chronic beryllium disease. We assigned a distribution to cancer slope factor values based on the relative risk estimates from an occupational epidemiologic study used by the U.S. Environmental Protection Agency (EPA) to determine the slope factors. We used the regional atmospheric transport code for Hanford emission tracking atmospheric transport model for exposure calculations because it is particularly well suited for long-term annual-average dispersion estimates and it incorporates spatially varying meteorologic and environmental parameters. We accounted for model prediction uncertainty by using several multiplicative stochastic correction factors that accounted for uncertainty in the dispersion estimate, the meteorology, deposition, and plume depletion. We used Monte Carlo techniques to propagate model prediction uncertainty through to the final risk calculations. We developed nine exposure scenarios of hypothetical but typical residents of the RFP area to consider the lifestyle, time spent outdoors, location, age, and sex of people who may have been exposed. We determined geometric mean incremental lifetime cancer incidence risk estimates for beryllium inhalation for each scenario. The risk estimates were < 10(-6). Predicted air concentrations were well below the current reference concentration derived by the EPA for beryllium sensitization. PMID:10464074
Predicting gestational age using neonatal metabolic markers
Ryckman, Kelli K.; Berberich, Stanton L.; Dagle, John M.
2016-01-01
Background Accurate gestational age estimation is extremely important for clinical care decisions of the newborn as well as for perinatal health research. Although prenatal ultrasound dating is one of the most accurate methods for estimating gestational age, it is not feasible in all settings. Identifying novel and accurate methods for gestational age estimation at birth is important, particularly for surveillance of preterm birth rates in areas without routine ultrasound dating. Objective We hypothesized that metabolic and endocrine markers captured by routine newborn screening could improve gestational age estimation in the absence of prenatal ultrasound technology. Study Design This is a retrospective analysis of 230,013 newborn metabolic screening records collected by the Iowa Newborn Screening Program between 2004 and 2009. The data were randomly split into a model-building dataset (n = 153,342) and a model-testing dataset (n = 76,671). We performed multiple linear regression modeling with gestational age, in weeks, as the outcome measure. We examined 44 metabolites, including biomarkers of amino acid and fatty acid metabolism, thyroid-stimulating hormone, and 17-hydroxyprogesterone. The coefficient of determination (R2) and the root-mean-square error were used to evaluate models in the model-building dataset that were then tested in the model-testing dataset. Results The newborn metabolic regression model consisted of 88 parameters, including the intercept, 37 metabolite measures, 29 squared metabolite measures, and 21 cubed metabolite measures. This model explained 52.8% of the variation in gestational age in the model-testing dataset. Gestational age was predicted within 1 week for 78% of the individuals and within 2 weeks of gestation for 95% of the individuals. This model yielded an area under the curve of 0.899 (95% confidence interval 0.895−0.903) in differentiating those born preterm (<37 weeks) from those born term (≥37 weeks). In the subset of infants born small-for-gestational age, the average difference between gestational ages predicted by the newborn metabolic model and the recorded gestational age was 1.5 weeks. In contrast, the average difference between gestational ages predicted by the model including only newborn weight and the recorded gestational age was 1.9 weeks. The estimated prevalence of preterm birth <37 weeks’ gestation in the subset of infants that were small for gestational age was 18.79% when the model including only newborn weight was used, over twice that of the actual prevalence of 9.20%. The newborn metabolic model underestimated the preterm birth prevalence at 6.94% but was closer to the prevalence based on the recorded gestational age than the model including only newborn weight. Conclusions The newborn metabolic profile, as derived from routine newborn screening markers, is an accurate method for estimating gestational age. In small-for-gestational age neonates, the newborn metabolic model predicts gestational age to a better degree than newborn weight alone. Newborn metabolic screening is a potentially effective method for population surveillance of preterm birth in the absence of prenatal ultrasound measurements or newborn weight. PMID:26645954
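The modeling strategy described above amounts to a linear regression of gestational age on metabolite levels plus their squared and cubed terms; the sketch below reproduces that structure on synthetic data with a training/testing split. The number of "metabolites", their effects, and the noise level are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_metab = 5000, 10
metab = rng.normal(size=(n, n_metab))                       # standardized metabolite levels (synthetic)
ga = (38.0 + metab[:, 0] - 0.5 * metab[:, 1] ** 2 + 0.2 * metab[:, 2] ** 3
      + rng.normal(scale=1.5, size=n))                      # synthetic gestational age, weeks

X = np.column_stack([np.ones(n), metab, metab ** 2, metab ** 3])   # intercept + linear, squared, cubed terms
train, test = slice(0, 3500), slice(3500, None)

beta, *_ = np.linalg.lstsq(X[train], ga[train], rcond=None)        # fit on the model-building split
pred = X[test] @ beta                                              # evaluate on the model-testing split

resid = ga[test] - pred
r2 = 1 - resid.var() / ga[test].var()
rmse = np.sqrt(np.mean(resid ** 2))
within_1wk = np.mean(np.abs(resid) <= 1.0)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} weeks, predicted within 1 week: {within_1wk:.0%}")
```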
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakariaee, R; Brown, C J; Hamarneh, G
2014-08-15
Dosimetric parameters based on dose-volume histograms (DVH) of contoured structures are routinely used to evaluate dose delivered to target structures and organs at risk. However, the DVH provides no information on the spatial distribution of the dose in situations of repeated fractions with changes in organ shape or size. The aim of this research was to develop methods to more accurately determine geometrically localized, cumulative dose to the bladder wall in intracavitary brachytherapy for cervical cancer. The CT scans and treatment plans of 20 cervical cancer patients were used. Each patient was treated with five high-dose-rate (HDR) brachytherapy fractions of 600 cGy prescribed dose. The bladder inner and outer surfaces were delineated using MIM Maestro software (MIM Software Inc.) and were imported into MATLAB (MathWorks) as 3-dimensional point clouds constituting the “bladder wall”. A point-set registration toolbox for MATLAB, Coherent Point Drift (CPD), was used to non-rigidly transform the bladder-wall points from four of the fractions to the coordinate system of the remaining (reference) fraction, which was chosen to be the emptiest bladder for each patient. The doses were accumulated on the reference fraction and new cumulative dosimetric parameters were calculated. The LENT-SOMA toxicity scores of these patients were studied against the cumulative dose parameters. Based on this study, there was no significant correlation between the toxicity scores and the determined cumulative dose parameters.
Le Huec, Jean Charles; Hasegawa, Kazuhiro
2016-11-01
Sagittal balance analysis has gained importance and the measure of the radiographic spinopelvic parameters is now a routine part of many interventions of spine surgery. Indeed, surgical correction of lumbar lordosis must be proportional to the pelvic incidence (PI). The compensatory mechanisms [pelvic retroversion with increased pelvic tilt (PT) and decreased thoracic kyphosis] spontaneously reverse after successful surgery. This study is the first to provide 3D standing spinopelvic reference values from a large database of Caucasian (n = 137) and Japanese (n = 131) asymptomatic subjects. The key spinopelvic parameters [e.g., PI, PT, sacral slope (SS)] were comparable in Japanese and Caucasian populations. Three equations, namely lumbar lordosis based on PI, PT based on PI and SS based on PI, were calculated after linear regression modeling and were comparable in both populations: lumbar lordosis (L1-S1) = 0.54*PI + 27.6, PT = 0.44*PI - 11.4 and SS = 0.54*PI + 11.90. We showed that the key spinopelvic parameters obtained from a large database of healthy subjects were comparable for Causasian and Japanese populations. The normative values provided in this study and the equations obtained after linear regression modeling could help to estimate pre-operatively the lumbar lordosis restoration and could be also used as guidelines for spinopelvic sagittal balance.
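The regression equations quoted above are simple enough to apply directly; the helper below returns the estimated lumbar lordosis, pelvic tilt, and sacral slope for a measured pelvic incidence. The example PI value is arbitrary.

```python
def spinopelvic_targets(pelvic_incidence_deg):
    ll = 0.54 * pelvic_incidence_deg + 27.6    # lumbar lordosis L1-S1, degrees
    pt = 0.44 * pelvic_incidence_deg - 11.4    # pelvic tilt, degrees
    ss = 0.54 * pelvic_incidence_deg + 11.9    # sacral slope, degrees
    return ll, pt, ss

ll, pt, ss = spinopelvic_targets(52.0)         # hypothetical patient with PI = 52 degrees
print(f"LL = {ll:.1f}, PT = {pt:.1f}, SS = {ss:.1f} (degrees)")
```

Note that the fitted PT and SS equations sum to approximately PI, consistent with the geometric identity PI = PT + SS.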
An empirical approach to inversion of an unconventional helicopter electromagnetic dataset
Pellerin, L.; Labson, V.F.
2003-01-01
A helicopter electromagnetic (HEM) survey acquired at the U.S. Idaho National Engineering and Environmental Laboratory (INEEL) used a modification of a traditional mining airborne method flown at low levels for detailed characterization of shallow waste sites. The low sensor height, used to increase resolution, invalidates standard assumptions used in processing HEM data. Although the survey design strategy was sound, traditional interpretation techniques, routinely used in industry, proved ineffective. Processed data and apparent resistivity maps were severely distorted, and hence unusable, due to low flight height effects, high magnetic permeability of the basalt host, and the conductive, three-dimensional nature of the waste site targets. To accommodate these interpretation challenges, we modified a one-dimensional inversion routine to include a linear term in the objective function that allows for the magnetic and three-dimensional electromagnetic responses in the in-phase data. Although somewhat ad hoc, the use of this term in the inverse routine, referred to as the shift factor, was successful in defining the waste sites and reducing noise due to the low flight height and magnetic characteristics of the host rock. Many inversion scenarios were applied to the data and careful analysis was necessary to determine the parameters appropriate for interpretation; hence the approach was empirical. Data from three areas were processed with this scheme to highlight different interpretational aspects of the method. Waste sites were delineated with the shift terms in two of the areas, allowing for separation of the anthropogenic targets from the natural one-dimensional host. In the third area, the estimated resistivity and the shift factor were used for geological mapping. The high magnetic content of the native soil enabled the mapping of disturbed soil with the shift term. Published by Elsevier Science B.V.
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly described in the model.
DISAGGREGATION OF GOES LAND SURFACE TEMPERATURES USING SURFACE EMISSIVITY
USDA-ARS?s Scientific Manuscript database
Accurate temporal and spatial estimation of land surface temperatures (LST) is important for modeling the hydrological cycle at field to global scales because LSTs can improve estimates of soil moisture and evapotranspiration. Using remote sensing satellites, accurate LSTs could be routine, but unfo...
Estimating the coverage of mental health programmes: a systematic review.
De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram
2014-04-01
The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. We identified 15 735 unique records, of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys.
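The two coverage definitions above reduce to simple proportions once a numerator and denominator are chosen; the toy calculation below makes them concrete. All numbers are hypothetical.

```python
adult_population = 1_000_000
depression_prevalence = 0.05                       # assumed prevalence from a national survey
target_population = adult_population * depression_prevalence

treated = 12_000                                   # service utilization from routine records (assumed)
treated_adequately = 7_000                         # received minimally adequate, effective care (assumed)

contact_coverage = treated / target_population
effective_coverage = treated_adequately / target_population
print(f"contact coverage = {contact_coverage:.1%}, effective coverage = {effective_coverage:.1%}")
```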
Locating and Modeling Regional Earthquakes with Broadband Waveform Data
NASA Astrophysics Data System (ADS)
Tan, Y.; Zhu, L.; Helmberger, D.
2003-12-01
Retrieving source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory level in some places with a dense seismic network, such as TriNet in Southern California, a worthwhile revisit of the historical events in these places, or an effective, real-time investigation of small events in many other places, where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 sec for Pnl, and 8-100 sec for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which could come from as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green's function library constructed from a regionalized 1-D model together with necessary calibration information, and adopts a grid search strategy for both hypocenter and focal mechanism. We show that the whole process can be easily automated and routinely provide reliable source parameter estimates with a couple of broadband stations. Two applications in the Tibet Plateau and Southern California will be presented along with comparisons of results against other methods.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal parameter estimates range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire parameter set.
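The calibration idea, minimizing head residuals by nonlinear least squares and reading sensitivities off the Jacobian, can be sketched compactly. The forward model below is a deliberately simple stand-in (an exponential head profile), not the regional flow model of the study; parameter names and observations are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_heads(params, x_km):
    h_boundary, delta_h, decay = params
    return h_boundary + delta_h * np.exp(-decay * x_km)      # toy water-level profile

x_obs = np.linspace(0, 40, 15)                                # observation wells, km from a boundary
true_params = (12.0, 8.0, 0.08)
rng = np.random.default_rng(4)
h_obs = simulate_heads(true_params, x_obs) + rng.normal(0, 0.2, x_obs.size)

def residuals(params):
    return simulate_heads(params, x_obs) - h_obs              # simulated minus measured water levels

fit = least_squares(residuals, x0=(10.0, 5.0, 0.05))
J = fit.jac                                                   # parameter sensitivities at the optimum
sigma2 = (fit.fun @ fit.fun) / (x_obs.size - len(fit.x))      # residual variance
cov = sigma2 * np.linalg.inv(J.T @ J)                         # approximate parameter covariance
print("estimates:", np.round(fit.x, 3))
print("approx. standard errors:", np.round(np.sqrt(np.diag(cov)), 3))
```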
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
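The component-wise (stage-by-stage) idea can be illustrated on the Rössler system itself: with time series of all three components available, a is estimated from the y-equation alone and (b, c) from the z-equation alone, here with an evolutionary optimizer. The use of finite-difference derivatives and these particular cost functions are simplifications assumed for the sketch, not the published scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def rossler(t, s, a, b, c):
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

a_true, b_true, c_true = 0.2, 0.2, 5.7
t = np.linspace(0, 50, 5001)
sol = solve_ivp(rossler, (t[0], t[-1]), [1.0, 1.0, 1.0],
                t_eval=t, args=(a_true, b_true, c_true), rtol=1e-9, atol=1e-9)
x, y, z = sol.y
dy, dz = np.gradient(y, t), np.gradient(z, t)     # observed derivatives (finite differences)

# Stage 1: estimate a from the y-equation, dy/dt = x + a*y
cost_a = lambda p: np.mean((dy - (x + p[0] * y)) ** 2)
a_est = differential_evolution(cost_a, [(0.0, 1.0)], seed=0).x[0]

# Stage 2: estimate b and c from the z-equation, dz/dt = b + z*(x - c)
cost_bc = lambda p: np.mean((dz - (p[0] + z * (x - p[1]))) ** 2)
b_est, c_est = differential_evolution(cost_bc, [(0.0, 1.0), (0.0, 10.0)], seed=0).x

print(f"a = {a_est:.3f}, b = {b_est:.3f}, c = {c_est:.3f}")   # should be close to 0.2, 0.2, 5.7
```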
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle and the average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
User's Manual: Routines for Radiative Heat Transfer and Thermometry
NASA Technical Reports Server (NTRS)
Risch, Timothy K.
2016-01-01
Determining the intensity and spectral distribution of radiation emanating from a heated surface has applications in many areas of science and engineering. Areas of research in which the quantification of spectral radiation is used routinely include thermal radiation heat transfer, infrared signature analysis, and radiation thermometry. In the analysis of radiation, it is helpful to be able to predict the radiative intensity and the spectral distribution of the emitted energy. Presented in this report is a set of routines written in Microsoft Visual Basic for Applications (VBA) (Microsoft Corporation, Redmond, Washington) and incorporating functions specific to Microsoft Excel (Microsoft Corporation, Redmond, Washington) that are useful for predicting the radiative behavior of heated surfaces. These routines include functions for calculating quantities of primary importance to engineers and scientists. In addition, the routines also provide the capability to use such information to determine surface temperatures from spectral intensities and for calculating the sensitivity of the surface temperature measurements to unknowns in the input parameters.
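Two of the quantities such routines typically provide can be sketched in a few lines, here in Python rather than VBA: the Planck spectral radiance of a gray surface and the inverse calculation of a brightness temperature from a measured spectral radiance at one wavelength. The emissivity and wavelength values are arbitrary examples, and the sketch is not a transcription of the report's routines.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temperature_k, emissivity=1.0):
    """Emitted spectral radiance, W / (m^2 sr m)."""
    c1 = 2.0 * H * C**2 / wavelength_m**5
    return emissivity * c1 / np.expm1(H * C / (wavelength_m * KB * temperature_k))

def brightness_temperature(wavelength_m, radiance):
    """Temperature of a blackbody that would emit the given spectral radiance."""
    c1 = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * np.log1p(c1 / radiance))

lam = 0.9e-6                                         # 0.9 micron pyrometer band (example)
L = spectral_radiance(lam, 1500.0, emissivity=0.85)
print(brightness_temperature(lam, L))                # below 1500 K because emissivity < 1
```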
NASA Technical Reports Server (NTRS)
Sease, Bradley; Myers, Jessica; Lorah, John; Webster, Cassandra
2017-01-01
The Wide Field Infrared Survey Telescope is a 2.4-meter telescope planned for launch to the Sun-Earth L2 point in 2026. This paper details a preliminary study of the achievable accuracy for WFIRST from ground-based orbit determination routines. The analysis here is divided into two segments. First, a linear covariance analysis of early mission and routine operations provides an estimate of the tracking schedule required to meet mission requirements. Second, a simulated operations scenario gives insight into the expected behavior of a daily Extended Kalman Filter orbit estimate over the first mission year given a variety of potential momentum unloading schemes.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
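For context, the kinetic model being identified is compact enough to state and fit directly; the sketch below implements the CTMI growth-rate curve and an ordinary least-squares fit of its four parameters to synthetic observations. The experiment-design step itself (choosing which temperature profiles to run) is not reproduced, and the cardinal temperatures, noise level, and sampling are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ctmi(T, mu_opt, T_min, T_opt, T_max):
    # Cardinal Temperature Model with Inflection: growth rate versus temperature
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2 * T))
    mu = mu_opt * num / den
    return np.where((T > T_min) & (T < T_max), mu, 0.0)

true = (2.0, 5.0, 37.0, 45.0)                     # mu_opt [1/h], T_min, T_opt, T_max [deg C] (assumed)
T_obs = np.linspace(8, 44, 25)
rng = np.random.default_rng(5)
mu_obs = ctmi(T_obs, *true) + rng.normal(0, 0.03, T_obs.size)

popt, pcov = curve_fit(ctmi, T_obs, mu_obs, p0=(1.5, 2.0, 35.0, 47.0))
print("estimates:", np.round(popt, 2))
print("approx. standard errors:", np.round(np.sqrt(np.diag(pcov)), 3))
```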
Routine health monitoring in an aquatic species (Oryzias latipes) used in toxicological testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Twerdok, L.E.; Beaman, J.R.; Curry, M.W.
1995-12-31
It is critical to establish baseline health endpoints in animal models used in toxicological studies. In mammalian models, procedures for monitoring the health status of test animals have been established and in use for many years; in many aquatic models, including medaka, much of this routine health screening has not been documented. Thus, the purpose of this study was to characterize routine health parameters in medaka and to identify parameters sensitive to changes in health status which could affect the suitability of animals for use in general toxicity and immunotoxicological studies. The endpoints assessed included histopathology (31 organs), identification of endogenous bacterial flora, and gross necropsy including body weight, length, hematocrit, leukocrit, and plasma immunoglobulin levels. Additional parameters included anterior kidney (the teleost bone marrow equivalent) weight and cell yields plus superoxide anion production. Histological findings included observation of age-related incidence of granulomatous lesions in a variety of organs. Multiple strains of Aeromonas and Pseudomonas were the predominant internal flora in healthy medaka. Hematocrit, leukocrit and plasma IgM levels were within the normal range for this species. Comparisons were made between healthy and handling-stressed fish. Evaluation of data collected to date suggests that leukocrit and superoxide anion production were the most sensitive indicators of fish health status and suitability for use in general and/or immunotoxicological studies.
Subsite mapping of enzymes. Depolymerase computer modelling.
Allen, J D; Thoma, J A
1976-01-01
We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
Pillarisetti, Ajay; Allen, Tracy; Ruiz-Mercado, Ilse; Edwards, Rufus; Chowdhury, Zohir; Garland, Charity; Hill, L Drew; Johnson, Michael; Litton, Charles D; Lam, Nicholas L; Pennise, David; Smith, Kirk R
2017-08-16
Over the last 20 years, the Kirk R. Smith research group at the University of California Berkeley, in collaboration with Electronically Monitored Ecosystems, Berkeley Air Monitoring Group, and other academic institutions, has developed a suite of relatively inexpensive, rugged, battery-operated, microchip-based devices to quantify parameters related to household air pollution. These devices include two generations of particle monitors; data-logging temperature sensors to assess time of use of household energy devices; a time-activity monitoring system using ultrasound; and a CO₂-based tracer-decay system to assess ventilation rates. Development of each system involved numerous iterations of custom hardware, software, and data processing and visualization routines along with both lab and field validation. The devices have been used in hundreds of studies globally and have greatly enhanced our understanding of heterogeneous household air pollution (HAP) concentrations and exposures and factors influencing them.
VLBI height corrections due to gravitational deformation of antenna structures
NASA Astrophysics Data System (ADS)
Sarti, P.; Negusini, M.; Abbondanza, C.; Petrov, L.
2009-12-01
From an analysis of regional European VLBI data we evaluate the impact of a VLBI signal path correction model developed to account for gravitational deformations of the antenna structures. The model was derived from a combination of terrestrial surveying methods applied to telescopes at Medicina and Noto in Italy. We find that the model corrections shift the derived height components of these VLBI telescopes' reference points downward by 14.5 and 12.2 mm, respectively. Neither other parameter estimates nor other station positions are affected. Such systematic height errors are much larger than the formal VLBI random errors and imply the possibility of significant VLBI frame scale distortions, of major concern for the International Terrestrial Reference Frame (ITRF) and its applications. This demonstrates the urgent need to investigate gravitational deformations in other VLBI telescopes and eventually correct them in routine data analysis.
Height biases and scale variations in VLBI networks due to antenna gravitational deformations
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Sarti, Pierguido; Petrov, Leonid; Negusini, Monia
2010-05-01
The impact of signal path variations (SPVs) caused by antenna gravity deformations on geodetic VLBI results is evaluated for the first time. Elevation-dependent models of SPV for Medicina and Noto (Italy) telescopes were derived from a combination of terrestrial surveying methods to account for gravitational deformations. After applying these models, estimates of the antenna reference point (ARP) positions are shifted upward by 8.9 mm and 6.7 mm, respectively. The impact on other parameters is negligible. To infer the impact of antenna gravity deformations on the entire VLBI network, lacking measurements for other telescopes, we rescaled the SPV models of Medicina and Noto for other antennas according to their size. The effects are changes in VLBI heights in the range [-3,73] mm and a significant net scale increase of 0.3 - 0.8 ppb. This demonstrates the need to include SPV models in routine VLBI data analysis.
On the use of Gaia magnitudes and new tables of bolometric corrections
NASA Astrophysics Data System (ADS)
Casagrande, L.; VandenBerg, Don A.
2018-06-01
The availability of reliable bolometric corrections and reddening estimates, rather than the quality of parallaxes, will be one of the main limiting factors in determining the luminosities of a large fraction of Gaia stars. With this goal in mind, we provide Gaia GBP, G, and GRP synthetic photometry for the entire MARCS grid, and test the performance of our synthetic colours and bolometric corrections against space-borne absolute spectrophotometry. We find an indication of a magnitude-dependent offset in Gaia DR2 G magnitudes, which must be taken into account in high accuracy investigations. Our interpolation routines are easily used to derive bolometric corrections at desired stellar parameters, and to explore the dependence of Gaia photometry on Teff, log g, [Fe/H], [α/Fe] and E(B - V). Gaia colours for the Sun and Vega, and Teff-dependent extinction coefficients, are also provided.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Nunamaker, T R
1983-01-01
This article provides an illustrative application of Data Envelopment Analysis (DEA) methodology to the measurement of routine nursing service efficiency at a group of Wisconsin hospitals. The DEA efficiency ratings and cost savings estimates are then compared to those resulting from application of Medicare's routine cost limitation to the sample data. DEA is also used to determine if any changes in the potential for efficient operations occurred during the 1978-1979 period. Empirical results were representative of the fundamental differences existing between the DEA and cost per patient day approaches. No evidence was found to support the notion that the overall potential for efficient delivery of routine services by the sample institutions was greater in one year than another. PMID:6874357
Nair, Shalini; Nair, Bijesh Ravindran; Vidyasagar, Ajay; Joseph, Mathew
2016-08-01
The routine management of coagulopathy during surgery involves assessing haemoglobin, prothrombin time (PT), activated partial thromboplastin time (aPTT) and platelets. Correction of these parameters involves administration of blood, fresh frozen plasma and platelet concentrates. The study was aimed at identifying the most common coagulation abnormality during neurosurgical procedures and the treatment of dilutional coagulopathy with blood components. During a 2-year period, all adult patients undergoing neurosurgical procedures who were transfused two or more units of red cells were prospectively evaluated for the presence of a coagulopathy. PT, aPTT, platelet count and fibrinogen levels were estimated before starting component therapy. After assessing PT, aPTT, platelet count and fibrinogen levels following two or more blood transfusions, thirty patients were found to have at least one abnormal parameter that required administration of a blood product. The most common abnormality was a low fibrinogen level, seen in 26 patients; this was the only abnormality in three patients. No patient was found to have an abnormal PT or aPTT without either the fibrinogen concentration or platelet count or both being low. Low fibrinogen concentration was the most common coagulation abnormality found after blood transfusions for neurosurgical procedures.
Development of a Physiologically-Based Pharmacokinetic Model of the Rat Central Nervous System
Badhan, Raj K. Singh; Chenel, Marylore; Penny, Jeffrey I.
2014-01-01
Central nervous system (CNS) drug disposition is dictated by a drug’s physicochemical properties and its ability to permeate physiological barriers. The blood–brain barrier (BBB), blood-cerebrospinal fluid barrier and centrally located drug transporter proteins influence drug disposition within the central nervous system. Attainment of adequate brain-to-plasma and cerebrospinal fluid-to-plasma partitioning is important in determining the efficacy of centrally acting therapeutics. We have developed a physiologically-based pharmacokinetic model of the rat CNS which incorporates brain interstitial fluid (ISF), choroidal epithelial and total cerebrospinal fluid (CSF) compartments and accurately predicts CNS pharmacokinetics. The model yielded reasonable predictions of unbound brain-to-plasma partition ratio (Kpuu,brain) and CSF:plasma ratio (CSF:Plasmau) using a series of in vitro permeability and unbound fraction parameters. When using in vitro permeability data obtained from L-mdr1a cells to estimate rat in vivo permeability, the model successfully predicted, to within 4-fold, Kpuu,brain and CSF:Plasmau for 81.5% of compounds simulated. The model presented allows for simultaneous simulation and analysis of both brain biophase and CSF to accurately predict CNS pharmacokinetics from preclinical drug parameters routinely available during discovery and development pathways. PMID:24647103
DOE Office of Scientific and Technical Information (OSTI.GOV)
England, Tony; van Nieuwstadt, Lin; De Roo, Roger
This project, funded by the Department of Energy as DE-EE0005376, successfully measured wind-driven lake ice forces on an offshore structure in Lake Superior through one of the coldest winters in recent history. While offshore regions of the Great Lakes offer promising opportunities for harvesting wind energy, these massive bodies of freshwater also offer extreme and unique challenges. Among these challenges is the need to anticipate forces exerted on offshore structures by lake ice. The parameters of interest include the frequency, extent, and movement of lake ice, parameters that are routinely monitored via satellite, and ice thickness, a parameter that has been monitored at discrete locations over many years and is routinely modeled. For these data to be of use in the design of offshore structures, measurements are needed of the maximum forces that lake ice of known thickness might exert on an offshore structure; obtaining such measurements was the primary objective of this project.
Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas
2011-01-01
In order to improve reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaption is usually achieved by the manual adjustment of the segmentation methods parameters, which is time consuming and challenging for biologists with no knowledge on image processing. To avoid this, parameters of the presented methods automatically adapt to user generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed transform based segmentation routine is replaced by a fast marching level set based segmentation routine which incorporates knowledge on the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
The Pilates method and cardiorespiratory adaptation to training.
Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen
2016-01-01
Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.
NASA Technical Reports Server (NTRS)
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Willis, Brian H; Hyde, Christopher J
2014-05-01
To determine a plausible estimate for a test's performance in a specific setting using a new method for selecting studies. It is shown how routine data from practice may be used to define an "applicable region" for studies in receiver operating characteristic space. After qualitative appraisal, studies are selected based on the probability that their study accuracy estimates arose from parameters lying in this applicable region. Three methods for calculating these probabilities are developed and used to tailor the selection of studies for meta-analysis. The Pap test applied to the UK National Health Service (NHS) Cervical Screening Programme provides a case example. The meta-analysis for the Pap test included 68 studies, but at most 17 studies were considered applicable to the NHS. For conventional meta-analysis, the sensitivity and specificity (with 95% confidence intervals) were estimated to be 72.8% (65.8, 78.8) and 75.4% (68.1, 81.5) compared with 50.9% (35.8, 66.0) and 98.0% (95.4, 99.1) from tailored meta-analysis using a binomial method for selection. Thus, for a cervical intraepithelial neoplasia (CIN) 1 prevalence of 2.2%, the post-test probability for CIN 1 would increase from 6.2% to 36.6% between the two methods of meta-analysis. Tailored meta-analysis provides a method for augmenting study selection based on the study's applicability to a setting. As such, the summary estimate is more likely to be plausible for a setting and could improve diagnostic prediction in practice. Copyright © 2014 Elsevier Inc. All rights reserved.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following the forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during the pollution episodes. Finally, uncertainty and sensitivity analysis were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
Neural network evaluation of tokamak current profiles for real time control
NASA Astrophysics Data System (ADS)
Wróblewski, Dariusz
1997-02-01
Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.
Neural network evaluation of tokamak current profiles for real time control (abstract)
NASA Astrophysics Data System (ADS)
Wróblewski, Dariusz
1997-01-01
Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.
Shahzad, Muhammad I; Nichol, Janet E; Wang, Jun; Campbell, James R; Chan, Pak W
2013-09-01
Hong Kong's surface visibility has decreased in recent years due to air pollution from rapid social and economic development in the region. In addition to deteriorating health standards, reduced visibility disrupts routine civil and public operations, most notably transportation and aviation. Regional estimates of visibility solved operationally using available ground and satellite-based estimates of aerosol optical properties and vertical distribution may prove more effective than standard reliance on a few existing surface visibility monitoring stations. Previous studies have demonstrated that such satellite measurements correlate well with near-surface optical properties, even though these sensors do not provide range-resolved information and require indirect parameterizations to solve for relevant parameters. By expanding such analysis to include vertically resolved aerosol profile information from an autonomous ground-based lidar instrument, this work develops six models for automated assessment of surface visibility. Regional visibility is estimated using co-incident ground-based lidar, sun photometer, visibility meter and MODerate-resolution Imaging Spectroradiometer (MODIS) aerosol optical depth data sets. Using a 355 nm extinction coefficient profile solved from the lidar, MODIS AOD (aerosol optical depth) is scaled down to the surface to generate a regional composite depiction of surface visibility. These results demonstrate the potential for applying passive satellite depictions of broad-scale aerosol optical properties together with a ground-based surface lidar and zenith-viewing sun photometer for improving quantitative assessments of visibility in a city such as Hong Kong.
Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies.
Dowd, J E; Weber, I; Rodriguez, B; Piret, J M; Kwok, K E
1999-05-20
The selection of medium feed rates for perfusion bioreactors represents a challenge for process optimization, particularly in bioreactors that are sampled infrequently. When the present and immediate future of a bioprocess can be adequately described, predictive control can minimize deviations from set points in a manner that can maximize process consistency. Predictive control of perfusion hollow-fiber bioreactors was investigated in a series of hybridoma cell cultures that compared operator control to computer estimation of feed rates. Adaptive software routines were developed to estimate the current and predict the future glucose uptake and lactate production of the bioprocess at each sampling interval. The current and future glucose uptake rates were used to select the perfusion feed rate in a designed response to deviations from the set point values. The routines presented a graphical user interface through which the operator was able to view the up-to-date culture performance and assess the model description of the immediate future culture performance. In addition, fewer samples were taken in the computer-estimated cultures, reducing labor and analytical expense. The use of these predictive controller routines and the graphical user interface decreased the glucose and lactate concentration variances up to sevenfold, and antibody yields increased by 10% to 43%. Copyright 1999 John Wiley & Sons, Inc.
The HEASARC graphical user interface
NASA Technical Reports Server (NTRS)
White, N.; Barrett, P.; Jacobs, P.; Oneel, B.
1992-01-01
An OSF/Motif-based graphical user interface has been developed to facilitate the use of the database and data analysis software packages available from the High Energy Astrophysics Science Archive Research Center (HEASARC). It can also be used as an interface to other, similar, routines. A small number of tables are constructed to specify the possible commands and command parameters for a given set of analysis routines. These tables can be modified by a designer to affect the appearance of the interface screens. They can also be dynamically changed in response to parameter adjustments made while the underlying program is running. Additionally, a communication protocol has been designed so that the interface can operate locally or across a network. It is intended that this software be able to run on a variety of workstations and X terminals.
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in 30% of cases only with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contour. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
Romero, G; Panzalis, R; Ruegg, P
2017-11-01
The aim of this paper was to study the relationship of milk flow emission variables recorded during milking of dairy goats with variables related to milking routine, goat physiology, milking parameters and milking machine characteristics, to determine the variables affecting milking performance and help the goat industry pinpoint farm and milking practices that improve milking performance. In total, 19 farms were visited once during the evening milking. Milking parameters (vacuum level (VL), pulsation ratio and pulsation rate, vacuum drop), milk emission flow variables (milking time, milk yield, maximum milk flow (MMF), average milk flow (AVMF), time until 500 g/min milk flow is established (TS500)), doe characteristics of 8 to 10 goats/farm (breed, days in milk and parity), milking practices (overmilking, overstripping, pre-lag time) and milking machine characteristics (line height, presence of claw) were recorded on every farm. The relationships between recorded variables and farm were analysed by one-way ANOVA. The relationships of milk yield, MMF, milking time and TS500 with goat physiology, milking routine, milking parameters and milking machine design were analysed using a linear mixed model, considering the farm as the random effect. Farm had a significant effect (P<0.05) on all the studied variables. Milk emission flow variables were similar to those recommended in scientific studies. Milking parameters were adequate in most of the farms, being similar to those recommended in scientific studies. Few milking parameters and milking machine characteristics affected the tested variables: average vacuum level showed only a tendency to affect MMF, and milk pipeline height a tendency to affect TS500. Milk yield (MY) was mainly affected by parity, as the interaction of days in milk with parity was also significant. Milking time was mainly affected by milk yield and breed. Also significant were parity, the interaction of days in milk with parity and overstripping, whereas overmilking showed a slight tendency. We concluded that most of the studied variables were mainly related to goat physiology characteristics, as the effects of milking parameters and milking machine characteristics were scarce.
Iron deficiency anaemia in Nigerian infants.
Akinkugbe, F M; Ette, S I; Durowoju, T A
1999-01-01
Hematological parameters and the iron status of 50 randomly selected infants who were attending the research infant welfare clinic of the Institute of Child Health, Ibadan (ICHI), for routine immunization were studied. Investigations included estimations of packed cell volume (PCV), haemoglobin (Hb), serum iron (Fe), unsaturated iron-binding capacity (UIBC) and total iron-binding capacity (TIBC). Forty percent of the infants had PCVs below 0.32, 48% had Hbs below 10 g/dl and 27% had mean corpuscular volume (MCV) less than 70 fl. Thirty-seven percent of the children had serum Fe below 3.58 mmol/l, but only 4% had UIBC above 320 mmol/l. Fifty-two percent had a Transferrin Saturation Index (TSI) below 10%. Eighteen percent had MCV below 70 fl associated with TSI below 10%, and 67% of these had Hbs below 10 g/dl. The prevalence of iron deficiency anaemia in infants as shown in this study is very high. The ill effects of iron deficiency in childhood have been well documented. It is suggested that screening for anaemia should be offered at 9 months as part of a Child Survival Programme and that infants found to be anaemic should be treated. However, for cost-effectiveness and taking into consideration the high prevalence rate of iron deficiency in this age group, it might be preferable to give iron and weekly prophylactic antimalarials routinely to infants aged 9 to 15 months in lieu of screening.
Kontopantelis, Evangelos; Parisi, Rosa; Springate, David A; Reeves, David
2017-01-13
In modern health care systems, the computerization of all aspects of clinical care has led to the development of large data repositories. For example, in the UK, large primary care databases hold millions of electronic medical records, with detailed information on diagnoses, treatments, outcomes and consultations. Careful analyses of these observational datasets of routinely collected data can complement evidence from clinical trials or even answer research questions that cannot be addressed in an experimental setting. However, 'missingness' is a common problem for routinely collected data, especially for biological parameters over time. Absence of complete data for the whole of an individual's study period is a potential bias risk and standard complete-case approaches may lead to biased estimates. However, the structure of the data values makes standard cross-sectional multiple-imputation approaches unsuitable. In this paper we propose and evaluate mibmi, a new command for cleaning and imputing longitudinal body mass index data. The regression-based data cleaning aspects of the algorithm can be useful when researchers analyze messy longitudinal data. Although the multiple imputation algorithm is computationally expensive, it performed similarly to, or even better than, existing alternatives when interpolating observations. The mibmi algorithm can be a useful tool for analyzing longitudinal body mass index data, or other longitudinal data with very low individual-level variability.
NASA Astrophysics Data System (ADS)
Murray, Seth C.; Knox, Leighton; Hartley, Brandon; Méndez-Dorado, Mario A.; Richardson, Grant; Thomasson, J. Alex; Shi, Yeyin; Rajan, Nithya; Neely, Haly; Bagavathiannan, Muthukumar; Dong, Xuejun; Rooney, William L.
2016-05-01
The next generation of plant breeding progress requires accurately estimating plant growth and development parameters to be made over routine intervals within large field experiments. Hand measurements are laborious and time consuming and the most promising tools under development are sensors carried by ground vehicles or unmanned aerial vehicles, with each specific vehicle having unique limitations. Previously available ground vehicles have primarily been restricted to monitoring shorter crops or early growth in corn and sorghum, since plants taller than a meter could be damaged by a tractor or spray rig passing over them. Here we have designed two and already constructed one of these self-propelled ground vehicles with adjustable heights that can clear mature corn and sorghum without damage (over three meters of clearance), which will work for shorter row crops as well. In addition to regular RGB image capture, sensor suites are incorporated to estimate plant height, vegetation indices, canopy temperature and photosynthetically active solar radiation, all referenced using RTK GPS to individual plots. These ground vehicles will be useful to validate data collected from unmanned aerial vehicles and support hand measurements taken on plots.
Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay
2012-01-01
This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.
A statistical model of the human core-temperature circadian rhythm
NASA Technical Reports Server (NTRS)
Brown, E. N.; Choe, Y.; Luithardt, H.; Czeisler, C. A.
2000-01-01
We formulate a statistical model of the human core-temperature circadian rhythm in which the circadian signal is modeled as a van der Pol oscillator, the thermoregulatory response is represented as a first-order autoregressive process, and the evoked effect of activity is modeled with a function specific for each circadian protocol. The new model directly links differential equation-based simulation models and harmonic regression analysis methods and permits statistical analysis of both static and dynamical properties of the circadian pacemaker from experimental data. We estimate the model parameters by using numerically efficient maximum likelihood algorithms and analyze human core-temperature data from forced desynchrony, free-run, and constant-routine protocols. By representing explicitly the dynamical effects of ambient light input to the human circadian pacemaker, the new model can estimate with high precision the correct intrinsic period of this oscillator (approximately 24 h) from both free-run and forced desynchrony studies. Although the van der Pol model approximates well the dynamical features of the circadian pacemaker, the optimal dynamical model of the human biological clock may have a harmonic structure different from that of the van der Pol oscillator.
NASA Astrophysics Data System (ADS)
Sarti, Pierguido; Abbondanza, Claudio; Petrov, Leonid; Negusini, Monia
2011-01-01
The impact of signal path variations (SPVs) caused by antenna gravitational deformations on geodetic very long baseline interferometry (VLBI) results is evaluated for the first time. Elevation-dependent models of SPV for Medicina and Noto (Italy) telescopes were derived from a combination of terrestrial surveying methods to account for gravitational deformations. After applying these models in geodetic VLBI data analysis, estimates of the antenna reference point positions are shifted upward by 8.9 and 6.7 mm, respectively. The impact on other parameters is negligible. To simulate the impact of antenna gravitational deformations on the entire VLBI network, lacking measurements for other telescopes, we rescaled the SPV models of Medicina and Noto for other antennas according to their size. The effects of the simulations are changes in VLBI heights in the range [-3, 73] mm and a net scale increase of 0.3-0.8 ppb. The height bias is larger than random errors of VLBI position estimates, implying the possibility of significant scale distortions related to antenna gravitational deformations. This demonstrates the need to precisely measure gravitational deformations of other VLBI telescopes, to derive their precise SPV models and to apply them in routine geodetic data analysis.
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous Ftp a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition
NASA Astrophysics Data System (ADS)
Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.
2016-12-01
Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
Salvago, Pietro; Rizzo, Serena; Bianco, Antonino; Martines, Francesco
2017-03-01
To investigate the relationship between haematological routine parameters and audiogram shapes in patients affected by sudden sensorineural hearing loss (SSNHL). A retrospective study. All patients were divided into four groups according to the audiometric curve, and mean values of haematological parameters (haemoglobin, white blood cell, neutrophils and lymphocytes relative count, platelet count, haematocrit, prothrombin time, activated partial thromboplastin time, fibrinogen and neutrophil-to-lymphocyte ratio) of each group were statistically compared. The prognostic role of blood profile and coagulation test was also examined. A cohort of 183 SSNHL patients without comorbidities. With 48.78% achieving complete hearing recovery, individuals affected by upsloping hearing loss presented a better prognosis than the flat (18.36%), downsloping (19.23%) and anacusis (2.45%) groups (p = 0.0001). The multivariate analysis of complete blood count values revealed a lower mean percentage of lymphocytes (p = 0.041) and higher platelet levels (p = 0.015) in cases of downsloping hearing loss; with the exception of fibrinogen (p = 0.041), none of the main haematological parameters studied was associated with a poorer prognosis. Our work suggested a lack of association between haematological parameters and a defined audiometric picture in SSNHL patients; furthermore, only fibrinogen seems to influence the prognosis of this disease.
Estimating past and current attendance at winter sports areas...a pilot study
Richard L. Bury; James W. Hall
1963-01-01
Routine business records of towlift tickets or restaurant receipts provided estimates of total attendance over a 2-month period within 8 percent of true attendance, and attendance on an average day within 18 to 24 percent of true attendance. The chances were that estimates would fall within these limits 2 out of 3 times. Guides for field use can be worked out after...
Ratzinger, Franz; Dedeyan, Michel; Rammerstorfer, Matthias; Perkmann, Thomas; Burgmann, Heinz; Makristathis, Athanasios; Dorffner, Georg; Loetsch, Felix; Blacky, Alexander; Ramharter, Michael
2015-01-01
Adequate early empiric antibiotic therapy is pivotal for the outcome of patients with bloodstream infections. In clinical practice the use of surrogate laboratory parameters is frequently proposed to predict underlying bacterial pathogens; however there is no clear evidence for this assumption. In this study, we investigated the discriminatory capacity of predictive models consisting of routinely available laboratory parameters to predict the presence of Gram-positive or Gram-negative bacteremia. Major machine learning algorithms were screened for their capacity to maximize the area under the receiver operating characteristic curve (ROC-AUC) for discriminating between Gram-positive and Gram-negative cases. Data from 23,765 patients with clinically suspected bacteremia were screened and 1,180 bacteremic patients were included in the study. A relative predominance of Gram-negative bacteremia (54.0%), which was more pronounced in females (59.1%), was observed. The final model achieved 0.675 ROC-AUC resulting in 44.57% sensitivity and 79.75% specificity. Various parameters presented a significant difference between both genders. In gender-specific models, the discriminatory potency was slightly improved. The results of this study do not support the use of surrogate laboratory parameters for predicting classes of causative pathogens. In this patient cohort, gender-specific differences in various laboratory parameters were observed, indicating differences in the host response between genders. PMID:26522966
Talbird, Sandra E; Graham, Jonathan; Mauskopf, Josephine; Masseria, Cristina; Krishnarajah, Girishanthy
2015-01-01
The Advisory Committee on Immunization Practices (ACIP) recommends the use of tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap) vaccine for routine wound management in adolescents and adults who require a tetanus toxoid-containing vaccine who were vaccinated ≥ 5 years earlier with tetanus toxoid, reduced diphtheria toxoid (Td) vaccine, and who have not previously received Tdap. To estimate the overall budget and health impact of vaccinating individuals presenting for wound management with Tdap instead of Td vaccine, the current standard of care in practices that do not use Tdap for purposes of wound management. A decision-analytic economic model was developed to estimate the expected increase in direct medical costs and the expected number of cases of pertussis avoided associated with the use of Tdap instead of Td vaccine in the wound management setting. Patients eligible for Tdap were aged 10+ years and required a tetanus-containing vaccine. Age-specific wound incidence data and Td and Tdap vaccination rates were taken from the National Health Interview Survey and the National Immunization Survey for the most recent available year. Age-specific pertussis incidence used in this analysis (151 per 100,000 for adolescents, 366 per 100,000 for those aged 20-64 years, and 176 per 100,000 for those aged 65+ years) used reported incidence rates adjusted by a factor of 10 for adolescents and by a factor of 100 for adults, based on assumptions previously made by ACIP to account for underreporting. Vaccine wholesale acquisition costs without federal excise tax were assumed in the base case. Efficacy of vaccination with Tdap in preventing pertussis was based on clinical trial data. Possible herd immunity effects of vaccination were not included in the model. Costs associated with vaccination and treatment of pertussis cases were reported as total annual costs and per-member-per-month (PMPM) costs for hypothetical health plans and for the U.S. population. Aggregate and incremental costs and pertussis cases avoided were presented undiscounted (as recommended for budget-impact analyses) annually and cumulatively over a 3-year time horizon in 2012 U.S. dollars. Scenario analyses were conducted on key parameters, including wound incidence, pertussis incidence, vaccine efficacy and waning protection against pertussis, uptake rates for Tdap, and vaccine prices using alternative data sources or alternative clinically relevant assumptions. For a health plan with 1 million covered lives aged < 65 years, vaccination with Tdap instead of Td was estimated to cost an additional $132,364 ($0.01 PMPM) in the first year and an additional $368,640 ($0.01 PMPM) cumulatively over 3 years. For a health plan with 1 million covered lives aged 65+ years, vaccination with Tdap instead of Td was estimated to cost an additional $201,165 ($0.02 PMPM) in the first year and an additional $549,568 ($0.02 PMPM) cumulatively over 3 years. For the U.S. population aged 10+ years, vaccination with Tdap instead of Td was estimated to result in protection against pertussis for an additional 2.7 million patients with wounds annually and was estimated to cost an additional $121,101,671 to avoid 42,104 cases of pertussis over the 3-year time horizon. Results were sensitive to input parameter values, particularly parameters associated with the number of patients with wounds vaccinated with Tdap (range 2.7 to 5.1 million patients). 
However, for all of the alternative scenarios tested, the expected increase in PMPM costs ranged from < $0.01 to $0.03. Vaccination of adolescents and adults with Tdap for wound management may result in an increase in PMPM costs for health plans of < $0.01 to $0.03. Given the potential reduction in pertussis cases at the population level, vaccination with Tdap for routine wound management could be considered as another strategy to help address the pertussis public health concern in the United States.
An Improved Analysis of Forest Carbon Dynamics using Data Assimilation
NASA Technical Reports Server (NTRS)
Williams, Mathew; Schwarz, Paul A.; Law, Beverly E.; Kurpius, Meredith R.
2005-01-01
There are two broad approaches to quantifying landscape C dynamics - by measuring changes in C stocks over time, or by measuring fluxes of C directly. However, these data may be patchy, and have gaps or biases. An alternative approach to generating C budgets has been to use process-based models, constructed to simulate the key processes involved in C exchange. However, the process of model building is arguably subjective, and parameters may be poorly defined. This paper demonstrates why data assimilation (DA) techniques - which combine stock and flux observations with a dynamic model - improve estimates of, and provide insights into, ecosystem carbon (C) exchanges. We use an ensemble Kalman filter (EnKF) to link a series of measurements with a simple box model of C transformations. Measurements were collected at a young ponderosa pine stand in central Oregon over a 3-year period, and include eddy flux and soil CO2 efflux data, litterfall collections, stem surveys, root and soil cores, and leaf area index data. The simple C model is a mass balance model with nine unknown parameters, tracking changes in C storage among five pools: foliar, wood and fine root pools in vegetation, and also fresh litter and soil organic matter (SOM) plus coarse woody debris pools. We nested the EnKF within an optimization routine to generate estimates from the data of the unknown parameters and the five initial conditions for the pools. The efficacy of the DA process can be judged by comparing the probability distributions of estimates produced with the EnKF analysis vs. those produced with reduced data or model alone. Using the model alone, estimated net ecosystem exchange of C (NEE) = -251 ± 197 g C m-2 over the 3 years, compared with an estimate of -419 ± 29 g C m-2 when all observations were assimilated into the model. The uncertainty on daily measurements of NEE via eddy fluxes was estimated at 0.5 g C m-2 day-1, but the uncertainty on assimilated estimates averaged 0.47 g C m-2 day-1, and only exceeded 0.5 g C m-2 day-1 on days where neither eddy flux nor soil efflux data were available. In generating C budgets, the assimilation process reduced the uncertainties associated with using data or model alone and the forecasts of NEE were statistically unbiased estimates. The results of the analysis emphasize the importance of time series as constraints. Occasional, rare measurements of stocks have limited use in constraining the estimates of other components of the C cycle. Long time series are particularly crucial for improving the analysis of pools with long time constants, such as SOM, woody biomass, and woody debris. Long-running forest stem surveys, and tree ring data, offer a rich resource that could be assimilated to provide an important constraint on C cycling of slow pools. For extending estimates of NEE across regions, DA can play a further important role, by assimilating remote-sensing data into the analysis of C cycles. We show, via sensitivity analysis, how assimilating an estimate of photosynthesis - which might be provided indirectly by remotely sensed data - improves the analysis of NEE.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
Immunization against Haemophilus Influenzae Type b in Iran; Cost-utility and Cost-benefit Analyses
Moradi-Lakeh, Maziar; Shakerian, Sareh; Esteghamati, Abdoulreza
2012-01-01
Background: Haemophilus Influenzae type b (Hib) is an important cause of morbidity and mortality in children. Although its burden is considerably preventable by vaccine, routine vaccination against Hib has not been defined in the National Immunization Program of Iran. This study was performed to assess the cost-benefit and cost-utility of running an Hib vaccination program in Iran. Methods: Based on a previous systematic review and meta-analysis for vaccine efficacy, we estimated the averted DALYs (Disability adjusted life years) and cost-benefit of vaccination. Different acute invasive forms of Hib infection and the permanent sequelae were considered for estimating the attributed DALYs. We used a societal perspective for economic evaluation and included both direct and indirect costs of alternative options about vaccination. An annual discount rate of 3% and standard age-weighting were used for estimation. To assess the robustness of the results, a sensitivity analysis was performed. Results: The incidence of Hib infection was estimated at 43.0 per 100,000, which can be reduced to 6.7 by vaccination. Total costs of vaccination were estimated at US$ 15,538,129. Routine vaccination of the 2008 birth cohort would prevent 4079 DALYs at a cost per averted-DALY of US$ 4535. If we consider parents’ loss of income and future productivity loss of children, it would save US$ 8,991,141, with a benefit-cost ratio of 2.14 in the base-case analysis. Sensitivity analysis showed a range of 0.78 to 3.14 for benefit-to-cost ratios. Conclusion: Considering costs per averted DALY, vaccination against Hib is a cost-effective health intervention in Iran, and allocating resources for routine vaccination against Hib seems logical. PMID:22708030
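The headline ratios of a cost-utility/cost-benefit analysis of this kind reduce to simple arithmetic once DALYs averted and monetised benefits are in hand; the figures below are hypothetical placeholders, not the study's cost model or discounting details.

```python
# Illustrative cost-utility / cost-benefit arithmetic (hypothetical numbers,
# not the study's actual cost model, discounting or age-weighting)
program_cost = 15_000_000.0        # total cost of the vaccination programme (US$)
averted_treatment_cost = 6_000_000.0
dalys_averted = 4_000.0
monetised_benefits = 19_000_000.0  # averted treatment costs + productivity gains

net_cost = program_cost - averted_treatment_cost
cost_per_daly_averted = net_cost / dalys_averted
benefit_cost_ratio = monetised_benefits / program_cost
print(f"US$ {cost_per_daly_averted:,.0f} per averted DALY, BCR = {benefit_cost_ratio:.2f}")
```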
Estimating the coverage of mental health programmes: a systematic review
De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram
2014-01-01
Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874
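The low-cost approach suggested in the conclusions amounts to dividing routine service-contact counts by a prevalence-based estimate of the population in need; the numbers in this sketch are hypothetical.

```python
# Contact coverage = people in contact with services / target population in need.
# All inputs are hypothetical placeholders for illustration.
adult_population = 1_200_000
prevalence_depression = 0.045      # from a national prevalence survey, say
treated_in_programme = 21_600      # from routine clinical records

in_need = adult_population * prevalence_depression
contact_coverage = treated_in_programme / in_need
print(f"Contact coverage: {contact_coverage:.1%}")   # 40.0%
```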
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Safety aspects of lipidapheresis using DALI and MONET - Multicenter observational study.
Kozik-Jaromin, Justyna; Röseler, Eberhard; Heigl, Franz; Spitthöver, Ralf; Ringel, Jens; Schmitz, Gerd; Heinzler, Rainer; Abdul-Rahman, Nadim; Leistikow, Frank; Himmelsbach, Frido; Schettler, Volker; Uhlenbusch-Körwer, Ingrid; Ramlow, Wolfgang
2017-11-01
Lipidapheresis was introduced for intractable hyperlipidemia as a more selective therapy than plasma exchange aiming to enhance efficacy and limit side-effects. Although this therapy is regarded safe, multicenter data from routine application are limited. We investigated direct adsorption of lipoproteins (DALI) and lipofiltration (MONET) regarding the short and the long-term safety aspects. This multicenter observational study prospectively evaluated 2154 DALI and 1297 MONET sessions of 122 patients during a period of 2 years. Safety parameters included clinical side-effects (adverse device effects, ADEs), technical complications, blood pressure and pulse rate. Also routinely performed laboratory parameters were documented. Analysis of laboratory parameters was not corrected for blood dilution. Overall 0.4% DALI and 0.5% MONET treatments were affected by ADE. Technical complications occurred in 2.1% and in 0.8% DALI and MONET sessions, respectively. The most frequent ADE was hypotension, and the majority of technical problems were related to vascular access. Both types of treatments led to a drop of thrombocytes in the range of 7-8%. Hematocrit and erythrocytes decreased only during the DALI treatments by about 6%. Leucocytes decreased during the DALI therapy (∼15%), whereas they increased during the MONET application (∼11%). MONET treatment was associated with a higher reduction of proteins (fibrinogen: 58% vs. 23%, albumin: 12% vs. 7%, CRP: 33% vs. 19% for MONET and DALI, respectively). Apart from severe thrombocytopenia in two DALI patients, changes of other parameters were typically transient. Under routine use the frequency of side-effects was low. Still, monitoring of blood count and proteins in chronic apheresis patients is recommended. Copyright © 2017. Published by Elsevier B.V.
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
Clegg, Andrew; Bates, Chris; Young, John; Ryan, Ronan; Nichols, Linda; Ann Teale, Elizabeth; Mohammed, Mohammed A.; Parry, John; Marshall, Tom
2016-01-01
Background: frailty is an especially problematic expression of population ageing. International guidelines recommend routine identification of frailty to provide evidence-based treatment, but currently available tools require additional resource. Objectives: to develop and validate an electronic frailty index (eFI) using routinely available primary care electronic health record data. Study design and setting: retrospective cohort study. Development and internal validation cohorts were established using a randomly split sample of the ResearchOne primary care database. External validation cohort established using THIN database. Participants: patients aged 65–95, registered with a ResearchOne or THIN practice on 14 October 2008. Predictors: we constructed the eFI using the cumulative deficit frailty model as our theoretical framework. The eFI score is calculated by the presence or absence of individual deficits as a proportion of the total possible. Categories of fit, mild, moderate and severe frailty were defined using population quartiles. Outcomes: outcomes were 1-, 3- and 5-year mortality, hospitalisation and nursing home admission. Statistical analysis: hazard ratios (HRs) were estimated using bivariate and multivariate Cox regression analyses. Discrimination was assessed using receiver operating characteristic (ROC) curves. Calibration was assessed using pseudo-R2 estimates. Results: we include data from a total of 931,541 patients. The eFI incorporates 36 deficits constructed using 2,171 CTV3 codes. One-year adjusted HR for mortality was 1.92 (95% CI 1.81–2.04) for mild frailty, 3.10 (95% CI 2.91–3.31) for moderate frailty and 4.52 (95% CI 4.16–4.91) for severe frailty. Corresponding estimates for hospitalisation were 1.93 (95% CI 1.86–2.01), 3.04 (95% CI 2.90–3.19) and 4.73 (95% CI 4.43–5.06) and for nursing home admission were 1.89 (95% CI 1.63–2.15), 3.19 (95% CI 2.73–3.73) and 4.76 (95% CI 3.92–5.77), with good to moderate discrimination but low calibration estimates. Conclusions: the eFI uses routine data to identify older people with mild, moderate and severe frailty, with robust predictive validity for outcomes of mortality, hospitalisation and nursing home admission. Routine implementation of the eFI could enable delivery of evidence-based interventions to improve outcomes for this vulnerable group. PMID:26944937
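The eFI arithmetic described above (deficits present as a proportion of 36 possible deficits, then banded into categories) can be sketched as follows; the cut-points shown are illustrative stand-ins for the study's population-quartile thresholds.

```python
def electronic_frailty_index(deficits_present: int, total_deficits: int = 36) -> float:
    """Cumulative-deficit score: proportion of possible deficits recorded."""
    return deficits_present / total_deficits

def frailty_category(efi: float) -> str:
    # Illustrative quartile-style cut-points; the study derived its bands from
    # population quartiles, so these thresholds are placeholders.
    if efi <= 0.12:
        return "fit"
    elif efi <= 0.24:
        return "mild frailty"
    elif efi <= 0.36:
        return "moderate frailty"
    return "severe frailty"

efi = electronic_frailty_index(deficits_present=9)    # 9/36 = 0.25
print(efi, frailty_category(efi))                      # 0.25 moderate frailty
```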
The impact of lake and reservoir parameterization on global streamflow simulation.
Zajac, Zuzanna; Revilla-Romero, Beatriz; Salamon, Peter; Burek, Peter; Hirpa, Feyera A; Beck, Hylke
2017-05-01
Lakes and reservoirs affect the timing and magnitude of streamflow, and are therefore essential hydrological model components, especially in the context of global flood forecasting. However, the parameterization of lake and reservoir routines on a global scale is subject to considerable uncertainty due to lack of information on lake hydrographic characteristics and reservoir operating rules. In this study we estimated the effect of lakes and reservoirs on global daily streamflow simulations of a spatially-distributed LISFLOOD hydrological model. We applied state-of-the-art global sensitivity and uncertainty analyses for selected catchments to examine the effect of uncertain lake and reservoir parameterization on model performance. Streamflow observations from 390 catchments around the globe and multiple performance measures were used to assess model performance. Results indicate a considerable geographical variability in the lake and reservoir effects on the streamflow simulation. Nash-Sutcliffe Efficiency (NSE) and Kling-Gupta Efficiency (KGE) metrics improved for 65% and 38% of catchments respectively, with median skill score values of 0.16 and 0.2 while scores deteriorated for 28% and 52% of the catchments, with median values -0.09 and -0.16, respectively. The effect of reservoirs on extreme high flows was substantial and widespread in the global domain, while the effect of lakes was spatially limited to a few catchments. As indicated by global sensitivity analysis, parameter uncertainty substantially affected uncertainty of model performance. Reservoir parameters often contributed to this uncertainty, although the effect varied widely among catchments. The effect of reservoir parameters on model performance diminished with distance downstream of reservoirs in favor of other parameters, notably groundwater-related parameters and channel Manning's roughness coefficient. This study underscores the importance of accounting for lakes and, especially, reservoirs and using appropriate parameterization in large-scale hydrological simulations.
Adding source positions to the IVS Combination
NASA Astrophysics Data System (ADS)
Bachmann, S.; Thaller, D.
2016-12-01
Simultaneous estimation of source positions, Earth orientation parameters (EOPs) and station positions in one common adjustment is crucial for a consistent generation of celestial and terrestrial reference frame (CRF and TRF, respectively). VLBI is the only technique to guarantee this consistency. Previous publications showed that the VLBI intra-technique combination could improve the quality of the EOPs and station coordinates compared to the individual contributions. By now, the combination of EOP and station coordinates is well established within the IVS and in combination with other space geodetic techniques (e.g. inter-technique combined TRF like the ITRF). Most of the contributing IVS Analysis Centers (AC) now provide source positions as a third parameter type (besides EOP and station coordinates), which have not been used for an operational combined solution yet. A strategy for the combination of source positions has been developed and integrated into the routine IVS combination. Investigations are carried out to compare the source positions derived from different IVS ACs with the combined estimates to verify whether the source positions are improved by the combination, as it has been proven for EOP and station coordinates. Furthermore, global solutions of source positions, i.e., so-called catalogues describing a CRF, are generated consistently with the TRF similar to the IVS operational combined quarterly solution. The combined solutions of the source positions time series and the consistently generated TRF and CRF are compared internally to the individual solutions of the ACs as well as to external CRF catalogues and TRFs. Additionally, comparisons of EOPs based on different CRF solutions are presented as an outlook for consistent EOP, CRF and TRF realizations.
CARES/Life Ceramics Durability Evaluation Software Enhanced for Cyclic Fatigue
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.
1999-01-01
The CARES/Life computer program predicts the probability of a monolithic ceramic component's failure as a function of time in service. The program has many features and options for materials evaluation and component design. It couples commercial finite element programs--which resolve a component's temperature and stress distribution--to reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. The capability, flexibility, and uniqueness of CARES/Life have attracted many users representing a broad range of interests and have resulted in numerous awards for technological achievements and technology transfer. Recent work with CARES/Life was directed at enhancing the program's capabilities with regard to cyclic fatigue. Only in the last few years have ceramics been recognized to be susceptible to enhanced degradation from cyclic loading. To account for cyclic loads, researchers at the NASA Lewis Research Center developed a crack growth model that combines the Power Law (time-dependent) and the Walker Law (cycle-dependent) crack growth models. This combined model has the characteristics of Power Law behavior (decreased damage) at high R ratios (minimum load/maximum load) and of Walker law behavior (increased damage) at low R ratios. In addition, a parameter estimation methodology for constant-amplitude, steady-state cyclic fatigue experiments was developed using nonlinear least squares and a modified Levenberg-Marquardt algorithm. This methodology is used to give best estimates of parameter values from cyclic fatigue specimen rupture data (usually tensile or flexure bar specimens) for a relatively small number of specimens. Methodology to account for runout data (unfailed specimens over the duration of the experiment) was also included.
NASA Astrophysics Data System (ADS)
Hashim, S.; Karim, M. K. A.; Bakar, K. A.; Sabarudin, A.; Chin, A. W.; Saripan, M. I.; Bradley, D. A.
2016-09-01
The magnitude of radiation dose in computed tomography (CT) depends on the scan acquisition parameters, investigated herein using an anthropomorphic phantom (RANDO®) and thermoluminescence dosimeters (TLD). Specific interest was in the organ doses resulting from CT thorax examination, the specific k coefficient for effective dose estimation for particular protocols also being determined. For measurement of doses representing five main organs (thyroid, lung, liver, esophagus and skin), TLD-100 (LiF:Mg, Ti) were inserted into selected holes in a phantom slab. Five CT thorax protocols were investigated, one routine (R1) and four that were modified protocols (R2 to R5). Organ doses were ranked from greatest to least, found to lie in the order: thyroid > skin > lung > liver > breast. The greatest dose, for thyroid at 25 mGy, was that in use of R1 while the lowest, at 8.8 mGy, was in breast tissue using R3. Effective dose (E) was estimated using three standard methods: the International Commission on Radiological Protection (ICRP)-103 recommendation (E103), the computational phantom CT-EXPO (ECTEXPO) method, and the dose-length product (DLP) based approach. E103 k factors were constant for all protocols, 8% less than that of the universal k factor. Due to inconsistency in tube potential and pitch factor, the k factors from CT-EXPO were found to vary between 0.015 and 0.010 for protocols R3 and R5. With considerable variation between scan acquisition parameters and organ doses, optimization of practice is necessary in order to reduce patient organ dose.
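Of the three estimation methods mentioned, the DLP-based approach is a one-line calculation, E ≈ k × DLP; the conversion coefficient and DLP below are generic illustrative values, not this study's measurements.

```python
# DLP-based effective dose: E ~= k * DLP.
# k is a region-specific conversion coefficient (mSv per mGy*cm); the generic
# adult chest value of ~0.014 and the DLP below are illustrative only.
k_chest = 0.014          # mSv / (mGy*cm)
dlp = 400.0              # mGy*cm, hypothetical scanner-reported dose-length product
effective_dose = k_chest * dlp
print(f"E = {effective_dose:.1f} mSv")   # 5.6 mSv
```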
Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.
Li, Qiuju; Pan, Jianxin; Belcher, John
2016-12-01
In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inferences. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index, Depression (Yes/No) ascertained with a cut-off score of 8 or more on the Hospital Anxiety and Depression Scale, and Pain Interference generated from the Medical Outcomes Study 36-item short-form health survey with values returned on an ordinal scale 1-5. There are some well-established methods for combined continuous and binary, or even continuous and ordinal responses, but little work has been done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian analysis methods are used to make statistical inferences. Simulation studies show that, by jointly modelling the trivariate outcomes, standard deviations of the estimates of parameters in the models are smaller and much more stable, leading to more efficient parameter estimates and reliable statistical inferences. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analysis, and shows other good statistical properties too. © The Author(s) 2014.
[Chart for estimation of fetal weight 2014 by the French College of Fetal Sonography (CFEF)].
Massoud, M; Duyme, M; Fontanges, M; Combourieu, D
2016-01-01
To establish a reference chart for estimated fetal weight (EFW) using the Hadlock formula based on recent biometric data (2012-2013). A prospective multicentric longitudinal study was carried out. Biometric parameters such as the head circumference (HC), abdominal circumference (AC) and femur length were measured in multiple areas of France from January 2012 until December 2013. EFW was calculated using the predictive formula of Hadlock using three parameters. An accurate gestational age, calculated in weeks of gestation (WG), was the main inclusion criterion. A polynomial regression approach was used to calculate the mean and standard deviation for every WG adjusted to raw data. Centiles of EFW were calculated from the z scores corresponding to -1.88, -1.28, 0, +1.28 and +1.88, respectively, for the 3rd, 10th, 50th, 90th and 97th percentiles in order to establish a new chart of EFW. Measurements were obtained for 33,143 fetuses between 17 and 38 WG. Reference charts with the 3rd, 10th, 50th, 90th and 97th percentiles are presented. The reference Chart 2014 is an in utero chart for EFW based on reliable and homogeneous ultrasound measurement data from a sample of 33,143 fetuses of a general population. It offers a tool for use in routine ultrasound examination for monitoring fetal growth and for diagnosing fetuses that are small for gestational age or showing growth restriction. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimations schemes will be contrasted using the NASA Mini-Mast as the focus structure.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
Optimization of brain PET imaging for a multicentre trial: the French CATI experience.
Habert, Marie-Odile; Marie, Sullivan; Bertin, Hugo; Reynal, Moana; Martini, Jean-Baptiste; Diallo, Mamadou; Kas, Aurélie; Trébossen, Régine
2016-12-01
CATI is a French initiative launched in 2010 to handle the neuroimaging of a large cohort of subjects recruited for an Alzheimer's research program called MEMENTO. This paper presents our test protocol and results obtained for the 22 PET centres (overall 13 different scanners) involved in the MEMENTO cohort. We determined acquisition parameters using phantom experiments prior to patient studies, with the aim of optimizing PET quantitative values to the highest possible per site, while reducing, if possible, variability across centres. Jaszczak's and 3D-Hoffman's phantom measurements were used to assess image spatial resolution (ISR), recovery coefficients (RC) in hot and cold spheres, and signal-to-noise ratio (SNR). For each centre, the optimal reconstruction parameters were chosen as those maximizing ISR and RC without a noticeable decrease in SNR. Point-spread-function (PSF) modelling reconstructions were discarded. The three figures of merit extracted from the images reconstructed with optimized parameters and routine schemes were compared, as were volumes of interest ratios extracted from Hoffman acquisitions. The net effect of the 3D-OSEM reconstruction parameter optimization was investigated on a subset of 18 scanners without PSF modelling reconstruction. Compared to the routine parameters of the 22 PET centres, average RC in the two smallest hot and cold spheres and average ISR remained stable or were improved with the optimized reconstruction, at the expense of slight SNR degradation, while the dispersion of values was reduced. For the subset of scanners without PSF modelling, the mean RC of the smallest hot sphere obtained with the optimized reconstruction was significantly higher than with routine reconstruction. The putamen and caudate-to-white matter ratios measured on 3D-Hoffman acquisitions of all centres were also significantly improved by the optimization, while the variance was reduced. This study provides guidelines for optimizing quantitative results for multicentric PET neuroimaging trials.
Performance evaluation of Abbott CELL-DYN Ruby for routine use.
Lehto, T; Hedberg, P
2008-10-01
CELL-DYN Ruby is a new automated hematology analyzer suitable for routine use in small laboratories and as a back-up or emergency analyzer in medium- to high-volume laboratories. The analyzer was evaluated by comparing the results from the CELL-DYN® Ruby with the results obtained from CELL-DYN Sapphire. Precision, linearity, and carryover between patient samples were also assessed. Precision was good at all levels for the routine complete blood count (CBC) parameters, CV% being
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
Carrying Backpacks: Physical Effects
ERIC Educational Resources Information Center
Illinois State Board of Education, 2006
2006-01-01
It is estimated that more than 40 million U.S. youth carry school materials in backpacks, routinely carrying books, laptop computers, personal and other items used on a daily basis. The Consumer Product Safety Commission (CPSC) estimates that 7,277 emergency visits each year result from injuries related to backpacks. Injury can occur when a child…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-29
... under the Resource Conservation and Recovery Act (RCRA); employees working at routine hazardous waste... 10.69 hours per response. Burden means the total time, effort, or financial resources expended by...: Annually. Estimated Total Average Number of Responses for Each Respondent: 1. Estimated Total Annual Hour...
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
Geophysical testing of rock and its relationships to physical properties
DOT National Transportation Integrated Search
2011-02-01
Testing techniques were designed to characterize spatial variability in geotechnical engineering physical parameters of : rock formations. Standard methods using seismic waves, which are routinely used for shallow subsurface : investigation, have lim...
A study of the 3D radiative transfer effect in cloudy atmospheres
NASA Astrophysics Data System (ADS)
Okata, M.; Teruyuki, N.; Suzuki, K.
2015-12-01
Evaluating the effect of clouds on the Earth's radiation budget remains a significant problem because of the large uncertainties in their microphysical and optical properties. Further investigation of 3D cloud radiative transfer problems is therefore needed, using not only models but also satellite observational data. For this purpose, we have developed a 3D Monte Carlo radiative transfer code implemented with various functions compatible with the OpenCLASTR R-Star radiation code for radiance and flux computation, i.e. forward and backward tracing routines, a non-linear k-distribution parameterization (Sekiguchi and Nakajima, 2008) for broadband solar flux calculation, and the DM method for flux and the TMS method for upward radiance (Nakajima and Tanaka 1998). We also developed a Minimum cloud Information Deviation Profiling Method (MIDPM) for constructing a 3D cloud field from MODIS/AQUA and CPR/CloudSat data. We then selected a best-matched radar reflectivity factor profile from the library for each off-nadir MODIS pixel where a CPR profile is not available, by minimizing the deviation between library MODIS parameters and those at the pixel. In this study, we used three cloud microphysical parameters as key parameters for the MIDPM, i.e. effective particle radius, cloud optical thickness and cloud-top temperature, and estimated the 3D cloud radiation budget. We examined the discrepancies between satellite-observed and model-simulated radiances, and the patterns of the three cloud microphysical parameters, to study the effects of cloud optical and microphysical properties on the radiation budget of cloud-laden atmospheres.
Simplex GPS and InSAR Inversion Software
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Pierce, Marlon E.
2012-01-01
Changes in the shape of the Earth's surface can be routinely measured with precisions better than centimeters. Processes below the surface often drive these changes and as a result, investigators require models with inversion methods to characterize the sources. Simplex inverts any combination of GPS (global positioning system), UAVSAR (uninhabited aerial vehicle synthetic aperture radar), and InSAR (interferometric synthetic aperture radar) data simultaneously for elastic response from fault and fluid motions. It can be used to solve for multiple faults and parameters, all of which can be specified or allowed to vary. The software can be used to study long-term tectonic motions and the faults responsible for those motions, or can be used to invert for co-seismic slip from earthquakes. Solutions involving estimation of fault motion and changes in fluid reservoirs such as magma or water are possible. Any arbitrary number of faults or parameters can be considered. Simplex specifically solves for any of location, geometry, fault slip, and expansion/contraction of a single or multiple faults. It inverts GPS and InSAR data for elastic dislocations in a half-space. Slip parameters include strike slip, dip slip, and tensile dislocations. It includes a map interface for both setting up the models and viewing the results. Results, including faults, and observed, computed, and residual displacements, are output in text format, a map interface, and can be exported to KML. The software interfaces with the QuakeTables database allowing a user to select existing fault parameters or data. Simplex can be accessed through the QuakeSim portal graphical user interface or run from a UNIX command line.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
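The recursive Fourier transform underlying the real-time frequency-domain method can be sketched as a running update of the discrete Fourier sum at a small set of analysis frequencies, to which the equation-error least-squares regression is then applied; the sketch below is a generic illustration, not the flight software.

```python
import numpy as np

class RecursiveFourier:
    """Running discrete Fourier sum X(w) += x[n] * exp(-j*w*n*dt),
    evaluated at a fixed set of analysis frequencies."""
    def __init__(self, freqs_hz, dt):
        self.w = 2.0 * np.pi * np.asarray(freqs_hz)
        self.dt = dt
        self.n = 0
        self.X = np.zeros(len(freqs_hz), dtype=complex)

    def update(self, x_n):
        self.X += x_n * np.exp(-1j * self.w * self.n * self.dt)
        self.n += 1
        return self.X * self.dt   # scaled running transform

# Toy use: transform a measured signal sample-by-sample as it arrives
rf = RecursiveFourier(freqs_hz=[0.5, 1.0, 2.0], dt=0.02)
for t in np.arange(0.0, 5.0, 0.02):
    rf.update(np.sin(2 * np.pi * 1.0 * t))
```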
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft at actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
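The interplay between ordinary least squares, prior information and the ridge parameter can be illustrated with a linearized sketch: penalized least squares toward a prior parameter vector has the closed form (XᵀX + kI)⁻¹(Xᵀy + kθ_prior). The data and prior below are made up, not the Truckee Meadows model.

```python
import numpy as np

def ridge_with_prior(X, y, theta_prior, k):
    """Minimize ||y - X theta||^2 + k * ||theta - theta_prior||^2.
    Closed form: (X'X + kI)^(-1) (X'y + k * theta_prior)."""
    n_params = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(n_params),
                           X.T @ y + k * theta_prior)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.3, size=50)
theta_prior = np.array([0.8, -1.5, 0.0])       # prior information of unknown reliability

for k in (0.0, 1.0, 10.0):                     # sweep the ridge parameter
    print(k, ridge_with_prior(X, y, theta_prior, k))
```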
Lobb, Eric C
2016-07-08
Version 6.3 of the RITG148+ software package offers eight automated analysis routines for quality assurance of the TomoTherapy platform. A performance evaluation of each routine was performed in order to compare RITG148+ results with traditionally accepted analysis techniques and verify that simulated changes in machine parameters are correctly identified by the software. Reference films were exposed according to AAPM TG-148 methodology for each routine and the RITG148+ results were compared with either alternative software analysis techniques or manual analysis techniques in order to assess baseline agreement. Changes in machine performance were simulated through translational and rotational adjustments to subsequently irradiated films, and these films were analyzed to verify that the applied changes were accurately detected by each of the RITG148+ routines. For the Hounsfield unit routine, an assessment of the "Frame Averaging" functionality and the effects of phantom roll on the routine results are presented. All RITG148+ routines reported acceptable baseline results consistent with alternative analysis techniques, with 9 of the 11 baseline test results showing agreement of 0.1 mm/0.1° or better. Simulated changes were correctly identified by the RITG148+ routines within approximately 0.2 mm/0.2°, with the exception of the Field Center vs. Jaw Setting routine, which was found to have limited accuracy in cases where field centers were not aligned for all jaw settings due to inaccurate autorotation of the film during analysis. The performance of the RITG148+ software package was found to be acceptable for introduction into our clinical environment as an automated alternative to traditional analysis techniques for routine TomoTherapy quality assurance testing.
Reference values of clinical chemistry and hematology parameters in rhesus monkeys (Macaca mulatta).
Chen, Younan; Qin, Shengfang; Ding, Yang; Wei, Lingling; Zhang, Jie; Li, Hongxia; Bu, Hong; Lu, Yanrong; Cheng, Jingqiu
2009-01-01
Rhesus monkey models are valuable to the studies of human biology. Reference values for clinical chemistry and hematology parameters of rhesus monkeys are required for proper data interpretation. Whole blood was collected from 36 healthy Chinese rhesus monkeys (Macaca mulatta) of either sex, 3 to 5 yr old. Routine chemistry and hematology parameters, and some special coagulation parameters including thromboelastograph and activities of coagulation factors were tested. We presented here the baseline values of clinical chemistry and hematology parameters in normal Chinese rhesus monkeys. These data may provide valuable information for veterinarians and investigators using rhesus monkeys in experimental studies.
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
Variation in the costs of delivering routine immunization services in Peru.
Walker, D; Mosqueira, N R; Penny, M E; Lanata, C F; Clark, A D; Sanderson, C F B; Fox-Rushby, J A
2004-09-01
Estimates of vaccination costs usually provide only point estimates at national level with no information on cost variation. In practice, however, such information is necessary for programme managers. This paper presents information on the variations in costs of delivering routine immunization services in three diverse districts of Peru: Ayacucho (a mountainous area), San Martin (a jungle area) and Lima (a coastal area). We consider the impact of variability on predictions of cost and reflect on the likely impact on expected cost-effectiveness ratios, policy decisions and future research practice. All costs are in 2002 prices in US dollars and include the costs of providing vaccination services incurred by 19 government health facilities during the January-December 2002 financial year. Vaccine wastage rates have been estimated using stock records. The cost per fully vaccinated child ranged from US$ 16.63 to 24.52 in Ayacucho, US$ 21.79 to 36.69 in San Martin and US$ 9.58 to 20.31 in Lima. The volume of vaccines administered and wastage rates are determinants of the variation in costs of delivering routine immunization services. This study shows there is considerable variation in the costs of providing vaccines across geographical regions and different types of facilities. Information on how costs vary can be used as a basis from which to generalize to other settings and provide more accurate estimates for decision-makers who do not have disaggregated data on local costs. Future studies should include sufficiently large sample sizes and ensure that regions are carefully selected in order to maximize the interpretation of cost variation.
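A cost per fully vaccinated child of the kind reported here is typically built up from dose price, wastage and delivery costs; the sketch below uses hypothetical figures rather than the Peruvian facility data.

```python
# Cost per fully vaccinated child (FVC), with vaccine wastage folded into the
# vaccine cost. All figures are hypothetical, not the study's data.
doses_per_child = 3
price_per_dose = 0.90            # US$
wastage_rate = 0.25              # share of doses opened but not administered
delivery_cost_per_child = 8.50   # personnel, cold chain, transport, etc.
children_fully_vaccinated = 10_000

vaccine_cost_per_child = doses_per_child * price_per_dose / (1.0 - wastage_rate)
cost_per_fvc = vaccine_cost_per_child + delivery_cost_per_child
total_cost = cost_per_fvc * children_fully_vaccinated
print(f"US$ {cost_per_fvc:.2f} per fully vaccinated child; total US$ {total_cost:,.0f}")
```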
Bacles, C F E; Bouchard, C; Lange, F; Manicki, A; Tentelier, C; Lepais, O
2018-03-01
This study assesses whether the effective number of breeders (Nb) can be estimated using a time and cost-effective protocol using genetic sibship reconstruction from a single sample of young-of-the-year (YOY) for the purposes of Atlantic salmon Salmo salar population monitoring. Nb was estimated for 10 consecutive reproductive seasons for S. salar in the River Nivelle, a small population located at the rear-edge of the species distribution area in France, chronically under its conservation limit and subjected to anthropogenic and environmental changes. Subsampling of real and simulated data showed that accurate estimates of Nb can be obtained from YOY genotypes, collected at moderate random sampling intensity, achievable using routine juvenile electrofishing protocols. Spatial bias and time elapsed since spawning were found to affect estimates, which must be accounted for in sampling designs. Nb estimated in autumn for S. salar in the River Nivelle was low and variable across years from 23 (95% C.I. 14-41) to 75 (53-101) and was not statistically correlated with the estimated number of returning adults, but it was positively correlated with the estimated number of YOY at age 9 months. Nb was found to be lower for intermediate levels of redd aggregation, suggesting that the strength of the competition between males to access females affects reproductive success variance depending on redd spatial configuration. Thus, environmental factors such as habitat availability and quality for spawning and YOY development predominate over demographic ones (number of returning adults) in driving long-term population viability for S. salar in the River Nivelle. This study showcases Nb as an integrated parameter, encompassing demographic and ecological information about a reproductive event, relevant to the assessment of both short-term effects of management practices and long-term population conservation status. © 2018 The Fisheries Society of the British Isles.
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
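For reference, the four-parameter logistic item response function behind the 4PM adds a lower asymptote c (guessing) and an upper asymptote d (slipping) to the usual two-parameter logistic; the item parameters in this sketch are hypothetical.

```python
import numpy as np

def irf_4pl(theta, a, b, c, d):
    """Four-parameter logistic IRF:
    P(correct | theta) = c + (d - c) / (1 + exp(-a * (theta - b)))."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 9)
# hypothetical item: discrimination 1.5, difficulty 0.0,
# guessing floor 0.10, slipping ceiling 0.95
print(irf_4pl(theta, a=1.5, b=0.0, c=0.10, d=0.95))
```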
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical trials, may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods were compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
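The two comparison metrics can be written out explicitly; the definitions below are the commonly used forms (the paper may differ in detail), and the clearance estimates are placeholders.

```python
import numpy as np

def relative_estimation_error(estimates, true_value):
    """REE (%) for each simulated data set: 100 * (est - true) / true."""
    return 100.0 * (np.asarray(estimates) - true_value) / true_value

def relative_rmse(estimates, true_value):
    """rRMSE (%): root mean square of the relative errors across data sets."""
    rel = (np.asarray(estimates) - true_value) / true_value
    return 100.0 * np.sqrt(np.mean(rel ** 2))

cl_estimates = [4.8, 5.3, 5.1, 4.5, 5.6]   # hypothetical clearance estimates, true CL = 5.0
print(relative_estimation_error(cl_estimates, 5.0))
print(relative_rmse(cl_estimates, 5.0))
```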
Control system estimation and design for aerospace vehicles
NASA Technical Reports Server (NTRS)
Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.
1972-01-01
The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as the volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
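As a reference for the clustering step, the sketch below is a plain-vanilla fuzzy C-means implementation (random membership initialization, alternating centroid and membership updates). It clusters synthetic feature vectors standing in for voxel time-activity curves and is not the modified, neighborhood-aware FCM described in the abstract.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: X is (n_samples, n_features), m is the fuzzifier."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # random fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]       # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return centers, u

# Example: cluster synthetic voxel time-activity curves into 3 tissue classes.
X = np.random.default_rng(2).random((500, 20))
centers, memberships = fuzzy_c_means(X, n_clusters=3)
print(memberships.argmax(axis=1)[:10])   # hard labels for the first few voxels
```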
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
NASA Astrophysics Data System (ADS)
Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.
2016-03-01
The saturated hydraulic conductivity (K_s) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of K_s using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural network (ANN) ensemble to estimate K_s from available soil data and fractal parameters. Significant correlations were found between K_s and the fractal parameters of particles and micro-aggregates. Estimation of K_s was improved significantly by using fractal parameters of soil micro-aggregates as predictors, whereas using the geometric mean and geometric standard deviation of particle diameter did not improve K_s estimates significantly. Using fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of K_s. Generally, fractal parameters can be successfully used as input parameters to improve the estimation of K_s in PTFs for smectitic soils. As a result, the ANN ensemble successfully related the fractal parameters of particles and micro-aggregates to K_s.
Adaptive Modal Identification for Flutter Suppression Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.
2016-01-01
In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.
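As a concrete reference point, the sketch below implements a generic recursive least-squares update of the kind the abstract contrasts with MRAC-based estimation. It is a toy Python illustration on a made-up regression problem, not the flutter-mode frequency and damping identification scheme itself.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """One recursive least-squares step with forgetting factor lam.
    theta: (n,1) parameter estimate, P: (n,n) covariance, phi: (n,1) regressor, y: scalar."""
    k = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
    err = y - (phi.T @ theta).item()                  # prediction error
    theta = theta + k * err
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Toy usage: estimate two coefficients of y = 2*x1 - 0.5*x2 from streaming data.
rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), np.eye(2) * 100.0
for _ in range(200):
    phi = rng.normal(size=(2, 1))
    y = 2.0 * phi[0, 0] - 0.5 * phi[1, 0] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta.ravel())   # should approach [2.0, -0.5]
```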
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation in rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
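A minimal sketch of the shrinkage idea behind empirical Bayes ranking follows: estimates with large sampling variance are pulled toward the overall mean before ranking, so they are less likely to dominate the extremes. The normal-normal, method-of-moments formulation and the numbers are illustrative assumptions, not the BBS procedure itself.

```python
import numpy as np

def eb_shrink(estimates, sampling_vars):
    """Shrink noisy parameter estimates toward their common mean
    (method-of-moments empirical Bayes under a normal-normal model)."""
    estimates = np.asarray(estimates, float)
    sampling_vars = np.asarray(sampling_vars, float)
    mu = np.mean(estimates)
    # between-parameter variance estimated by moments (floored at zero)
    tau2 = max(np.var(estimates, ddof=1) - np.mean(sampling_vars), 0.0)
    weights = tau2 / (tau2 + sampling_vars)          # shrinkage factors in [0, 1]
    return mu + weights * (estimates - mu)

# Trend estimates (e.g., % change per year) with unequal precision; ranking the
# shrunken values is less dominated by the most poorly estimated species.
trends = np.array([5.0, -4.2, 0.3, 1.1, -0.8])
variances = np.array([9.0, 8.0, 0.2, 0.5, 0.3])
print(eb_shrink(trends, variances))
```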
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from statistical thermodynamics principles or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. It is challenging to estimate parameters that enter a model nonlinearly, although there are many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration. That is, in the rational reaction rates, the numerator and the denominator are linear in parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it can produce analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show its superior performance over the Gauss-Newton method.
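To make the linearization idea concrete, the sketch below applies the same trick to a Michaelis-Menten rate, whose numerator and denominator are linear in the parameters: multiplying through by the denominator turns the fit into a linear least-squares problem. The identity weight matrix here is a placeholder; the special weight matrix designed in the paper is not reproduced.

```python
import numpy as np

# Synthetic Michaelis-Menten data: v = Vmax*S/(Km + S), rewritten as
# v = a*S - b*(v*S) with a = Vmax/Km and b = 1/Km, which is linear in (a, b).
rng = np.random.default_rng(3)
Vmax_true, Km_true = 5.0, 2.0
S = np.linspace(0.1, 10.0, 50)
v = Vmax_true * S / (Km_true + S) * (1.0 + 0.01 * rng.normal(size=S.size))

X = np.column_stack([S, -v * S])        # regressors of the linearized problem
W = np.eye(S.size)                      # placeholder weight matrix (identity here)
# weighted linear least squares: solve (X' W X) p = X' W v
a_hat, b_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ v)
Km_hat, Vmax_hat = 1.0 / b_hat, a_hat / b_hat
print(Vmax_hat, Km_hat)                 # close to the true values 5.0 and 2.0
```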
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
Lutya, Thozama Mandisa
2010-12-01
The United Nations estimates that 79% of teenage girls trafficked globally every year are forced into involuntary prostitution. About 247 000 South African children work in exploitative conditions; about 40 000 South African female teenagers work as prostitutes. This paper investigates lifestyles and routine activities of teenagers at risk of being trafficked for involuntary prostitution. The key concepts involuntary prostitution, intergenerational sex and exploitative conditions are defined in relation to the lifestyles and routine activities of South African female teenagers. Human trafficking for involuntary prostitution is described, based on a literature review. Lifestyle exposure and routine activities theories help to explain the potential victimisation of these teenagers in human trafficking for involuntary prostitution. Actual lifestyle and routine activities of South African teenagers and risky behaviours (substance abuse, intergenerational sex and child prostitution) are discussed as factors that make teens vulnerable to such trafficking. This paper recommends that human trafficking prevention efforts (awareness programmes and information campaigns) be directed at places frequented by human traffickers and teenagers in the absence of a capable guardian to reduce victimisation, as traffickers analyse the lifestyles and routine activities of their targets. South Africa should also interrogate entrenched practices such as intergenerational sex.
Lee, SeokHyun; Cho, KwangHyun; Park, MiNa; Choi, TaeJung; Kim, SiDong; Do, ChangHee
2016-01-01
This study was conducted to estimate the genetic parameters of β-hydroxybutyrate (BHBA) and acetone concentration in milk by Fourier transform infrared spectroscopy along with test-day milk production traits including fat %, protein % and milk yield based on monthly samples of milk obtained as part of a routine milk recording program in Korea. Additionally, the feasibility of using such data in the official dairy cattle breeding system for selection of cows with low susceptibility to ketosis was evaluated. A total of 57,190 monthly test-day records for parities 1, 2, and 3 of 7,895 cows with pedigree information were collected from April 2012 to August 2014 from herds enrolled in the Korea Animal Improvement Association. Multi-trait random regression models were separately applied to estimate genetic parameters of test-day records for each parity. The model included fixed herd test-day effects, calving age and season effects, and random regressions for additive genetic and permanent environmental effects. The greater variation in milk acetone concentration may provide a more sensitive indication of ketosis than milk BHBA, for which many observations were zero. Heritabilities of milk BHBA levels ranged from 0.04 to 0.17 with a mean of 0.09 for the interval between 4 and 305 days in milk during three lactations. The average heritabilities for milk acetone concentration were 0.29, 0.29, and 0.22 for parities 1, 2, and 3, respectively. There was no clear genetic association of the concentration of the two ketone bodies with the three test-day milk production traits, even though some correlations among breeding values of the test-day records in this study were observed. These results suggest that genetic selection for low susceptibility to ketosis in early lactation is possible. Further, it is desirable for the breeding scheme of dairy cattle to include the records of milk acetone rather than the records of milk BHBA. PMID:27608643
Hydrologic Modeling and Parameter Estimation under Data Scarcity for Java Island, Indonesia
NASA Astrophysics Data System (ADS)
Yanto, M.; Livneh, B.; Rajagopalan, B.; Kasprzyk, J. R.
2015-12-01
The Indonesian island of Java is routinely subjected to intense flooding, drought and related natural hazards, resulting in severe social and economic impacts. Although an improved understanding of the island's hydrology would help mitigate these risks, data scarcity issues make the modeling challenging. To this end, we developed a hydrological representation of Java using the Variable Infiltration Capacity (VIC) model, to simulate the hydrologic processes of several watersheds across the island. We measured the model performance using the Nash-Sutcliffe Efficiency (NSE) at a monthly time step. Data scarcity and quality issues for precipitation and streamflow warranted the application of a quality control procedure to the data to ensure consistency among watersheds, resulting in 7 usable watersheds. To optimize the model performance, the calibration parameters were estimated using the Borg Multi-Objective Evolutionary Algorithm (Borg MOEA), which offers efficient searching of the parameter space, adaptive population sizing and local optima escape facility. The result shows that calibration performance is best (NSE ~ 0.6 - 0.9) in the eastern part of the domain and moderate (NSE ~ 0.3 - 0.5) in the western part of the island. The validation results are lower (NSE ~ 0.1 - 0.5) and (NSE ~ 0.1 - 0.4) in the east and west, respectively. We surmise that the presence of outliers and stark differences in the climate between calibration and validation periods in the western watersheds are responsible for the low NSE in this region. In addition, we found that approximately 70% of total errors were contributed by less than 20% of total data. The spatial variability of model performance suggests the influence of both topographical and hydroclimatic controls on the hydrological processes. Most watersheds in the eastern part perform better in the wet season, and vice versa for the western part. This modeling framework is one of the first attempts at comprehensively simulating the hydrology in this maritime, tropical continent and offers insights for skillful hydrologic projections crucial for natural hazard mitigation.
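For reference, the Nash-Sutcliffe efficiency used to score the simulations can be computed as below; this is the standard definition, and the streamflow numbers are arbitrary placeholders.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus residual variance over observed variance."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Monthly streamflow example (arbitrary units)
obs = np.array([12.0, 30.0, 55.0, 40.0, 18.0, 9.0])
sim = np.array([14.0, 27.0, 50.0, 44.0, 20.0, 10.0])
print(nash_sutcliffe(obs, sim))
```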
Identification of AR(I)MA processes for modelling temporal correlations of GPS observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In many geodetic applications observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products is the neglect of temporal correlations of GPS observations. In practice the knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing first requires the determination of the order parameters of the AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes could be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large amount of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the modelling results of temporal correlations using high-order AR(I)MA processes are compared with those obtained by means of first-order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
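One practicable way to identify the order parameters of an AR(I)MA model for a residual series is an information-criterion search, sketched below with statsmodels on a synthetic AR(2) series standing in for a GPS residual time series. The candidate order ranges and noise level are assumptions; the identification procedure used in the study may differ.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a 1-Hz GPS residual time series (an AR(2) process here).
rng = np.random.default_rng(4)
n = 2000
resid = np.zeros(n)
for t in range(2, n):
    resid[t] = 0.7 * resid[t - 1] - 0.2 * resid[t - 2] + rng.normal(scale=0.003)

best = None
for p in range(0, 4):                     # candidate AR orders
    for q in range(0, 3):                 # candidate MA orders
        aic = ARIMA(resid, order=(p, 0, q)).fit().aic
        if best is None or aic < best[0]:
            best = (aic, p, q)
print("selected (p, q):", best[1:], "AIC:", round(best[0], 1))
```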
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Neofytou, Eirini; Sourvinos, George; Asmarianaki, Maria; Spandidos, Demetrios A; Makrigiannakis, Antonios
2009-06-01
To determine the prevalence of herpes viruses in the semen of an asymptomatic male cohort with and without infertility problems and its association with altered semen parameters. A prospective randomized study. Medical school and IVF clinic. One hundred seventy-two male patients undergoing routine semen analysis: 80 with normal semen parameters (control group) and 92 with abnormal semen parameters. Semen samples were collected by masturbation. The main outcome measures were DNA of the Herpesviridae family (herpes simplex virus 1 [HSV-1], herpes simplex virus 2 [HSV-2], Varicella zoster virus [VZV], Epstein-Barr virus [EBV], cytomegalovirus [CMV], human herpes virus type 6 [HHV-6], human herpes virus type 7 [HHV-7]) and routine semen parameters. Viral DNA was detected in 143/172 (83.1%) of the total samples for at least one herpes virus: HSV-1, 2.5%; VZV, 1.2%; EBV, 45%; CMV, 62.5%; HHV-6, 70%; HHV-7, 0% in the normal semen samples and HSV-1, 2.1%; VZV, 3.2%; EBV, 39.1%; CMV, 56.5%; HHV-6, 66.3%; HHV-7, 0% in the abnormal semen samples. No association was found between the presence of viral DNA and semen parameters. Interestingly, a statistically significant association between leukocytospermia and the presence of EBV DNA was observed. The DNA of herpes viruses is frequently detected in the semen of asymptomatic fertile and infertile male patients. Further studies are required to investigate the role of herpes viruses in male factor infertility.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq) performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection. PMID:27258018
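In the spirit of the high-frequency-content method mentioned above, a simple parametric-style detector can label a segment as tremor when most of its spectral power falls in a tremor band. The band edges, segment length, and threshold below are illustrative assumptions, not the values tuned against the clinicians' consensus in the study.

```python
import numpy as np
from scipy.signal import welch

def tremor_band_power(segment, fs, band=(4.0, 12.0)):
    """Fraction of spectral power inside an assumed tremor band (4-12 Hz)."""
    f, pxx = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].sum() / pxx.sum()

def detect_tremor(segment, fs, threshold=0.5):
    """Label a segment as tremor if most of its power lies in the tremor band.
    The 0.5 threshold is an illustrative choice, not a clinically validated value."""
    return tremor_band_power(segment, fs) > threshold

# Example: 3-s accelerometer segment at 100 Hz containing a 6-Hz oscillation plus noise.
fs = 100.0
t = np.arange(0, 3.0, 1.0 / fs)
segment = np.sin(2 * np.pi * 6.0 * t) + 0.3 * np.random.default_rng(5).normal(size=t.size)
print(detect_tremor(segment, fs))
```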
NASA Astrophysics Data System (ADS)
Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale
2014-12-01
Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research has explored an approach to estimate the nutrient fertility in coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March of 2009 and May and August of 2010, a sequence of multi-temporal in situ data and CCD images from China's HJ-1 satellite and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI) and HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the temporal-spatial distribution of TIN. The TIN values decreased from coastal waters to the open water areas, and TIN values in the northeast area were higher than those found in the southwest region of the study area. The maximum TIN values occurred in May. Additionally, the accuracy of TIN estimation was significantly improved when the SAR and optical data were used in combination rather than either data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.
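A hedged sketch of the fusion step follows: the four sensitive input variables named above are stacked as features and a random forest regressor is trained against in situ TIN. The data here are synthetic placeholders and the hyperparameters are not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder training table: one row per in situ TIN sample, columns are the four
# sensitive input variables named in the study (values here are synthetic).
rng = np.random.default_rng(6)
n = 120
X = np.column_stack([
    rng.uniform(0.0, 0.2, n),    # single-band optical reflectance
    rng.uniform(-0.5, 0.5, n),   # normalized difference spectral index (NDSI)
    rng.uniform(-25, -10, n),    # HV backscatter (dB)
    rng.uniform(-25, -10, n),    # VH backscatter (dB)
])
tin = 0.5 + 2.0 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.1, n)  # synthetic target

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, tin)
print(model.feature_importances_)    # relative contribution of optical vs SAR inputs
```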
Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne
2012-01-01
AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h^-1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h^-1 (RSE 16.9%) and absorption rate constant 0.758 h^-1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC(0–t). CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
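The reported covariate model for clearance is easy to evaluate directly; the sketch below does so and, purely as a generic linear-pharmacokinetics identity, relates a dose to the corresponding steady-state AUC. The example weights and dose are illustrative and this is not a dosing recommendation.

```python
def abacavir_cl(weight_kg, cl_typical=13.4, ref_weight=12.0, exponent=1.14):
    """Apparent oral clearance (L/h) from the reported covariate model
    CL = 13.4 * (weight/12)^1.14."""
    return cl_typical * (weight_kg / ref_weight) ** exponent

# Steady-state AUC over a dosing interval follows dose/CL for a linear drug;
# the dose below is an arbitrary illustrative value.
for w in (8.0, 12.0, 20.0):
    cl = abacavir_cl(w)
    print(f"weight {w:4.1f} kg  CL {cl:5.2f} L/h  AUC for a 100 mg dose {100.0 / cl:5.2f} mg*h/L")
```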
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
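One common way to implement an orthogonal, sensitivity-based local identifiability check is a column-pivoted QR decomposition of the parameter sensitivity matrix, sketched below on a synthetic matrix. This is an illustration of the general idea only, not the authors' algorithm or its integration into the unscented Kalman filter.

```python
import numpy as np
from scipy.linalg import qr

def rank_identifiable(S, tol=1e-3):
    """Rank parameters by how much independent information the sensitivity
    matrix S (n_observations x n_parameters) carries about each of them."""
    _, r, piv = qr(S, pivoting=True)           # column-pivoted QR
    diag = np.abs(np.diag(r))
    identifiable = piv[diag / diag[0] > tol]   # columns with non-negligible pivots
    return piv, identifiable

# Synthetic sensitivity matrix: parameter 2 is a near copy of parameter 0,
# so it should be flagged as poorly identifiable.
rng = np.random.default_rng(7)
S = rng.normal(size=(50, 4))
S[:, 2] = S[:, 0] + 1e-6 * rng.normal(size=50)
order, identifiable = rank_identifiable(S)
print("ranking:", order, "identifiable:", identifiable)
```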
Czech results at criticality dosimetry intercomparison 2002.
Frantisek, Spurný; Jaroslav, Trousil
2004-01-01
Two criticality dosimetry systems were tested by Czech participants during the intercomparison held in Valduc, France, in June 2002. The first consisted of thermoluminescent detectors (TLDs; Al-P glasses) and Si diodes used as passive neutron dosemeters. The second examined to what extent the individual dosemeters used in the Czech routine personal dosimetry service can give a reliable estimate of criticality accident exposure. It was found that the first system furnishes quite reliable estimates of accidental doses. For the routine individual dosimetry system, no important problems were encountered in the case of photon dosemeters (TLDs, film badge). For etched track detectors in contact with the 232Th or 235U-Al alloy, the track density saturation of the spark counting method limits the upper dose to approximately 1 Gy for neutrons with energy >1 MeV.
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
Sensitivity Analysis of Down Woody Material Data Processing Routines
Christopher W. Woodall; Duncan C. Lutes
2005-01-01
Weight per unit area (load) estimates of Down Woody Material (DWM) are the most common requests by users of the USDA Forest Service's Forest Inventory and Analysis (FIA) program's DWM inventory. Estimation of DWM loads requires the uniform compilation of DWM transect data for the entire United States. DWM weights may vary by species, level of decay, woody...
Education, Occupational Class, and Unemployment in the Regions of the United Kingdom
ERIC Educational Resources Information Center
Borooah, Vani K.; Mangan, John
2008-01-01
Students in many countries face increased costs of education in the form of direct payments and future tax liabilities and, as a consequence, their education decisions have taken on a greater financial dimension. This has refocused attention on obtaining meaningful estimates of the return to education. Routinely these returns are estimated as the…
Improving Forecast Skill by Assimilation of AIRS Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel; Reale, Oreste
2010-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU-A are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The AIRS Version 5 retrieval algorithm is now being used operationally at the Goddard DISC in the routine generation of geophysical parameters derived from AIRS/AMSU data. A major innovation in Version 5 is the ability to generate case-by-case level-by-level error estimates delta T(p) for retrieved quantities and the use of these error estimates for Quality Control. We conducted a number of data assimilation experiments using the NASA GEOS-5 Data Assimilation System as a step toward finding an optimum balance of spatial coverage and sounding accuracy with regard to improving forecast skill. The model was run at a horizontal resolution of 0.5 deg. latitude X 0.67 deg longitude with 72 vertical levels. These experiments were run during four different seasons, each using a different year. The AIRS temperature profiles were presented to the GEOS-5 analysis as rawinsonde profiles, and the profile error estimates delta T(p) were used as the uncertainty for each measurement in the data assimilation process. We compared forecasts generated from analyses produced by assimilating AIRS temperature profiles with three different sets of Quality Control thresholds: Standard, Medium, and Tight. Assimilation of Quality Controlled AIRS temperature profiles significantly improves 5-7 day forecast skill compared to that obtained without the benefit of AIRS data in all of the cases studied. In addition, assimilation of Quality Controlled AIRS temperature soundings performs better than assimilation of AIRS observed radiances. Based on the experiments shown, Tight Quality Control of AIRS temperature profiles performs best on average from the perspective of improving global 7 day forecast skill.
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1 norm-based and L2 norm-based constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but using the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994
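For contrast with the Bayesian hyperparameter learning proposed above, the sketch below shows the penalized-regression Elastic Net baseline on a toy underdetermined linear problem standing in for an EEG forward model, with cross-validation choosing the regularization strength. The lead-field matrix and source configuration are synthetic placeholders, and this is not the authors' Structured Sparse Bayesian Learning algorithm.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Toy ill-posed linear problem v = L @ j + noise, standing in for an EEG forward model:
# few sensors, many sources, and a small smooth active patch in the true source vector.
rng = np.random.default_rng(8)
n_sensors, n_sources = 30, 200
L = rng.normal(size=(n_sensors, n_sources))          # placeholder "lead field"
j_true = np.zeros(n_sources)
j_true[40:45] = 1.0
v = L @ j_true + 0.05 * rng.normal(size=n_sensors)

# Penalized-regression ENET baseline; cross-validation picks the regularization
# strength instead of the Empirical Bayes hyperparameter learning proposed in the paper.
enet = ElasticNetCV(l1_ratio=0.5, cv=5, fit_intercept=False, max_iter=50000)
enet.fit(L, v)
print("recovered nonzeros:", np.flatnonzero(np.abs(enet.coef_) > 1e-3))
```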
Lv, Jun; Huang, Wenjian; Zhang, Jue; Wang, Xiaoying
2018-06-01
In free-breathing multi-b-value diffusion-weighted imaging (DWI), a series of images typically requires several minutes to collect. During respiration the kidney is routinely displaced and may also undergo deformation. These respiratory motion effects generate artifacts, which are the main sources of error in the quantification of intravoxel incoherent motion (IVIM) derived parameters. This work proposes a fully automated framework that combines kidney segmentation with image registration to improve registration accuracy. Ten healthy subjects were recruited to participate in this experiment. For the segmentation, U-net was adopted to acquire the kidney's contour. The segmented kidney then served as a region of interest (ROI) for the registration method, known as pyramidal Lucas-Kanade. Our proposed framework confines the solution range to the kidney region, thus increasing the accuracy of the pyramidal Lucas-Kanade registration. To demonstrate the feasibility of our presented framework, eight regions of interest were selected in the cortex and medulla, and data stability was estimated by comparing the normalized root-mean-square error (NRMSE) values of the fitted data from the bi-exponential intravoxel incoherent motion model pre- and post-registration. The results show that the NRMSE was significantly lower after registration both in the cortex (p < 0.05) and medulla (p < 0.01) during free-breathing measurements. In addition, expert visual scoring of the derived apparent diffusion coefficient (ADC), f, D and D* maps indicated there were significant improvements in the alignment of the kidney in the post-registered images. The proposed framework can effectively reduce the motion artifacts of misaligned multi-b-value DWIs and the inaccuracies of the ADC, f, D and D* estimations. Advances in knowledge: This study demonstrates the feasibility of our proposed fully automated framework combining U-net based segmentation and the pyramidal Lucas-Kanade registration method for improving the alignment of multi-b-value diffusion-weighted MRIs and reducing the inaccuracy of parameter estimation during free-breathing.
Utilizing Infant Cry Acoustics to Determine Gestational Age.
Sahin, Mustafa; Sahin, Suzan; Sari, Fatma N; Tatar, Emel C; Uras, Nurdan; Oguz, Suna S; Korkmaz, Mehmet H
2017-07-01
The date of the last menstrual period and ultrasonography are the most commonly used methods to determine gestational age (GA). However, if these data are not clear, some scoring systems performed after birth can be used. The New Ballard Score (NBS) is a commonly used method for estimating GA. Cry sound may reflect the developmental integrity of the infant. The aim of this study was to evaluate the connection between the infants' GA and some acoustic parameters of the infant cry. A prospective single-blind study was carried out in which medically stable infants without any congenital craniofacial anomalies were evaluated. During routine blood sampling, cry sounds were recorded and acoustic analysis was performed. Step-by-step multiple linear regression analysis was performed. Data from 116 infants (57 female, 59 male) with known GA (34.6 ± 3.8 weeks) and Apgar scores higher than 5 were evaluated. The real GA was significantly and well correlated with the estimated GA according to the NBS, F0, Int, Jitt, and latency parameters. The stepwise linear regression model obtained was formulated as GA = 31.169 - (0.020 × F0) + (0.286 × GA according to NBS) - (0.003 × Latency) + (0.108 × Int) - (0.367 × Jitt). The real GA could be determined with a ratio of 91.7% using this model. We determined that after addition of F0, Int, Jitt, and latency to the NBS, the power of GA estimation would be increased. This simple formula can be used to determine GA in clinical practice, but the validity of such prediction formulas needs to be further tested. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
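The reported regression formula can be evaluated directly, as sketched below. The input units (F0 in Hz, latency in ms, intensity in dB, jitter in %) are assumptions, since the abstract does not state them, and the example values are illustrative only; this is not intended for clinical use.

```python
def estimate_ga(f0, nbs_ga, latency, intensity, jitter):
    """Gestational age (weeks) from the reported stepwise regression formula.
    Input units are assumed (F0 in Hz, latency in ms, intensity in dB, jitter in %)."""
    return (31.169 - 0.020 * f0 + 0.286 * nbs_ga
            - 0.003 * latency + 0.108 * intensity - 0.367 * jitter)

# Illustrative values only; not taken from the study's data.
print(estimate_ga(f0=450.0, nbs_ga=35.0, latency=1200.0, intensity=70.0, jitter=1.5))
```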
Evapotranspiration measurement and modeling without fitting parameters in high-altitude grasslands
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Previati, Maurizio; Canone, Davide; Dematteis, Niccolò; Boetti, Marco; Balocco, Jacopo; Bechis, Stefano
2016-04-01
Mountain grasslands are important, in part because one sixth of the world's population lives in watersheds dominated by snowmelt. Grasslands also provide food to both domestic and wild animals. Global warming will probably accelerate the hydrological cycle and increase the drought risk. The combination of measurements, modeling and remote sensing can provide knowledge of such remote areas (e.g.: Brocca et al., 2013). A better knowledge of the water balance can also allow irrigation to be optimized (e.g.: Canone et al., 2015). This work builds a model of water balance in mountain grasslands ranging between 1500 and 2300 meters asl. The main input is the Digital Terrain Model, which is more reliable in grasslands than in either woodland or the built environment. It drives the spatial variability of shortwave solar radiation. The other atmospheric forcings, namely air temperature, wind and longwave radiation, are more problematic to estimate. Ad hoc routines have been written to interpolate the hourly meteorological variability in space. The soil hydraulic properties are less variable than in the plains, but the estimation of soil depth is still an open issue. The vertical variability of the soil has been modeled taking into account the main processes: soil evaporation, root uptake, and fractured bedrock percolation. The time series of latent heat flux and soil moisture have been compared with the data measured at an eddy covariance station. The results are very good, given that the model has no fitting parameters. The spatial variability results have been compared with the results of a model based on Landsat 7 and 8 data, applied over an area of about 200 square kilometers. The spatial patterns of the two models are in good agreement. Brocca et al. (2013). "Soil moisture estimation in alpine catchments through modelling and satellite observations". Vadose Zone Journal, 12(3), 10 pp. Canone et al. (2015). "Field measurements based model for surface irrigation efficiency assessment". Agric. Water Manag., 156(1) pp. 30-42
Impact of magnitude uncertainties on seismic catalogue properties
NASA Astrophysics Data System (ADS)
Leptokaropoulos, K. M.; Adamaki, A. K.; Roberts, R. G.; Gkarlaouni, C. G.; Paradisopoulou, P. M.
2018-05-01
Catalogue-based studies are of central importance in seismological research, to investigate the temporal, spatial and size distribution of earthquakes in specified study areas. Methods for estimating the fundamental catalogue parameters like the Gutenberg-Richter (G-R) b-value and the completeness magnitude (Mc) are well established and routinely applied. However, the magnitudes reported in seismicity catalogues contain measurement uncertainties which may significantly distort the estimation of the derived parameters. In this study, we use numerical simulations of synthetic data sets to assess the reliability of different methods for determining the b-value and Mc, assuming the validity of the G-R law. After contaminating the synthetic catalogues with Gaussian noise (with selected standard deviations), the analysis is performed for numerous data sets of different sample size (N). The noise introduced to the data generally leads to a systematic overestimation of magnitudes close to and above Mc. This fact causes an increase of the average number of events above Mc, which in turn leads to an apparent decrease of the b-value. This may result in a significant overestimation of seismicity rate even well above the actual completeness level. The b-value can in general be reliably estimated even for relatively small data sets (N < 1000) when only magnitudes higher than the actual completeness level are used. Nevertheless, a correction of the total number of events belonging to each magnitude class (i.e. 0.1 unit) should be considered to deal with the magnitude uncertainty effect. Because magnitude uncertainties (here in the form of Gaussian noise) are inevitable in all instrumental catalogues, this finding is fundamental for seismicity rate and seismic hazard assessment analyses. Also important is that for some data analyses significant bias cannot necessarily be avoided by choosing a high Mc value for analysis. In such cases, there may be a risk of severe miscalculation of seismicity rate regardless of the selected magnitude threshold, unless possible bias is properly assessed.
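A minimal version of the numerical experiment described above is sketched below: Gutenberg-Richter magnitudes are simulated, perturbed with Gaussian noise, and the b-value is estimated above a chosen completeness magnitude with the standard Aki/Utsu maximum-likelihood formula (the half-bin correction assumes 0.1-unit magnitude bins). The catalogue size, noise level, and threshold are assumptions, not the study's settings.

```python
import numpy as np

def b_value_ml(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= mc,
    with the half-bin correction for dm-wide magnitude bins."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

rng = np.random.default_rng(9)
b_true, m_min, n = 1.0, 0.5, 50000
beta = b_true * np.log(10.0)
mags = np.round(m_min + rng.exponential(1.0 / beta, size=n), 1)   # G-R magnitudes, 0.1 binning
noisy = np.round(mags + rng.normal(0.0, 0.2, size=n), 1)          # Gaussian magnitude errors

mc = 2.0   # assumed completeness threshold, well above m_min
print("b without noise:", round(b_value_ml(mags, mc), 3))
print("b with noise:   ", round(b_value_ml(noisy, mc), 3))
```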
Genetic selection for temperament traits in dairy and beef cattle.
Haskell, Marie J; Simm, Geoff; Turner, Simon P
2014-01-01
Animal temperament can be defined as a response to environmental or social stimuli. There are a number of temperament traits in cattle that contribute to their welfare, including their response to handling or milking, response to challenge such as human approach or intervention at calving, and response to conspecifics. In a number of these areas, the genetic basis of the trait has been studied. Heritabilities have been estimated and in some cases quantitative trait loci (QTL) have been identified. The variation is sometimes considerable and moderate heritabilities have been found for the major handling temperament traits, making them amenable to selection. Studies have also investigated the correlations between temperament and other traits, such as productivity and meat quality. Despite this, there are relatively few examples of temperament traits being used in selection programmes. Most often, animals are screened for aggression or excessive fear during handling or milking, with extreme animals being culled, or EBVs for temperament are estimated, but these traits are not routinely included in selection indices, despite there being economic, welfare and human safety drivers for their use. There may be a number of constraints and barriers. For some traits and breeds, there may be difficulties in collecting behavioral data on sufficiently large populations of animals to estimate genetic parameters. Most selection indices require estimates of economic values, and it is often difficult to assign an economic value to a temperament trait. The effects of selection primarily for productivity traits on temperament and welfare are discussed. Future opportunities include automated data collection methods and the wider use of genomic information in selection.
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW*P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the system property of parametric identifiability introduced in the paper. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.
Data-Adaptive Bias-Reduced Doubly Robust Estimation.
Vermeulen, Karel; Vansteelandt, Stijn
2016-05-01
Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
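As a concrete illustration of the "two nuisance working models" idea, the following sketch implements a plain augmented inverse-probability-weighted (AIPW) doubly robust estimator of a mean with outcomes missing at random. It is not the bias-reduced or data-adaptive procedure of the abstract; the data-generating process, model choices and scikit-learn estimators are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_mean(X, y, observed):
    """Doubly robust (AIPW) estimate of E[Y] when y is missing at random given X.
    One nuisance model is the missingness (propensity) model, the other the outcome
    regression; the estimator is consistent if either one is correctly specified."""
    # Propensity of being observed, fitted on all units.
    ps_model = LogisticRegression(max_iter=1000).fit(X, observed.astype(int))
    pi = ps_model.predict_proba(X)[:, 1]
    # Outcome regression fitted on observed units only, predicted everywhere.
    out_model = LinearRegression().fit(X[observed], y[observed])
    m = out_model.predict(X)
    y_filled = np.where(observed, y, 0.0)
    # AIPW estimating equation: outcome prediction plus IPW-corrected residual.
    return np.mean(m + observed * (y_filled - m) / pi)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 5000
    X = rng.normal(size=(n, 2))
    y = 1.0 + X @ np.array([0.5, -0.3]) + rng.normal(size=n)
    p_obs = 1.0 / (1.0 + np.exp(-(0.2 + X[:, 0])))     # missingness depends on X
    observed = rng.uniform(size=n) < p_obs
    print("AIPW estimate of E[Y]   :", aipw_mean(X, y, observed))
    print("naive complete-case mean:", y[observed].mean())
```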
Raster graphics display library
NASA Technical Reports Server (NTRS)
Grimsrud, Anders; Stephenson, Michael B.
1987-01-01
The Raster Graphics Display Library (RGDL) is a high level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed and the use of each variable within each common block is discussed. A reference on the include files necessary to compile the display library is included; each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general purpose computer graphics display system that uses RGDL software, is also included.
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Also, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable and better results were obtained with the removal of these parameters from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters that need to be estimated. Further, PIA improved the model results, showing it to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
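The identifiability analysis below is a generic stand-in for the PIA mentioned above, assuming only that local identifiability can be screened through the singular values of a finite-difference sensitivity matrix; the toy model, threshold and flagging rule are illustrative choices, not the authors' procedure.

```python
import numpy as np

def sensitivity_matrix(simulate, params, eps=1e-6):
    """Finite-difference sensitivities of the model outputs with respect to each
    parameter, evaluated at the nominal parameter vector."""
    y0 = simulate(params)
    S = np.zeros((y0.size, params.size))
    for j in range(params.size):
        dp = params.copy()
        step = eps * max(1.0, abs(params[j]))
        dp[j] += step
        S[:, j] = (simulate(dp) - y0) / step
    return S

def flag_poorly_identifiable(simulate, params, names, tol=1e-6):
    """Use singular values of the column-scaled sensitivity matrix to flag parameter
    directions that barely influence the outputs, i.e. candidates for removal from
    the estimation, in the spirit of an identifiability analysis."""
    S = sensitivity_matrix(simulate, params)
    S = S / np.maximum(np.linalg.norm(S, axis=0), 1e-30)   # scale columns to unit norm
    _, s, Vt = np.linalg.svd(S, full_matrices=False)
    weak = s < tol * s.max()
    flagged = set()
    for k in np.where(weak)[0]:
        # Parameters loading heavily on a weak direction are poorly identifiable.
        flagged.update(np.array(names)[np.abs(Vt[k]) > 0.5].tolist())
    return sorted(flagged), s

if __name__ == "__main__":
    # Toy model with a structurally non-identifiable product: only a*b and c matter.
    t = np.linspace(0.0, 5.0, 50)
    def simulate(p):
        a, b, c = p
        return (a * b) * np.exp(-c * t)
    flagged, sv = flag_poorly_identifiable(simulate, np.array([2.0, 1.5, 0.4]), ["a", "b", "c"])
    print("singular values      :", sv)
    print("poorly identifiable  :", flagged)
```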
2012-03-30
hygiene activities of chewing, toothbrushing, and flossing to dental treatment procedures. Of particular significance with regard to bacteremia...from frequent exposure to transient bacteremias associated with daily routine/oral hygiene activities than from bacteremias induced by dental...treatment procedures. It has been estimated that daily routine/oral hygiene activities may cause a bacteremia for 90 hours per month whereas a dental
2018-04-30
operating issue was that the sludge pump routinely clogs. The system operator, Mr. Vick Hasie, was available to answer questions. The ERDC team also...reason for this is that the tank may contain sludge buildup, and at seven feet, entrainment of this sludge could occur. The ERDC team did not review...this time. However, this is cumbersome and potentially dangerous as a routine method since the equalization tank is very high (estimated around 30
Damping of short gravity-capillary waves due to oil derivatives film on the water surface
NASA Astrophysics Data System (ADS)
Sergievskaya, Irina; Ermakov, Stanislav; Lazareva, Tatyana
2016-10-01
In this paper new results of laboratory studies of the damping of gravity-capillary waves on a water surface covered by kerosene are presented and compared with our previous analysis of the characteristics of crude oil and diesel fuel films. Investigations of kerosene films were carried out over a wide range of film thicknesses (from hundredths of a millimetre to a few millimetres) and over a wide range of surface wave frequencies (from 10 to 27 Hz). The selected frequency range corresponds to the operating wavelengths of microwave, X- to Ka-band radars typically used for ocean remote sensing. The studied range of film thickness covers typical thicknesses of routine spills in the ocean. It is found that the characteristics of waves measured in the presence of oil derivative films differ from those for crude oil films, in particular because the volume viscosities of oil derivatives and crude oil are strongly different. To retrieve the parameters of kerosene films from the experimental data, the surface wave damping was analyzed theoretically within the framework of a two-layer fluid model. The films are assumed to be soluble, so the elasticity on the upper and lower boundaries is considered as a function of wave frequency. Physical parameters of the oil derivative films were estimated by tuning the film parameters to fit theory to experiment. Comparison of wave damping due to crude oil, kerosene and diesel fuel films has shown some capability for distinguishing between oil films by remote sensing of short surface waves.
Armando García-Miranda, L; Contreras, I; Estrada, J A
2014-04-01
To determine reference values for full blood count parameters in a population of children 8 to 12 years old, living at an altitude of 2760 m above sea level. Our sample consisted of 102 individuals on whom a full blood count was performed. The parameters included: total number of red blood cells, platelets, white cells, and a differential count (millions/μl and %) of neutrophils, lymphocytes, monocytes, eosinophils and basophils. Additionally, we obtained values for hemoglobin, hematocrit, mean corpuscular volume, mean corpuscular hemoglobin, concentration of corpuscular hemoglobin and red blood cell distribution width. The results were statistically analyzed with a non-parametric test, to divide the sample into quartiles and obtain the lower and upper limits for our intervals. Moreover, the values for the intervals obtained from this analysis were compared with intervals estimated as ± 2 standard deviations above and below our mean values. Our results showed significant differences compared to normal interval values reported for the adult Mexican population in most of the parameters studied. The full blood count is an important laboratory test used routinely for the initial assessment of a patient. Values of full blood counts in healthy individuals vary according to gender, age and geographic location; therefore, each population should have its own reference values. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
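A minimal sketch of the two interval constructions mentioned above, assuming quartile-based limits for the non-parametric interval and approximate normality for the mean ± 2 SD interval; the haemoglobin values are simulated, not the study data.

```python
import numpy as np

def reference_intervals(values):
    """Two simple ways of summarising a reference interval for one blood-count
    parameter: a non-parametric interval from sample quartiles (as in the abstract)
    and the classical mean +/- 2 standard deviations. The choice of quartiles
    versus other percentiles is an assumption of this sketch."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])          # non-parametric limits
    mean, sd = values.mean(), values.std(ddof=1)
    parametric = (mean - 2.0 * sd, mean + 2.0 * sd)   # assumes approximate normality
    return {"nonparametric_Q1_Q3": (q1, q3), "mean_pm_2sd": parametric}

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical haemoglobin values (g/dL) for 102 children; illustrative only.
    hb = rng.normal(loc=14.5, scale=1.1, size=102)
    for name, (lo, hi) in reference_intervals(hb).items():
        print(f"{name}: {lo:.2f} - {hi:.2f} g/dL")
```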
Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath
NASA Astrophysics Data System (ADS)
Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping
2018-04-01
Estimation of the phase and weight parameters of a two-state system in a squeezed bath by calculating quantum Fisher information is investigated. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing the proper values of the phases and the squeezing parameter. These results can be provided as an analysis reference for the practical application of the parameter estimation in a squeezed bath.
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, coverage of individual and joint confidence intervals for the parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
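The following is a minimal Monte Carlo sketch, not the authors' simulation: a one-compartment IV bolus model with log-normal inter-animal variability and proportional residual error, a destructive sampling design, a naive pooled fit via scipy's curve_fit, and mean percent prediction error as the efficiency summary. All parameter values, sample sizes and designs are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

DOSE = 10.0                      # mg (assumed)
CL_TYP, V_TYP = 1.0, 5.0         # typical clearance (L/h) and volume (L), assumed
OMEGA = 0.2                      # ~20 % inter-animal variability (log-normal)
SIGMA = 0.1                      # ~10 % proportional residual error

def conc(t, cl, v):
    """One-compartment concentration after an IV bolus."""
    return (DOSE / v) * np.exp(-(cl / v) * t)

def simulate_study(times, n_per_time, rng):
    """Destructive design: n_per_time animals sampled once at each time point."""
    t_all, c_all = [], []
    for t in times:
        cl = CL_TYP * np.exp(rng.normal(0.0, OMEGA, n_per_time))
        v = V_TYP * np.exp(rng.normal(0.0, OMEGA, n_per_time))
        c = conc(t, cl, v) * (1.0 + rng.normal(0.0, SIGMA, n_per_time))
        t_all.append(np.full(n_per_time, t))
        c_all.append(c)
    return np.concatenate(t_all), np.concatenate(c_all)

def mean_percent_prediction_error(times, n_per_time=8, n_rep=200, seed=3):
    """Naive pooled fit per replicate; %PE of CL and V averaged over replicates."""
    rng = np.random.default_rng(seed)
    pe_cl, pe_v = [], []
    for _ in range(n_rep):
        t, c = simulate_study(times, n_per_time, rng)
        (cl_hat, v_hat), _ = curve_fit(conc, t, c, p0=[0.5, 3.0], maxfev=5000)
        pe_cl.append(100.0 * (cl_hat - CL_TYP) / CL_TYP)
        pe_v.append(100.0 * (v_hat - V_TYP) / V_TYP)
    return np.mean(pe_cl), np.mean(pe_v)

if __name__ == "__main__":
    for design in ([0.25, 2.0, 8.0], [0.25, 1.0, 4.0, 8.0]):   # 3- vs 4-time-point designs
        print(design, "mean %PE (CL, V):", mean_percent_prediction_error(design))
```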
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Real-Time GPS Monitoring for Earthquake Rapid Assessment in the San Francisco Bay Area
NASA Astrophysics Data System (ADS)
Guillemot, C.; Langbein, J. O.; Murray, J. R.
2012-12-01
The U.S. Geological Survey Earthquake Science Center has deployed a network of eight real-time Global Positioning System (GPS) stations in the San Francisco Bay area and is implementing software applications to continuously evaluate the status of the deformation within the network. Real-time monitoring of the station positions is expected to provide valuable information for rapidly estimating source parameters should a large earthquake occur in the San Francisco Bay area. Because earthquake response applications require robust data access, as a first step we have developed a suite of web-based applications which are now routinely used to monitor the network's operational status and data streaming performance. The web tools provide continuously updated displays of important telemetry parameters such as data latency and receive rates, as well as source voltage and temperature information within each instrument enclosure. Automated software on the backend uses the streaming performance data to mitigate the impact of outages, radio interference and bandwidth congestion on deformation monitoring operations. A separate set of software applications manages the recovery of lost data due to faulty communication links. Displacement estimates are computed in real-time for various combinations of USGS, Plate Boundary Observatory (PBO) and Bay Area Regional Deformation (BARD) network stations. We are currently comparing results from two software packages (one commercial and one open-source) used to process 1-Hz data on the fly and produce estimates of differential positions. The continuous monitoring of telemetry makes it possible to tune the network to minimize the impact of transient interruptions of the data flow, from one or more stations, on the estimated positions. Ongoing work is focused on using data streaming performance history to optimize the quality of the position, reduce drift and outliers by switching to the best set of stations within the network, and automatically select the "next best" station to use as reference. We are also working towards minimizing the loss of streamed data during concurrent data downloads by improving file management on the GPS receivers.
Impact and Programmatic Implications of Routine Viral Load Monitoring in Swaziland
Parker, Lucy Anne; Azih, Charles; Okello, Velephi; Maphalala, Gugu; Jouquet, Guillaume; Kerschberger, Bernhard; Mekeidje, Calorine; Cyr, Joanne; Mafikudze, Arnold; Han, Win; Lujan, Johnny; Teck, Roger; Antierens, Annick; van Griensven, Johan; Reid, Tony
2014-01-01
Objective: To assess the programmatic quality (coverage of testing, counseling, and retesting), cost, and outcomes (viral suppression, treatment decisions) of routine viral load (VL) monitoring in Swaziland. Design: Retrospective cohort study of patients undergoing routine VL monitoring in Swaziland (October 1, 2012 to March 31, 2013). Results: Of 5563 patients eligible for routine VL monitoring during the study period, an estimated 4767 patients (86%) underwent testing that year. Of 288 patients with detectable VL, 210 (73%) underwent enhanced adherence counseling and 202 (70%) had a follow-up VL within 6 months. Testing coverage was slightly lower in children, but coverage of retesting was similar between age groups and sexes. Of those with a follow-up test, 126 (62%) showed viral suppression. The remaining 78 patients had World Health Organization–defined virologic failure; 41 (53%) were referred by the doctor for more adherence counseling, and 13 (15%) were changed to second-line therapy, equating to an estimated rate of 1.2 switches per 100 patient-years. Twenty-four patients (32%) were transferred out, lost to follow-up, or not reviewed by a doctor. The "fully loaded" cost of VL monitoring was $35 per patient-year. Conclusions: Achieving good quality VL monitoring is feasible and affordable in resource-limited settings, although close supervision is needed to ensure good coverage of testing and counseling. The low rate of switch to second-line therapy in patients with World Health Organization–defined virologic failure seems to reflect clinician suspicion of ongoing adherence problems. In our study, the main impact of routine VL monitoring was reinforcing adherence rather than increasing use of second-line therapy. PMID:24872139
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
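For the one-compartment IV bolus case the conversion from non-compartmental variables to model parameters is simple algebra; the sketch below shows only that textbook conversion (CL = Dose/AUC, k = ln 2 / t1/2, V = CL/k) with assumed example numbers. The two-compartment conversion described in the abstract requires the iterative Solver step and is not reproduced here.

```python
import math

def one_compartment_from_nca(dose, auc_inf, t_half):
    """Convert non-compartmental variables (AUC to infinity and terminal half-life)
    into one-compartment model parameters after an IV bolus. Textbook algebra only,
    not the 'back analysis' spreadsheet itself."""
    cl = dose / auc_inf              # clearance:  CL = Dose / AUC
    k = math.log(2.0) / t_half       # elimination rate constant
    v = cl / k                       # volume:     V = CL / k
    return {"CL": cl, "k": k, "V": v}

if __name__ == "__main__":
    # Illustrative values: 100 mg IV bolus, AUC = 50 mg*h/L, half-life = 4 h.
    print(one_compartment_from_nca(dose=100.0, auc_inf=50.0, t_half=4.0))
```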
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Kashima, Saori; Yorifuji, Takashi; Sawada, Norie; Nakaya, Tomoki; Eboshida, Akira
2018-08-01
Typically, land use regression (LUR) models have been developed using campaign monitoring data rather than routine monitoring data. However, the latter have advantages such as low cost and long-term coverage. Based on the idea that LUR models representing regional differences in air pollution and regional road structures are optimal, the objective of this study was to evaluate the validity of LUR models for nitrogen dioxide (NO2) based on routine and campaign monitoring data obtained from an urban area. We selected the city of Suita in Osaka (Japan). We built a model based on routine monitoring data obtained from all sites (routine-LUR-All), and a model based on campaign monitoring data (campaign-LUR) within the city. Models based on routine monitoring data obtained from background sites (routine-LUR-BS) and based on data obtained from roadside sites (routine-LUR-RS) were also built. The routine LUR models were based on monitoring networks across two prefectures (i.e., Osaka and Hyogo prefectures). We calculated the predictability of each model. We then compared the predicted NO2 concentrations from each model with measured annual average NO2 concentrations from evaluation sites. The routine-LUR-All and routine-LUR-BS models both predicted NO2 concentrations well: adjusted R2 = 0.68 and 0.76, respectively, and root mean square error = 3.4 and 2.1 ppb, respectively. The predictions from the routine-LUR-All model were highly correlated with the measured NO2 concentrations at evaluation sites. Although the predicted NO2 concentrations from each model were correlated, the LUR models based on routine networks, and particularly those based on all monitoring sites, provided better visual representations of the local road conditions in the city. The present study demonstrated that LUR models based on routine data could estimate local traffic-related air pollution in an urban area. The importance and usefulness of data from routine monitoring networks should be acknowledged. Copyright © 2018 Elsevier B.V. All rights reserved.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high-wing, general aviation airplane are examined. Estimates using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi steady state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
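The equation error method reduces to linear regression of a measured aerodynamic coefficient on states and controls. The sketch below illustrates that on synthetic pitching-moment data with made-up derivative values and noise; it is not the flight data or the output error (maximum likelihood) implementation from the report.

```python
import numpy as np

# Equation-error (linear regression) sketch: regress a "measured" pitching-moment
# coefficient on angle of attack, normalised pitch rate and elevator deflection to
# recover stability and control derivatives. True values and noise are illustrative.
rng = np.random.default_rng(4)
n = 500
alpha = rng.uniform(-0.10, 0.15, n)         # angle of attack, rad
q_hat = rng.uniform(-0.05, 0.05, n)         # nondimensional pitch rate
delta_e = rng.uniform(-0.12, 0.12, n)       # elevator deflection, rad
true = {"Cm0": 0.02, "Cm_alpha": -0.8, "Cm_q": -12.0, "Cm_de": -1.1}
cm = (true["Cm0"] + true["Cm_alpha"] * alpha + true["Cm_q"] * q_hat
      + true["Cm_de"] * delta_e + rng.normal(0.0, 0.002, n))

X = np.column_stack([np.ones(n), alpha, q_hat, delta_e])   # regressor matrix
theta, *_ = np.linalg.lstsq(X, cm, rcond=None)             # least-squares estimates
for name, est in zip(true, theta):
    print(f"{name}: true {true[name]:+.3f}, estimated {est:+.3f}")
```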
Adoption of routine telemedicine in Norway: the current picture
Zanaboni, Paolo; Knarvik, Undine; Wootton, Richard
2014-01-01
Background Telemedicine appears to be ready for wider adoption. Although existing research evidence is useful, the adoption of routine telemedicine in healthcare systems has been slow. Objective We conducted a study to explore the current use of routine telemedicine in Norway, at national, regional, and local levels, to provide objective and up-to-date information and to estimate the potential for wider adoption of telemedicine. Design A top-down approach was used to collect official data on the national use of telemedicine from the Norwegian Patient Register. A bottom-up approach was used to collect complementary information on the routine use of telemedicine through a survey conducted at the five largest publicly funded hospitals. Results Results show that routine telemedicine has been adopted in all health regions in Norway and in 68% of hospitals. Despite being widely adopted, the current level of use of telemedicine is low compared to the number of face-to-face visits. Examples of routine telemedicine can be found in several clinical specialties. Most services connect different hospitals in secondary care, and they are mostly delivered as teleconsultations via videoconference. Conclusions Routine telemedicine in Norway has been widely adopted, probably for geographical reasons, as in other settings. However, the level of use of telemedicine in Norway is rather low, and it has significant potential for further development as an alternative to face-to-face outpatient visits. This study is a first attempt to map routine telemedicine at regional, institutional, and clinical levels, and it provides useful information to understand the adoption of telemedicine in routine healthcare and to measure change in future updates. PMID:24433942
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. And for a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
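The ASA idea can be caricatured in a few lines: treat the ensemble spread at each grid point as a quality measure, keep only the best-constrained points, and average their posterior parameter values into one global value. The sketch below does exactly that on synthetic fields; the keep fraction, field sizes and noise model are assumptions, and the real algorithm operates inside an ensemble data assimilation cycle rather than on static fields.

```python
import numpy as np

def adaptive_spatial_average(posterior_param, ensemble_spread, keep_fraction=0.3):
    """Toy version of the adaptive-spatial-average idea: spatially varying posterior
    parameter estimates are screened with the ensemble spread as a quality criterion,
    and only the best-constrained grid points are averaged into a single globally
    uniform parameter value. The keep fraction is an illustrative assumption."""
    flat_param = posterior_param.ravel()
    flat_spread = ensemble_spread.ravel()
    n_keep = max(1, int(keep_fraction * flat_param.size))
    good = np.argsort(flat_spread)[:n_keep]      # smallest spread = best constrained
    return flat_param[good].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    nlat, nlon, true_value = 32, 64, 0.7
    spread = rng.uniform(0.05, 0.5, size=(nlat, nlon))         # ensemble spread field
    # Posterior estimates are noisier where the ensemble spread is large.
    estimates = true_value + rng.normal(0.0, 1.0, size=(nlat, nlon)) * spread
    print("plain spatial average   :", estimates.mean())
    print("adaptive spatial average:", adaptive_spatial_average(estimates, spread))
```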
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
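A much-simplified, static analog of the sensor-selection problem: for a linear measurement model with a prior on the health parameters, rank candidate sensor subsets by the trace of the posterior error covariance. This is only a toy illustration of "minimize the mean squared estimation error"; the NASA methodology uses a Kalman filter, an engine model and a tuning-parameter transformation that are not reproduced here, and all matrices below are random placeholders.

```python
import numpy as np
from itertools import combinations

def estimation_error_trace(H, R, P0):
    """Posterior error covariance trace for a linear measurement y = H x + v,
    v ~ N(0, R), with prior covariance P0 on the health parameters x
    (a single static Bayesian/least-squares update, not the full engine filter)."""
    P_post = np.linalg.inv(np.linalg.inv(P0) + H.T @ np.linalg.inv(R) @ H)
    return np.trace(P_post)

def best_sensor_subset(H_all, R_all, P0, n_select):
    """Exhaustively rank subsets of candidate sensors by the resulting
    mean-squared estimation error (trace of the posterior covariance)."""
    best = None
    for idx in combinations(range(H_all.shape[0]), n_select):
        idx = list(idx)
        score = estimation_error_trace(H_all[idx], R_all[np.ix_(idx, idx)], P0)
        if best is None or score < best[1]:
            best = (idx, score)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n_health, n_candidates = 4, 7          # more health parameters than selected sensors
    H_all = rng.normal(size=(n_candidates, n_health))      # sensor sensitivities (toy)
    R_all = np.diag(rng.uniform(0.01, 0.1, n_candidates))  # sensor noise variances (toy)
    P0 = np.eye(n_health)                                  # prior on health parameters
    idx, score = best_sensor_subset(H_all, R_all, P0, n_select=3)
    print("best 3-sensor subset:", idx, " trace(P_post) =", round(score, 4))
```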
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
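Gradient matching can be illustrated compactly for an ODE (the PDE case in the study is substantially more involved): smooth the observations, differentiate the smoother, and pick parameters so the model right-hand side matches those derivative estimates, with no forward solve. The logistic-growth example, spline smoother and noise level below are assumptions for illustration, not the AGM/Gaussian-process machinery or the Richards equation itself.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

def rhs(theta, y):
    """Logistic growth dy/dt = r*y*(1 - y/K); theta = (r, K)."""
    r, K = theta
    return r * y * (1.0 - y / K)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 10.0, 40)
    r_true, K_true, y0 = 0.8, 5.0, 0.5
    y_true = K_true / (1.0 + (K_true / y0 - 1.0) * np.exp(-r_true * t))  # analytic solution
    y_obs = y_true + rng.normal(0.0, 0.05, t.size)

    spline = UnivariateSpline(t, y_obs, s=len(t) * 0.05**2)   # data smoother
    y_hat = spline(t)
    dy_hat = spline.derivative()(t)                           # estimated gradients

    # Match the ODE right-hand side to the estimated gradients in least squares,
    # without ever integrating the ODE forward.
    res = least_squares(lambda th: rhs(th, y_hat) - dy_hat, x0=[0.3, 2.0],
                        bounds=([1e-6, 1e-6], [np.inf, np.inf]))
    print("estimated (r, K):", res.x, " true:", (r_true, K_true))
```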
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
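A simplified sketch of the log-probability-regression idea for a single detection limit: regress the log of detected values on normal quantiles of plotting positions, impute the censored observations from the fitted line, and summarise the completed sample. The plotting-position formula, single censoring level and the log-normal test data are assumptions of this sketch; the evaluated method also handles multiple censoring thresholds.

```python
import numpy as np
from scipy.stats import norm

def log_probability_regression(detected, n_censored):
    """Regression-on-order-statistics style estimate of the mean and standard
    deviation for a data set with one detection limit, in the spirit of the
    log-probability-regression method discussed above (not its exact details)."""
    detected = np.sort(np.asarray(detected, dtype=float))
    n = detected.size + n_censored
    # Plotting positions: censored values occupy the lowest ranks (single limit).
    ranks = np.arange(n_censored + 1, n + 1)
    pp = (ranks - 0.375) / (n + 0.25)                  # Blom plotting positions
    z = norm.ppf(pp)
    slope, intercept = np.polyfit(z, np.log(detected), 1)
    # Impute censored observations from the fitted line at the low-rank quantiles.
    z_cens = norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
    imputed = np.exp(intercept + slope * z_cens)
    full = np.concatenate([imputed, detected])
    return full.mean(), full.std(ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    conc = rng.lognormal(mean=0.0, sigma=0.8, size=60)   # synthetic concentrations
    limit = 0.5                                          # assumed detection limit
    detected = conc[conc >= limit]
    mean_hat, sd_hat = log_probability_regression(detected, int((conc < limit).sum()))
    print("estimated mean, sd  :", round(mean_hat, 3), round(sd_hat, 3))
    print("true-sample mean, sd:", round(conc.mean(), 3), round(conc.std(ddof=1), 3))
```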
Estimation of probability of failure for damage-tolerant aerospace structures
NASA Astrophysics Data System (ADS)
Halbert, Keith
The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. 
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
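A toy flight-by-flight Monte Carlo in the spirit of the explicit routine described above: Paris-law crack growth each flight, scheduled inspections with an assumed probability-of-detection curve, and crack reset on detection. Every constant, distribution and the per-flight form of the growth law are illustrative assumptions, not aircraft data or the dissertation's software.

```python
import numpy as np

rng = np.random.default_rng(9)

N_SIM = 20_000           # simulated component lives (vectorised over components)
N_FLIGHTS = 10_000       # flights per life
INSPECT_EVERY = 2_000    # flights between scheduled inspections
A_CRIT = 25.0            # critical crack size, mm (assumed)
K_COEF = 20.0            # stress-intensity scale per sqrt(mm) (assumed)
M = 3.0                  # Paris exponent (assumed)
c_i = 5e-8 * rng.lognormal(0.0, 0.5, N_SIM)   # per-flight Paris coefficient with scatter

def pod(a):
    """Assumed probability of detecting a crack of size a (mm) at an inspection."""
    return 1.0 / (1.0 + np.exp(-(a - 3.0)))

def new_cracks(n):
    """Initial / post-repair crack sizes, mm (assumed distribution)."""
    return np.clip(rng.normal(0.5, 0.1, n), 0.05, None)

a = new_cracks(N_SIM)
failed = np.zeros(N_SIM, dtype=bool)
repairs = 0
for flight in range(1, N_FLIGHTS + 1):
    grow = ~failed
    a[grow] += c_i[grow] * (K_COEF * np.sqrt(a[grow])) ** M   # crack growth this flight
    failed |= a >= A_CRIT
    if flight % INSPECT_EVERY == 0:
        detect = (~failed) & (rng.uniform(size=N_SIM) < pod(a))
        repairs += int(detect.sum())
        a[detect] = new_cracks(int(detect.sum()))             # detected cracks repaired

print("estimated P(failure within life):", failed.mean())
print("mean repairs per component life :", repairs / N_SIM)
```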
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for mean square errors (MSEs) when estimating relevant parameters with separable measurements based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Estimation of liquefaction-induced lateral spread from numerical modeling and its application
NASA Astrophysics Data System (ADS)
Meng, Xianhong
A noncoupled numerical procedure was developed using a scheme of pore water generation that causes shear modulus degradation and shear strength degradation resulting from earthquake cyclic motion. The designed Fast Lagrangian Analysis of Continua (FLAC) model procedure was tested using the liquefaction-induced lateral spread and ground response for the Wildlife and Kobe sites. Sixteen well-documented case histories of lateral spread were reviewed and modeled using the modeling procedure. The dynamic residual strength ratios were back-calculated by matching the predicted displacement with the measured lateral spread, or with the displacement predicted by the Youd et al. model. Statistical analysis of the modeling results and soil properties shows that the most significant parameters governing the residual strength of the liquefied soil are the SPT blow count, fines content and soil particle size of the lateral spread layer. A regression equation was developed to express the residual strength values with these soil properties. Overall, this research demonstrated that a calibrated numerical model can predict the first order effectiveness of liquefaction-induced lateral spread using relatively simple parameters obtained from routine geotechnical investigation. In addition, the model can be used to plan a soil improvement program for cases where liquefaction remediation is needed. This allows the model to be used for design purposes at bridge approaches structured on liquefiable materials.
Practical considerations for volumetric wear analysis of explanted hip arthroplasties.
Langton, D J; Sidaginamale, R P; Holland, J P; Deehan, D; Joyce, T J; Nargol, A V F; Meek, R D; Lord, J K
2014-01-01
Wear debris released from bearing surfaces has been shown to provoke negative immune responses in the recipient. Excessive wear has been linked to early failure of prostheses. Analysis using coordinate measuring machines (CMMs) can provide estimates of total volumetric material loss of explanted prostheses and can help to understand device failure. The accuracy of volumetric testing has been debated, with some investigators stating that only protocols involving hundreds of thousands of measurement points are sufficient. We looked to examine this assumption and to apply the findings to the clinical arena. We examined the effects on the calculated material loss from a ceramic femoral head when different CMM scanning parameters were used. Calculated wear volumes were compared with gold standard gravimetric tests in a blinded study. Various scanning parameters including point pitch, maximum point to point distance, the number of scanning contours or the total number of points had no clinically relevant effect on volumetric wear calculations. Gravimetric testing showed that material loss can be calculated to provide clinically relevant degrees of accuracy. Prosthetic surfaces can be analysed accurately and rapidly with currently available technologies. Given these results, we believe that routine analysis of explanted hip components would be a feasible and logical extension to National Joint Registries. Cite this article: Bone Joint Res 2014;3:60-8.
A combination of routine blood analytes predicts fitness decrement in elderly endurance athletes.
Haslacher, Helmuth; Ratzinger, Franz; Perkmann, Thomas; Batmyagmar, Delgerdalai; Nistler, Sonja; Scherzer, Thomas M; Ponocny-Seliger, Elisabeth; Pilger, Alexander; Gerner, Marlene; Scheichenberger, Vanessa; Kundi, Michael; Endler, Georg; Wagner, Oswald F; Winker, Robert
2017-01-01
Endurance sports are enjoying greater popularity, particularly among new target groups such as the elderly. Predictors of future physical capacities providing a basis for training adaptations are in high demand. We therefore aimed to estimate the future physical performance of elderly marathoners (runners/bicyclists) using a set of easily accessible standard laboratory parameters. To this end, 47 elderly marathon athletes underwent physical examinations including bicycle ergometry and a blood draw at baseline and after a three-year follow-up period. In order to compile a statistical model containing baseline laboratory results allowing prediction of follow-up ergometry performance, the cohort was subgrouped into a model training (n = 25) and a test sample (n = 22). The model containing significant predictors in univariate analysis (alanine aminotransferase, urea, folic acid, myeloperoxidase and total cholesterol) presented with high statistical significance and excellent goodness of fit (R2 = 0.789, ROC-AUC = 0.951±0.050) in the model training sample and was validated in the test sample (ROC-AUC = 0.786±0.098). Our results suggest that standard laboratory parameters could be particularly useful for predicting future physical capacity in elderly marathoners. It hence merits further research whether these conclusions can be translated to other disciplines or age groups.
Jolani, Shahab
2018-03-01
In health and medical sciences, multiple imputation (MI) is now becoming popular to obtain valid inferences in the presence of missing data. However, MI of clustered data such as multicenter studies and individual participant data meta-analysis requires advanced imputation routines that preserve the hierarchical structure of data. In clustered data, a specific challenge is the presence of systematically missing data, when a variable is completely missing in some clusters, and sporadically missing data, when it is partly missing in some clusters. Unfortunately, little is known about how to perform MI when both types of missing data occur simultaneously. We develop a new class of hierarchical imputation approach based on chained equations methodology that simultaneously imputes systematically and sporadically missing data while allowing for arbitrary patterns of missingness among them. Here, we use a random effect imputation model and adopt a simplification over fully Bayesian techniques such as Gibbs sampler to directly obtain draws of parameters within each step of the chained equations. We justify through theoretical arguments and extensive simulation studies that the proposed imputation methodology has good statistical properties in terms of bias and coverage rates of parameter estimates. An illustration is given in a case study with eight individual participant datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kukreti, B M; Sharma, G K
2012-05-01
Accurate and speedy estimation of ppm-range uranium and thorium in geological and rock samples is most useful for ongoing uranium investigations and the identification of favorable radioactive zones in the exploration field areas. In this study, with the existing 5 in. × 4 in. NaI(Tl) detector setup and the prevailing background and time constraints, an enhanced geometrical setup has been worked out to improve the minimum detection limits for the primordial radioelements K(40), U(238) and Th(232). This geometrical setup has been integrated with the newly introduced, digital signal processing based MCA system for the routine spectrometric analysis of low concentration rock samples. Stability performance during the long counting hours, for the digital signal processing MCA system and its predecessor NIM bin based MCA system, has been monitored using the concept of statistical process control. Monitored results, over a time span of a few months, have been quantified in terms of spectrometer parameters such as Compton stripping constants and channel sensitivities, used for evaluating primordial radioelement concentrations (K(40), U(238) and Th(232)) in geological samples. Results indicate stable dMCA performance, with a tendency toward higher relative variance about the mean, particularly for Compton stripping constants. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma hydroxy butyric acid, human insulin and C-peptide, creatinine, postmortal clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects and recovery have to be approached differently. Highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from that variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
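A rough frequentist caricature of the idea (the paper itself uses a Bayesian fit): Box-Cox transform the observed effect estimates, pool them with a standard DerSimonian-Laird random-effects fit, and report a back-transformed summary. Leaving the within-study variances untransformed is a crude shortcut flagged in the comments, and the effect estimates are assumed positive, as Box-Cox requires; none of this reproduces the authors' model.

```python
import numpy as np
from scipy.stats import boxcox

def dersimonian_laird(y, v):
    """Standard DerSimonian-Laird random-effects fit: pooled mean and between-study
    variance tau^2 for effect estimates y with within-study variances v."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * y) / np.sum(w_star), tau2

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    # Hypothetical positive, right-skewed effect estimates and within-study variances.
    k = 15
    y = rng.lognormal(mean=0.4, sigma=0.6, size=k)
    v = rng.uniform(0.01, 0.05, size=k)
    # Normalise the observed estimates with a Box-Cox transformation, then pool.
    y_bc, lam = boxcox(y)
    mu_bc, tau2_bc = dersimonian_laird(y_bc, v)   # crude: v left on the original scale
    # Back-transform the pooled value; a median-like summary, as suggested above.
    pooled = np.power(mu_bc * lam + 1.0, 1.0 / lam) if lam != 0 else np.exp(mu_bc)
    print(f"lambda = {lam:.2f}, pooled back-transformed summary = {pooled:.3f}")
    print("untransformed DL pooled mean =", round(dersimonian_laird(y, v)[0], 3))
```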
CAPRI: A Geometric Foundation for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
2006-01-01
CAPRI is a software building tool-kit that refers to two ideas: (1) a simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. A complete definition of the geometry and application programming interface can be found in the document CAPRI: Computational Analysis PRogramming Interface appended to this report. In summary the interface is subdivided into the following functional components: 1. Utility routines -- These routines include the initialization of CAPRI, loading CAD parts and querying the operational status as well as closing the system down. 2. Geometry data-base queries -- This group of functions allows all top level applications to figure out and get detailed information on any geometric component in the Volume definition. 3. Point queries -- These calls allow grid generators, or solvers doing node adaptation, to snap points directly onto geometric entities. 4. Calculated or geometrically derived queries -- These entry points calculate data from the geometry to aid in grid generation. 5. Boundary data routines -- This part of CAPRI allows general data to be attached to Boundaries so that the boundary conditions can be specified and stored within CAPRI's data-base. 6. Tag based routines -- This part of the API allows the specification of properties associated with either the Volume (material properties) or Boundary (surface properties) entities. 7. Geometry based interpolation routines -- This part of the API facilitates Multi-disciplinary coupling and allows zooming through Boundary Attachments. 8. Geometric creation and manipulation -- These calls facilitate constructing simple solid entities and perform the Boolean solid operations. Geometry constructed in this manner has the advantage that, if the data is kept consistent with the CAD package, a new design can be incorporated directly and is manufacturable. 9. Master Model access -- This addition to the API allows for the querying of the parameters and dimensions of the model. The feature tree is also exposed so it is easy to see where the parameters are applied. Calls exist to allow for the modification of the parameters and the suppression/unsuppression of nodes in the tree. Part regeneration is performed by a single API call and a new part becomes available within CAPRI (if the regeneration was successful). This is described in a separate document. Components 1-7 are considered the CAPRI base level reader.
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Transit Project Planning Guidance : Estimation of Transit Supply Parameters
DOT National Transportation Integrated Search
1984-04-01
This report discusses techniques applicable to the estimation of transit vehicle fleet requirements, vehicle-hours and vehicle-miles, and other related transit supply parameters. These parameters are used for estimating operating costs and certain ca...
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and their usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values while keeping the arterial resistance constant. This last value was obtained for each subject from the arterial flow, and was necessary in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line, time-domain estimation of these parameters.
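A minimal sketch of the Monte Carlo idea described above is given below, assuming invented flow and pressure waveforms and sampling ranges; it is not the authors' implementation or their in vivo data. Total arterial resistance is fixed from the data, and the remaining parameters are drawn at random until the pressure error is minimised.

```python
# Illustrative sketch only: Monte Carlo fit of a three-element Windkessel model
# (characteristic resistance Rc in series with a parallel Rp-C pair).  The flow
# and "measured" pressure waveforms and the sampling ranges below are invented;
# they are not the in vivo dog data used in the study.
import numpy as np

dt = 0.001                                              # s
t = np.arange(0.0, 0.8, dt)                             # one cardiac cycle
q = np.where(t < 0.3, 400.0 * np.sin(np.pi * t / 0.3), 0.0)   # aortic flow, ml/s

def windkessel_pressure(q, rc, rp, c, p0=80.0):
    """Euler integration of C dPc/dt = Q - Pc/Rp, with P = Pc + Rc*Q."""
    pc = np.empty_like(q)
    pc[0] = p0
    for i in range(1, len(q)):
        pc[i] = pc[i - 1] + dt * (q[i - 1] - pc[i - 1] / rp) / c
    return pc + rc * q

# Synthetic "measured" pressure generated from known parameters plus noise.
p_meas = windkessel_pressure(q, rc=0.05, rp=1.0, c=1.3) + np.random.normal(0.0, 1.0, t.size)

# Total arterial resistance fixed from the data (mean pressure / mean flow),
# mirroring the constraint described in the abstract.
r_total = p_meas.mean() / q.mean()

rng = np.random.default_rng(0)
best_err, best_params = np.inf, None
for _ in range(5000):                                   # Monte Carlo search
    rc = rng.uniform(0.01, 0.2)                         # mmHg*s/ml
    c = rng.uniform(0.5, 3.0)                           # ml/mmHg
    rp = r_total - rc                                   # peripheral resistance from the constraint
    if rp <= 0:
        continue
    err = np.sqrt(np.mean((windkessel_pressure(q, rc, rp, c) - p_meas) ** 2))
    if err < best_err:
        best_err, best_params = err, (rc, rp, c)

print(f"best RMSE {best_err:.2f} mmHg, (Rc, Rp, C) = {best_params}")
```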
SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.
Zi, Zhike; Klipp, Edda
2006-11-01
The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of models in the SBML format. It can estimate parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has the unique feature of supporting event definitions in the SBML model. SBML models can also be simulated in SBML-PET. The Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs, and the classic ODE solver package ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. The tool is available at http://sysbio.molgen.mpg.de/SBML-PET/; the website also contains detailed documentation for SBML-PET.
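The general workflow that SBML-PET automates can be illustrated with the short sketch below. It is not SBML-PET's API: the two-species model and the noisy "data" are invented, scipy's odeint (which wraps LSODA from the ODEPACK family mentioned above) stands in for the ODE solver, and differential evolution stands in for SRES.

```python
# Illustrative sketch only of the workflow SBML-PET automates: fitting ODE model
# parameters to time-course data.  The two-species model and noisy "data" are
# invented; scipy's odeint wraps LSODA from the ODEPACK family, and differential
# evolution stands in here for the Stochastic Ranking Evolution Strategy (SRES).
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 10.0, 40)

def model(y, t, k1, k2):
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]            # A -> B -> degradation

true_params = (0.8, 0.3)
data = odeint(model, [1.0, 0.0], t, args=true_params)
data += np.random.normal(0.0, 0.01, data.shape)  # synthetic measurements

def cost(params):
    sim = odeint(model, [1.0, 0.0], t, args=tuple(params))
    return np.sum((sim - data) ** 2)             # sum of squared residuals

result = differential_evolution(cost, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=1)
print("estimated (k1, k2):", result.x)           # should be close to (0.8, 0.3)
```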
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, and thus the genetic parameters, by using replicated controls. We derived formulas to estimate the sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements are included not as the optional elements just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations, as required by the estimation algorithms.
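For readers unfamiliar with the filtering machinery, the sketch below shows a single, generic extended Kalman filter predict/update cycle with a parameter appended to the state vector, the augmented-state idea described above. The toy dynamics, Jacobians and noise levels are placeholders and bear no relation to the Shuttle model; the Bryson-Frazier smoother is not shown.

```python
# Illustrative sketch only: one predict/update cycle of a generic extended
# Kalman filter with a parameter appended to the state vector.  The toy
# dynamics, Jacobians and noise levels are placeholders, not the Shuttle model.
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle for state x, covariance P, measurement z."""
    # Predict: propagate the state and covariance through the nonlinear dynamics.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement through the nonlinear measurement model.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy augmented state: x[0] is a dynamic state, x[1] a constant parameter
# (a damping coefficient) estimated jointly; the parameter has no dynamics.
dt = 0.1
f = lambda x: np.array([x[0] - dt * x[1] * x[0], x[1]])
F_jac = lambda x: np.array([[1.0 - dt * x[1], -dt * x[0]], [0.0, 1.0]])
h = lambda x: np.array([x[0]])                   # only the dynamic state is measured
H_jac = lambda x: np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-6])
R = np.array([[1e-2]])

x, P = np.array([1.0, 0.2]), np.eye(2)
z = np.array([0.97])                             # a single made-up measurement
x, P = ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R)
print("updated state and parameter:", x)
```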
Keser, Ilke; Kirdi, Nuray; Meric, Aydin; Kurne, Asli Tuncer; Karabudak, Rana
2013-01-01
This study compared trunk exercises based on the Bobath concept with routine neurorehabilitation approaches in multiple sclerosis (MS). Bobath and routine neurorehabilitation exercise groups were evaluated. MS cases were divided into two groups, and both groups joined a 3 d/wk rehabilitation program for 8 wk. The experimental group performed trunk exercises based on the Bobath concept, and the control group performed routine neurorehabilitation exercises. Additionally, both groups performed balance and coordination exercises. All patients were evaluated with the Trunk Impairment Scale (TIS), Berg Balance Scale (BBS), International Cooperative Ataxia Rating Scale (ICARS), and Multiple Sclerosis Functional Composite (MSFC) before and after the physiotherapy program. In group analysis, TIS, BBS, ICARS, and MSFC scores and strength of the abdominal muscles were significantly different after treatment in both groups (p < 0.05). When the groups were compared, no significant differences were found in any parameters (p > 0.05). Although trunk exercises based on the Bobath concept are rarely applied in MS rehabilitation, the results of this study show that they are as effective as routine neurorehabilitation exercises. Therefore, trunk exercises based on the Bobath concept can be beneficial in MS rehabilitation programs.