Gaussian Process Model for Antarctic Surface Mass Balance and Ice Core Site Selection
NASA Astrophysics Data System (ADS)
White, P. A.; Reese, S.; Christensen, W. F.; Rupper, S.
2017-12-01
Surface mass balance (SMB) is an important factor in the estimation of sea level change, and data are collected to estimate models for prediction of SMB on the Antarctic ice sheet. Using Favier et al.'s (2013) quality-controlled aggregate data set of SMB field measurements, a fully Bayesian spatial model is posed to estimate Antarctic SMB and propose new field measurement locations. Utilizing Nearest-Neighbor Gaussian process (NNGP) models, SMB is estimated over the Antarctic ice sheet. An Antarctic SMB map is rendered using this model and is compared with previous estimates. A prediction uncertainty map is created to identify regions of high SMB uncertainty. The model estimates net SMB to be 2173 Gton yr^-1 with 95% credible interval (2021, 2331) Gton yr^-1. On average, these results suggest lower Antarctic SMB and higher uncertainty than previously purported [Vaughan et al. (1999); Van de Berg et al. (2006); Arthern, Winebrenner and Vaughan (2006); Bromwich et al. (2004); Lenaerts et al. (2012)], even though this model utilizes significantly more observations than previous models. Using the Gaussian process' uncertainty and model parameters, we propose 15 new measurement locations for field study utilizing a maximin space-filling, error-minimizing design; these potential measurements are identified to minimize future estimation uncertainty. Using currently accepted Antarctic mass balance estimates and our SMB estimate, we estimate net mass loss [Shepherd et al. (2012); Jacob et al. (2012)]. Furthermore, we discuss modeling details for both space-time data and combining field measurement data with output from mathematical models using the NNGP framework.
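The site-selection step lends itself to a small illustration. The sketch below (not the authors' code) implements a greedy maximin space-filling selection over hypothetical candidate coordinates; the paper's design additionally minimizes GP prediction error, which is crudely mimicked here by an optional `uncertainty` weight:

```python
# Greedy maximin space-filling design: repeatedly pick the candidate whose
# nearest distance to the already-chosen design is largest. All coordinates
# and the uncertainty weights below are synthetic stand-ins.
import numpy as np

def greedy_maximin(candidates, existing, n_new, uncertainty=None):
    """candidates: (m, 2) candidate coordinates; existing: (k, 2) current
    measurement locations; uncertainty: optional (m,) predictive-SD weights."""
    design = existing.copy()
    chosen = []
    weights = np.ones(len(candidates)) if uncertainty is None else uncertainty
    for _ in range(n_new):
        # distance from every candidate to its nearest design point
        d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=2)
        score = d.min(axis=1) * weights
        best = int(np.argmax(score))
        chosen.append(best)
        design = np.vstack([design, candidates[best]])
    return chosen

rng = np.random.default_rng(0)
cand = rng.uniform(0, 1, size=(500, 2))     # candidate grid over the domain
obs = rng.uniform(0, 1, size=(40, 2))       # existing field measurements
print(greedy_maximin(cand, obs, n_new=15))  # 15 proposed sites, as in the paper
```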
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
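A minimal illustration of the estimation strategy described above follows. It substitutes a single-phase exponential decay of log viral load for the paper's multi-phase HIV ODE model, obtains an initial estimate by nonlinear least squares, and then runs a basic Metropolis-Hastings sampler; all parameter values and the noise level are assumptions:

```python
# Toy Metropolis-Hastings sketch: initialize at a least-squares fit, then
# sample the posterior of a single decay model V(t) = V0 * exp(-d t).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.linspace(0, 28, 15)                       # days after therapy start
true_logV0, true_d = np.log(1e5), 0.35
y = true_logV0 - true_d * t + rng.normal(0, 0.2, t.size)  # log viral load

def loglik(theta):
    logV0, d = theta
    resid = y - (logV0 - d * t)
    return -0.5 * np.sum(resid**2) / 0.2**2      # noise SD assumed known

# initial estimate via nonlinear least squares, as in the paper's workflow
(p0, p1), _ = curve_fit(lambda t, a, b: a - b * t, t, y)
theta = np.array([p0, p1])
samples, step = [], np.array([0.05, 0.01])
ll = loglik(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, step)
    ll_prop = loglik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:     # flat priors assumed
        theta, ll = prop, ll_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])                  # discard burn-in
print("posterior mean decay rate:", post[:, 1].mean())
```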
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.
Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N
2017-04-01
There is an urgent need for improved estimations of the burden of tuberculosis (TB). To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. Study model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with necessary data, can offer approaches to burden estimation that complement those currently being used.
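A back-of-the-envelope sketch of the accounting the model formalizes is shown below. The steady-state relations and all parameter values are hypothetical stand-ins, not the paper's transmission model:

```python
# Illustrative steady-state relations linking annual risk of infection (ARTI),
# prevalence, and incidence; values are hypothetical.
def infections_per_case_per_year(arti, prevalence_per_100k):
    """Annual infections attributable to one smear-positive case:
    new infections per capita divided by prevalent cases per capita."""
    return arti / (prevalence_per_100k / 1e5)

def incidence_per_100k(prevalence_per_100k, mean_duration_years):
    """Steady-state relation: incidence ~ prevalence / disease duration."""
    return prevalence_per_100k / mean_duration_years

print(infections_per_case_per_year(arti=0.01, prevalence_per_100k=250))
print(incidence_per_100k(prevalence_per_100k=250, mean_duration_years=2.5))
```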
NASA Astrophysics Data System (ADS)
Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng
2017-09-01
The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The MEP model predicted surface fluxes automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradient, wind speed and surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation, temperature data from National Aeronautics and Space Administration-Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.
SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION
An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...
Revisiting the Table 2 fallacy: A motivating example examining preeclampsia and preterm birth.
Bandoli, Gretchen; Palmsten, Kristin; Chambers, Christina D; Jelliffe-Pawlowski, Laura L; Baer, Rebecca J; Thompson, Caroline A
2018-05-21
A "Table Fallacy," as coined by Westreich and Greenland, reports multiple adjusted effect estimates from a single model. This practice, which remains common in published literature, can be problematic when different types of effect estimates are presented together in a single table. The purpose of this paper is to quantitatively illustrate this potential for misinterpretation with an example estimating the effects of preeclampsia on preterm birth. We analysed a retrospective population-based cohort of 2 963 888 singleton births in California between 2007 and 2012. We performed a modified Poisson regression to calculate the total effect of preeclampsia on the risk of PTB, adjusting for previous preterm birth. pregnancy alcohol abuse, maternal education, and maternal socio-demographic factors (Model 1). In subsequent models, we report the total effects of previous preterm birth, alcohol abuse, and education on the risk of PTB, comparing and contrasting the controlled direct effects, total effects, and confounded effect estimates, resulting from Model 1. The effect estimate for previous preterm birth (a controlled direct effect in Model 1) increased 10% when estimated as a total effect. The risk ratio for alcohol abuse, biased due to an uncontrolled confounder in Model 1, was reduced by 23% when adjusted for drug abuse. The risk ratio for maternal education, solely a predictor of the outcome, was essentially unchanged. Reporting multiple effect estimates from a single model may lead to misinterpretation and lack of reproducibility. This example highlights the need for careful consideration of the types of effects estimated in statistical models. © 2018 John Wiley & Sons Ltd.
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, which probably resulted because we incorporated parameter uncertainty in estimated source contributions that has been ignored in the previous studies into the estimation of health effects parameters.
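A deterministic, non-Bayesian stand-in for the receptor-modeling core is sketched below using non-negative matrix factorization: ambient species concentrations are factored into source contributions and source profiles. Unlike the paper's approach, this fixes the number of sources and ignores model and parameter uncertainty; the data are simulated:

```python
# Simplified receptor-model core: X (days x species) ~ G @ F with
# non-negative daily contributions G and source profiles F.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
true_profiles = rng.dirichlet(np.ones(10), size=3)   # 3 sources, 10 species
true_contrib = rng.gamma(2.0, 1.0, size=(365, 3))    # daily contributions
X = np.clip(true_contrib @ true_profiles + rng.normal(0, 0.01, (365, 10)),
            0, None)                                  # NMF needs X >= 0

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # estimated daily source contributions
F = model.components_        # estimated source profiles
print(G.shape, F.shape)      # (365, 3) (3, 10)
```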
NASA Technical Reports Server (NTRS)
Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo
2004-01-01
This paper describes efforts conducted to improve dynamic temperature estimations of a turbine tip clearance system to facilitate design of a generalized tip clearance controller. This work builds upon previously conducted research and focuses primarily on improving dynamic temperature estimations of the primary components affecting tip clearance (i.e., the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to facilitate controller design. Some assumptions made to facilitate the previous results were not valid, and improvements are thus presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
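The basic mixing-model computation being discussed can be written as a small constrained least-squares problem: find non-negative source fractions summing to one whose mixture of source signatures best matches the observed mixture. The signature values below are hypothetical:

```python
# Minimal stable isotope mixing model: estimate source fractions f with
# f >= 0 and sum(f) = 1 by constrained least squares.
import numpy as np
from scipy.optimize import minimize

sources = np.array([[-24.0, 6.0],   # source signatures: (d13C, d15N)
                    [-18.0, 10.0],
                    [-12.0, 4.0]])
mixture = np.array([-18.5, 7.0])    # observed mixture signature

def loss(f):
    return np.sum((f @ sources - mixture) ** 2)

n = len(sources)
res = minimize(loss, x0=np.full(n, 1.0 / n),
               bounds=[(0, 1)] * n,
               constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0}])
print(res.x)  # estimated contributions; with many sources and few isotopes
              # these are weakly identified, as the abstract notes
```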
Freight Modal Split: Estimation Results and Model Implementation
DOT National Transportation Integrated Search
2001-07-31
This report, as a follow-up to the previous report, presents the results of the model estimation task. The final commodity-specific modal split models are presented, followed by a discussion of their implications. These models are embedded within a l...
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and the performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm, described here in the context of previous work, estimates the parameters of an assumed propagation model used to describe the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella
2017-01-01
Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....
Cramer, C.H.
2006-01-01
The Mississippi embayment, located in the central United States, and its thick deposits of sediments (over 1 km in places) have a large effect on earthquake ground motions. Several previous studies have addressed how these thick sediments might modify probabilistic seismic-hazard maps. The high seismic hazard associated with the New Madrid seismic zone makes it particularly important to quantify the uncertainty in modeling site amplification to better represent earthquake hazard in seismic-hazard maps. The methodology of the Memphis urban seismic-hazard-mapping project (Cramer et al., 2004) is combined with the reference profile approach of Toro and Silva (2001) to better estimate seismic hazard in the Mississippi embayment. Improvements over previous approaches include using the 2002 national seismic-hazard model, fully probabilistic hazard calculations, calibration of site amplification with improved nonlinear soil-response estimates, and estimates of uncertainty. Comparisons are made with the results of several previous studies, and estimates of uncertainty inherent in site-amplification modeling for the upper Mississippi embayment are developed. I present new seismic-hazard maps for the upper Mississippi embayment with the effects of site geology incorporating these uncertainties.
Karakas, Filiz; Imamoglu, Ipek
2017-04-01
This study aims to estimate anaerobic debromination rate constants (k_m) of PBDE pathways using previously reported laboratory soil data. k_m values of pathways are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The range of estimated k_m values is between 0.0003 and 0.0241 d^-1. The median and maximum of k_m values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated k_m values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios, such as monitored natural attenuation or bioremediation with bioaugmentation, can be handled in a more quantitative manner with the help of the k_m values estimated in this study.
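For a single pathway, a first-order rate constant of this kind can be recovered from concentration time series by a log-linear fit, as sketched below on synthetic data; the study itself estimates pathway-specific k_m values jointly across congeners:

```python
# First-order debromination: C(t) = C0 * exp(-k_m t), so a straight-line fit
# of ln C against t gives -k_m as the slope. Data below are synthetic.
import numpy as np

t = np.array([0.0, 30, 60, 120, 180, 240])     # days
c0, k_true = 100.0, 0.01
rng = np.random.default_rng(4)
c = c0 * np.exp(-k_true * t) * rng.lognormal(0, 0.05, t.size)

slope, intercept = np.polyfit(t, np.log(c), 1)  # ln C = ln C0 - k_m t
print("estimated k_m: %.4f d^-1" % -slope)      # ~0.01, inside the paper's
                                                # 0.0003-0.0241 d^-1 range
```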
Model-Based Engine Control Architecture with an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
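A generic discrete-time EKF step (not the C-MAPSS40k implementation) looks like the following; the toy plant, Jacobians, and noise covariances are all assumptions:

```python
# One predict/update cycle of an extended Kalman filter: propagate the
# nonlinear model, then correct with a measurement using local linearization.
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    # predict through the nonlinear state model
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # update with the measurement, linearizing h at the prediction
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy nonlinear plant standing in for the on-board engine model
f = lambda x, u: np.array([x[0] + 0.1 * x[1], 0.98 * x[1] + 0.1 * u])
h = lambda x: np.array([x[0] ** 2 / 10.0])
F_jac = lambda x, u: np.array([[1.0, 0.1], [0.0, 0.98]])
H_jac = lambda x: np.array([[x[0] / 5.0, 0.0]])

x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = ekf_step(x, P, u=1.0, z=np.array([0.02]), f=f, h=h,
                F_jac=F_jac, H_jac=H_jac, Q=Q, R=R)
print(x)
```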
An optimal pole-matching observer design for estimating tyre-road friction force
NASA Astrophysics Data System (ADS)
Faraji, Mohammad; Johari Majd, Vahid; Saghafi, Behrooz; Sojoodi, Mahdi
2010-10-01
In this paper, considering the dynamical model of tyre-road contacts, we design a nonlinear observer for the on-line estimation of tyre-road friction force using the average lumped LuGre model without any simplification. The design extends a previously offered observer to allow a much more realistic estimation by considering the effect of the rolling resistance and a term related to the relative velocity in the observer. Our aim is not to introduce a new friction model, but to present a more accurate nonlinear observer for the assumed model. We derive linear matrix inequality (LMI) conditions to obtain an observer gain with minimum pole mismatch for the desired observer error dynamic system. We prove the convergence of the observer for the non-simplified model. Finally, we compare the performance of the proposed observer with that of the previously mentioned nonlinear observer, which shows significant improvement in the accuracy of estimation.
NASA Technical Reports Server (NTRS)
Smrekar, S. E.; Anderson, F. S.
2005-01-01
We have calculated admittance spectra using the spatio-spectral method [14] for Venus by moving the central location of the spectrum over a 1° grid, creating 360x180 admittance spectra. We invert the observed admittance using top-loading (TL), hot spot (HS), and bottom loading (BL) models, resulting in elastic, crustal, and lithospheric thickness estimates (Te, Zc, and Zl). The result is a global map for interpreting subsurface structure. Estimated values of Te and Zc concur with previous TL local admittance results, but BL estimates indicate larger values than previously suspected.
On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.
1992-01-01
We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
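A compact sketch of this workflow on synthetic dichotomous data follows: fit each model in a two-model space by binomial maximum likelihood, form the AIC-weighted average dose-response curve, invert it for the BMD at 10% extra risk, and bootstrap the BMDL as a lower percentile. The model space, data, and replicate count are all illustrative:

```python
# Model-averaged BMD with a bootstrapped BMDL on synthetic data.
import numpy as np
from scipy.optimize import minimize, brentq

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
n = np.array([50, 50, 50, 50, 50])
k = np.array([2, 4, 8, 15, 30])               # responders (synthetic)

logistic = lambda d, p: 1 / (1 + np.exp(-(p[0] + p[1] * d)))
weibull = lambda d, p: 1 - np.exp(-np.exp(p[0]) * d ** np.exp(p[1]))
models = [(logistic, np.array([-2.0, 1.0])), (weibull, np.array([-1.0, 0.0]))]

def fit(model, p0, k_obs):
    nll = lambda p: -np.sum(
        k_obs * np.log(model(dose, p).clip(1e-9, 1 - 1e-9))
        + (n - k_obs) * np.log((1 - model(dose, p)).clip(1e-9, 1 - 1e-9)))
    res = minimize(nll, p0, method="Nelder-Mead")
    return res.x, 2 * res.fun + 2 * len(p0)   # ML params, AIC

def ma_bmd(k_obs, bmr=0.10):
    fits = [fit(m, p0, k_obs) for m, p0 in models]
    aic = np.array([a for _, a in fits])
    w = np.exp(-0.5 * (aic - aic.min()))
    w /= w.sum()                              # AIC weights
    avg = lambda d: sum(wi * m(d, p)
                        for wi, (m, _), (p, _) in zip(w, models, fits))
    extra = lambda d: (avg(d) - avg(0.0)) / (1 - avg(0.0)) - bmr
    return brentq(extra, 1e-6, dose.max())    # dose giving 10% extra risk

bmd = ma_bmd(k)
rng = np.random.default_rng(5)
boot = []
for _ in range(200):                          # parametric bootstrap
    try:
        boot.append(ma_bmd(rng.binomial(n, k / n)))
    except ValueError:                        # replicate had no root in range
        pass
print("MA-BMD: %.3f  BMDL (5th pct): %.3f" % (bmd, np.quantile(boot, 0.05)))
```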
Effects of linking a soil-water-balance model with a groundwater-flow model
Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.
2013-01-01
A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects to groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
Using open robust design models to estimate temporary emigration from capture-recapture data.
Kendall, W L; Bjorkland, R
2001-12-01
Capture-recapture studies are crucial in many circumstances for estimating demographic parameters for wildlife and fish populations. Pollock's robust design, involving multiple sampling occasions per period of interest, provides several advantages over classical approaches. This includes the ability to estimate the probability of being present and available for detection, which in some situations is equivalent to breeding probability. We present a model for estimating availability for detection that relaxes two assumptions required in previous approaches. The first is that the sampled population is closed to additions and deletions across samples within a period of interest. The second is that each member of the population has the same probability of being available for detection in a given period. We apply our model to estimate survival and breeding probability in a study of hawksbill sea turtles (Eretmochelys imbricata), where previous approaches are not appropriate.
Aylward, Lesa L; Brunet, Robert C; Starr, Thomas B; Carrier, Gaétan; Delzell, Elizabeth; Cheng, Hong; Beall, Colleen
2005-08-01
Recent studies demonstrating a concentration dependence of elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) suggest that previous estimates of exposure for occupationally exposed cohorts may have underestimated actual exposure, resulting in a potential overestimate of the carcinogenic potency of TCDD in humans based on the mortality data for these cohorts. Using a database on U.S. chemical manufacturing workers potentially exposed to TCDD compiled by the National Institute for Occupational Safety and Health (NIOSH), we evaluated the impact of using a concentration- and age-dependent elimination model (CADM) (Aylward et al., 2005) on estimates of serum lipid area under the curve (AUC) for the NIOSH cohort. These data were used previously by Steenland et al. (2001) in combination with a first-order elimination model with an 8.7-year half-life to estimate cumulative serum lipid concentration (equivalent to AUC) for these workers for use in cancer dose-response assessment. Serum lipid TCDD measurements taken in 1988 for a subset of the cohort were combined with the NIOSH job exposure matrix and work histories to estimate dose rates per unit of exposure score. We evaluated the effect of choices in regression model (regression on untransformed vs. ln-transformed data and inclusion of a nonzero regression intercept) as well as the impact of choices of elimination models and parameters on estimated AUCs for the cohort. Central estimates for dose rate parameters derived from the serum-sampled subcohort were applied with the elimination models to time-specific exposure scores for the entire cohort to generate AUC estimates for all cohort members. Use of the CADM resulted in improved model fits to the serum sampling data compared to the first-order models. Dose rates varied by a factor of 50 among different combinations of elimination model, parameter sets, and regression models. Use of a CADM results in increases of up to five-fold in AUC estimates for the more highly exposed members of the cohort compared to estimates obtained using the first-order model with 8.7-year half-life. This degree of variation in the AUC estimates for this cohort would affect substantially the cancer potency estimates derived from the mortality data from this cohort. Such variability and uncertainty in the reconstructed serum lipid AUC estimates for this cohort, depending on elimination model, parameter set, and regression model, have not been described previously and are critical components in evaluating the dose-response data from the occupationally exposed populations.
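The first-order half of the comparison reduces to simple exponential bookkeeping, sketched below with hypothetical numbers: back-extrapolate a 1988 serum measurement to the end of exposure with the 8.7-year half-life, then integrate for AUC. The CADM, being concentration- and age-dependent, cannot be collapsed to a single half-life this way:

```python
# First-order elimination bookkeeping for serum lipid TCDD; all concentration
# and time values are hypothetical.
import numpy as np

half_life_yr = 8.7
k = np.log(2) / half_life_yr

def back_extrapolate(c_measured, years_since_exposure_end):
    """Concentration at end of exposure implied by a later measurement."""
    return c_measured * np.exp(k * years_since_exposure_end)

def auc_first_order(c_start, years):
    # integral of c_start * exp(-k t) from 0 to `years` (ppt-years)
    return c_start * (1 - np.exp(-k * years)) / k

c0 = back_extrapolate(c_measured=200.0, years_since_exposure_end=15.0)  # ppt
print("c at exposure end: %.0f ppt" % c0)
print("AUC over 30 yr: %.0f ppt-yr" % auc_first_order(c0, 30.0))
```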
NASA Astrophysics Data System (ADS)
Chen, Shimon; Bekhor, Shlomo; Yuval; Broday, David M.
2016-10-01
Most air quality models use traffic-related variables as an input. Previous studies estimated nearby vehicular activity through sporadic traffic counts or via traffic assignment models. Both methods have previously produced poor or no data for nights, weekends and holidays. Emerging technologies allow the estimation of traffic through passive monitoring of location-aware devices. Examples of such devices are GPS transceivers installed in vehicles. In this work, we studied traffic volumes that were derived from such data. Additionally, we used these data for estimating ambient nitrogen dioxide concentrations, using a non-linear optimisation model that includes basic dispersion properties. The GPS-derived data show great potential for use as a proxy for pollutant emissions from motor-vehicles.
ELEMENT MASSES IN THE CRAB NEBULA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.
Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni II] λ7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.
Comparison of Past, Present, and Future Volume Estimation Methods for Tennessee
Stanley J. Zarnoch; Alexander Clark; Ray A. Souter
2003-01-01
Forest Inventory and Analysis 1999 survey data for Tennessee were used to compare stem-volume estimates obtained using a previous method, the current method, and newly developed taper models that will be used in the future. Compared to the current method, individual tree volumes were consistently underestimated with the previous method, especially for the hardwoods....
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
Estimating actual evapotranspiration for forested sites: modifications to the Thornthwaite Model
Randall K. Kolka; Ann T. Wolf
1998-01-01
A previously coded version of the Thornthwaite water balance model was used to estimate annual actual evapotranspiration (AET) for 29 forested sites between 1900 and 1993 in the Upper Great Lakes area. Approximately 8 percent of the data sets produced erroneous AET values; errors were detected in months when estimated AET was greater than potential evapotranspiration. Annual...
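The demand side of the model is the standard Thornthwaite monthly PET formula, sketched below (the coded version cited above may differ in details such as the day-length correction); AET then follows from monthly soil-moisture bookkeeping against this demand, with AET <= PET by construction, which is precisely the consistency check that flagged the erroneous runs:

```python
# Standard Thornthwaite monthly potential evapotranspiration (PET), in mm.
import numpy as np

def thornthwaite_pet(temps_c, daylight_hours, days_in_month):
    """temps_c: 12 mean monthly temperatures (deg C)."""
    t = np.maximum(np.asarray(temps_c, dtype=float), 0.0)
    I = np.sum((t / 5.0) ** 1.514)                    # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    pet = 16.0 * (10.0 * t / I) ** a                  # 12-h day, 30-day basis
    return pet * (np.asarray(daylight_hours) / 12.0) \
               * (np.asarray(days_in_month) / 30.0)   # day-length correction

# hypothetical Upper Great Lakes climate normals
temps = [-8, -6, 0, 7, 14, 19, 22, 20, 15, 9, 1, -5]
hours = [9, 10, 12, 13, 15, 15.5, 15, 14, 12, 11, 9.5, 9]
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
print(np.round(thornthwaite_pet(temps, hours, days), 1))
```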
Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description
NASA Technical Reports Server (NTRS)
Hemm, Robert; Shapiro, Gerald
1998-01-01
This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.
Jafari, Zohreh; Edrisi, Mehdi; Marateb, Hamid Reza
2014-01-01
The purpose of this study was to estimate the torque from high-density surface electromyography signals of biceps brachii, brachioradialis, and the medial and lateral heads of triceps brachii muscles during moderate-to-high isometric elbow flexion-extension. The elbow torque was estimated in two steps: first, surface electromyography (EMG) amplitudes were estimated using principal component analysis, and then a fuzzy model was proposed to describe the relationship between the EMG amplitudes and the measured torque signal. A neuro-fuzzy method, with which the optimum number of rules could be estimated, was used to identify a model of suitable complexity. Unlike previous linear and nonlinear black-box system identification models, the proposed neuro-fuzzy model offers clinical interpretability. It also reduced the estimation error compared with that of the most recent and accurate nonlinear dynamic model introduced in the literature. The optimum number of rules for all trials was 4 ± 1, which might be related to motor control strategies, and the % variance accounted for criterion was 96.40 ± 3.38, a considerable improvement over previous methods. The proposed method is thus a promising new tool for EMG-torque modeling in clinical applications. PMID:25426427
Army College Fund Cost-Effectiveness Study
1990-11-01
Section A.2 presents a theory of enlistment supply to provide a basis for specifying the regression model. The model is specified in Section A.3, which... Supplementary materials are included in the final four sections. Section A.6 provides annual trends in the regression model variables. Estimates of the model... millions. A.5. ESTIMATION OF A YOUTH EARNINGS FORECASTING MODEL. Civilian pay is an important explanatory variable in the regression model. Previous...
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
A New Estimate of North American Mountain Snow Accumulation From Regional Climate Model Simulations
NASA Astrophysics Data System (ADS)
Wrzesien, Melissa L.; Durand, Michael T.; Pavelsky, Tamlin M.; Kapnick, Sarah B.; Zhang, Yu; Guo, Junyi; Shum, C. K.
2018-02-01
Despite the importance of mountain snowpack to understanding the water and energy cycles in North America's montane regions, no reliable mountain snow climatology exists for the entire continent. We present a new estimate of mountain snow water equivalent (SWE) for North America from regional climate model simulations. Climatological peak SWE in North America mountains is 1,006 km3, 2.94 times larger than previous estimates from reanalyses. By combining this mountain SWE value with the best available global product in nonmountain areas, we estimate peak North America SWE of 1,684 km3, 55% greater than previous estimates. In our simulations, the date of maximum SWE varies widely by mountain range, from early March to mid-April. Though mountains comprise 24% of the continent's land area, we estimate that they contain 60% of North American SWE. This new estimate is a suitable benchmark for continental- and global-scale water and energy budget studies.
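The quoted figures are mutually consistent, as a quick arithmetic check shows:

```python
# Quick arithmetic check on the figures quoted in the abstract above.
mountain_swe = 1006.0                 # km^3, simulated peak mountain SWE
total_swe = 1684.0                    # km^3, after merging nonmountain SWE
print(round(mountain_swe / 2.94))     # implied reanalysis mountain value, ~342 km^3
print(round(total_swe / 1.55))        # implied previous continental total, ~1086 km^3
print(round(100 * mountain_swe / total_swe))  # mountains hold ~60% of NA SWE
```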
Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain
NASA Astrophysics Data System (ADS)
Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa
2017-10-01
Despite its important role in human health and numerous biological processes, the diffuse component of the erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting to the UVER range the various independent variables involved in these models, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows values of r2 equal to 0.91 and relative root-mean-square error (rRMSE) equal to 6.1%. The performance achieved by this entirely empirical model is better than that obtained by previous semi-empirical approaches, and it therefore needs no additional information from other physically based models. This study extends previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, C.; Casentini, J.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. 
M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. 
M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vano-Vinuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Brügmann, B.; Campanelli, M.; Chu, T.; Clark, M.; Haas, R.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Pan, Y.; Röver, C.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration
2016-10-01
This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al. Phys. Rev. Lett. 116, 061102 (2016)]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al. Phys. Rev. Lett. 116, 241102 (2016)], and we quote updated component masses of 35^{+5}_{-3} M⊙ and 30^{+3}_{-4} M⊙ (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with a primary spin estimate <0.65 and a secondary spin estimate <0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.
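The symmetric credible intervals quoted above are straightforward to reproduce from posterior samples. A minimal sketch (Python with NumPy), using hypothetical samples in place of the actual LIGO posteriors:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical stand-in for posterior samples of the primary mass (in M_sun);
    # the actual GW150914 posteriors come from the collaboration's analyses.
    m1_samples = rng.normal(35.0, 2.5, size=20000)

    median = np.percentile(m1_samples, 50.0)
    lo, hi = np.percentile(m1_samples, [5.0, 95.0])  # 90% symmetric credible interval
    print(f"m1 = {median:.1f} -{median - lo:.1f} +{hi - median:.1f} M_sun")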
Preliminary Multivariable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multivariable space telescope cost model. The validity of previously published models is tested, cost estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.
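As an illustration of the kind of multivariable cost-estimating relationship described here, the sketch below fits a power-law model cost = a * D^b * M^c by ordinary least squares in log space. The data, variable choices, and coefficients are hypothetical, not the paper's:

    import numpy as np

    # Hypothetical mission data: aperture diameter D (m), mass M (kg), cost ($M)
    D = np.array([0.85, 2.4, 1.1, 3.5, 0.6])
    M = np.array([750, 11000, 2100, 6500, 300])
    cost = np.array([150, 4400, 600, 8800, 90])

    # ln(cost) = ln(a) + b*ln(D) + c*ln(M): linear in the log-transformed variables
    X = np.column_stack([np.ones_like(D), np.log(D), np.log(M)])
    coef, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    ln_a, b, c = coef
    print(f"cost ~ {np.exp(ln_a):.2f} * D^{b:.2f} * M^{c:.2f}")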
Current and estimated future atmospheric nitrogen loads to the Chesapeake Bay Watershed
Nitrogen deposition for CMAQ scenarios in 2011, 2017, 2023, 2028, and a 2048-2050 RCP 4.5 climate scenario will be presented for the watershed and tidal waters. Comparisons will be made with the 2017 Airshed Model to the previous 2010 Airshed Model estimates. In addition, atmosph...
Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)
2001-01-01
In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
Estimation of sojourn time in chronic disease screening without data on interval cases.
Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W
2000-03-01
Estimation of the sojourn time in the preclinical detectable period in disease screening, or of transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases might be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on the basis of data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
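For the three-state progressive model (no disease, preclinical, clinical), the mean sojourn time is the reciprocal of the preclinical-to-clinical transition rate, and screening data constrain the rates through the matrix exponential of the intensity matrix. A minimal sketch (Python with SciPy), with hypothetical rates chosen to match the MST of 1.90 years reported above:

    import numpy as np
    from scipy.linalg import expm

    lam1 = 0.004        # hypothetical rate, no disease -> preclinical (per year)
    lam2 = 1.0 / 1.90   # preclinical -> clinical rate implied by MST = 1.90 years

    # Intensity matrix for states (no disease, preclinical, clinical)
    Q = np.array([[-lam1,  lam1,   0.0],
                  [  0.0, -lam2,  lam2],
                  [  0.0,   0.0,   0.0]])

    t = 2.0                      # inter-screening interval (years)
    P = expm(Q * t)              # transition probabilities over t years
    print("P(no disease -> preclinical in 2 y) =", round(P[0, 1], 4))
    print("mean sojourn time =", round(1.0 / lam2, 2), "years")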
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
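A simple numerical reading of the penetration rate: if a prioritized search visits the N hypothesis bins in ranked order and p_k is the probability that the true match lies in the bin visited k-th, the expected fraction of the database searched is the sum over k of p_k * k / N. A minimal sketch (Python), with a hypothetical ranking distribution:

    import numpy as np

    N = 100
    ranks = np.arange(1, N + 1)
    # Hypothetical matching-probability ranking: geometric decay means the
    # prioritized search usually finds the true bin early.
    p = 0.9 ** (ranks - 1)
    p /= p.sum()

    penetration_rate = np.sum(p * ranks / N)  # expected fraction of bins visited
    print(f"expected penetration rate: {penetration_rate:.3f}")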
Estimation of daily minimum land surface air temperature using MODIS data in southern Iran
NASA Astrophysics Data System (ADS)
Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza
2017-11-01
Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many processes in these fields are affected by LSAT at about 5 cm above the ground surface (LSAT5cm). Most previous studies tried to find statistical models to estimate LSAT at 2 m height (LSAT2m), which is considered the standardized height, and few studies have addressed models for estimating LSAT5cm. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data, by providing rather extensive spatial coverage, can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model, based on data from the previous day, to accurately estimate spatial daily minimum LSAT5cm, which is very important for agricultural frost, in Fars province in southern Iran. Land surface temperature (LST) data were obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua and Terra satellites at daytime and nighttime, together with normalized difference vegetation index (NDVI) data. These data, along with geometric temperature and elevation information, were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that utilization of MODIS Aqua nighttime data from the previous day provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference (RMSD) = 3.07 °C, adjusted R² = 87%). The model underestimated (overestimated) high (low) minimum LSAT5cm. The accuracy of estimation in winter was found to be lower than in the other seasons (RMSD = 3.55 °C), and errors in summer and winter were larger than in the remaining seasons.
Medalie, Laura
2016-12-20
The U.S. Geological Survey, in cooperation with the New England Interstate Water Pollution Control Commission and the Vermont Department of Environmental Conservation, estimated daily and 9-month concentrations and fluxes of total and dissolved phosphorus, total nitrogen, chloride, and total suspended solids from 1990 (or first available date) through 2014 for 18 tributaries of Lake Champlain. Estimates of concentration and flux, provided separately in Medalie (2016), were made by using the Weighted Regressions on Time, Discharge, and Season (WRTDS) regression model and update previously published WRTDS model results with recent data. Assessment of progress towards meeting phosphorus-reduction goals outlined in the Lake Champlain management plan relies on annual estimates of phosphorus flux. The percent change in annual concentration and flux is provided for two time periods. The R package EGRETci was used to estimate the uncertainty of the trend estimate. Differences in model specification and function between this study and previous studies that used WRTDS to estimate concentration and flux using data from Lake Champlain tributaries are described. Winter data were too sparse and nonrepresentative to use for estimates of concentration and flux but were sufficient for estimating the percentage of total annual flux over the period of record. Median winter-to-annual fractions ranged between 21 percent for total suspended solids and 27 percent for dissolved phosphorus. The winter contribution was largest for all constituents from the Mettawee River and smallest from the Ausable River. For the full record (1991 through 2014 for total and dissolved phosphorus and chloride and 1993 through 2014 for nitrogen and total suspended solids), 6 tributaries had decreasing trends in concentrations of total phosphorus, and 12 had increasing trends; concentrations of dissolved phosphorus decreased in 6 and increased in 8 tributaries; fluxes of total phosphorus decreased in 5 and increased in 10 tributaries; and fluxes of dissolved phosphorus decreased in 4 and increased in 10 tributaries (where the number of increasing and decreasing trends does not add up to 18, the remainder of tributaries had no trends). Concentrations and fluxes of nitrogen decreased in 10 and increased in 4 tributaries and of chloride decreased in 2 and increased in 15 tributaries. Concentrations of total suspended solids decreased in 4 and increased in 8 tributaries, and fluxes of total suspended solids decreased in 3 and increased in 11 tributaries. Although time intervals for the percent changes from this report are not completely synchronous with those from previous studies, the numbers of and specific tributaries with overall negative percent changes in concentration and flux are similar. Concentration estimates of total phosphorus in the Winooski River were used to trace whether changes in trends between a previous study and the current study were due generally to differences in model specifications or differences from 4 years of additional data. 
The Winooski River analysis illustrates several things: with all model specifications kept equal, concentration estimates increased from 2010 to 2014; a smoothing algorithm used in the current study was not available previously; narrowing the model half-window widths increased year-to-year variations; and the change from an annual to a 9-month basis by omitting winter estimates changed a few individual points but not the overall shape of the flow-normalized curve. Similar tests for other tributaries showed that the primary effect of the differences in model specifications between the previous and current studies was perhaps to increase scatter over time, but that changes in trends generally were the result of 4 years of additional data rather than artifacts of model differences.
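The WRTDS regression surface underlying these estimates has the form ln(c) = b0 + b1*t + b2*ln(Q) + b3*sin(2*pi*t) + b4*cos(2*pi*t), fit at each estimation point with weights that decay with distance in time, discharge, and season. A minimal sketch (Python) of one such locally weighted fit, with hypothetical data and tricube weights on the time dimension only; the full method, as implemented in the R EGRET/EGRETci packages, weights all three dimensions:

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(1990, 2014, 400))   # decimal years (hypothetical)
    lnQ = rng.normal(3.0, 0.8, 400)             # log discharge (hypothetical)
    lnc = (-20.0 + 0.01 * t + 0.4 * lnQ
           + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, 400))

    h = 7.0                                     # half-window width (years)
    u = np.abs(t - 2005.0) / h
    w = np.where(u < 1, (1 - u**3) ** 3, 0.0)   # tricube kernel around t0 = 2005

    X = np.column_stack([np.ones_like(t), t, lnQ,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    sw = np.sqrt(w)                             # weighted least squares via sqrt-weights
    beta, *_ = np.linalg.lstsq(X * sw[:, None], lnc * sw, rcond=None)
    print("local coefficients at t0 = 2005:", np.round(beta, 3))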
Modelling HIV/AIDS epidemics in sub-Saharan Africa using seroprevalence data from antenatal clinics.
Salomon, J. A.; Murray, C. J.
2001-01-01
OBJECTIVE: To improve the methodological basis for modelling the HIV/AIDS epidemics in adults in sub-Saharan Africa, with examples from Botswana, Central African Republic, Ethiopia, and Zimbabwe. Understanding the magnitude and trajectory of the HIV/AIDS epidemic is essential for planning and evaluating control strategies. METHODS: Previous mathematical models were developed to estimate epidemic trends based on sentinel surveillance data from pregnant women. In this project, we have extended these models in order to take full advantage of the available data. We developed a maximum likelihood approach for the estimation of model parameters and used numerical simulation methods to compute uncertainty intervals around the estimates. FINDINGS: In the four countries analysed, there were an estimated half a million new adult HIV infections in 1999 (range: 260 to 960 thousand), 4.7 million prevalent infections (range: 3.0 to 6.6 million), and 370 thousand adult deaths from AIDS (range: 266 to 492 thousand). CONCLUSION: While this project addresses some of the limitations of previous modelling efforts, an important research agenda remains, including the need to clarify the relationship between sentinel data from pregnant women and the epidemiology of HIV and AIDS in the general population. PMID:11477962
Capture-recapture analysis for estimating manatee reproductive rates
Kendall, W.L.; Langtimm, C.A.; Beck, C.A.; Runge, M.C.
2004-01-01
Modeling the life history of the endangered Florida manatee (Trichechus manatus latirostris) is an important step toward understanding its population dynamics and predicting its response to management actions. We developed a multi-state mark-resighting model for data collected under Pollock's robust design. This model estimates breeding probability conditional on a female's breeding state in the previous year; assumes sighting probability depends on breeding state; and corrects for misclassification of a cow with first-year calf, by estimating conditional sighting probability for the calf. The model is also appropriate for estimating survival and unconditional breeding probabilities when the study area is closed to temporary emigration across years. We applied this model to photo-identification data for the Northwest and Atlantic Coast populations of manatees, for years 1982-2000. With rare exceptions, manatees do not reproduce in two consecutive years. For those without a first-year calf in the previous year, the best-fitting model included constant probabilities of producing a calf for the Northwest (0.43, SE = 0.057) and Atlantic (0.38, SE = 0.045) populations. The approach we present to adjust for misclassification of breeding state could be applicable to a large number of marine mammal populations.
NASA Astrophysics Data System (ADS)
Kotsuki, Shunji; Terasaki, Koji; Yashiro, Hisashi; Tomita, Hirofumi; Satoh, Masaki; Miyoshi, Takemasa
2017-04-01
This study aims to improve precipitation forecasts from numerical weather prediction (NWP) models through effective use of satellite-derived precipitation data. Kotsuki et al. (2016, JGR-A) successfully improved the precipitation forecasts by assimilating the Japan Aerospace eXploration Agency (JAXA)'s Global Satellite Mapping of Precipitation (GSMaP) data into the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) at 112-km horizontal resolution. Kotsuki et al. mitigated the non-Gaussianity of the precipitation variables by the Gaussian transform method for observed and forecasted precipitation using the previous 30-day precipitation data. This study extends the previous study by Kotsuki et al. and explores an online estimation of model parameters using ensemble data assimilation. We choose two globally-uniform parameters, one is the cloud-to-rain auto-conversion parameter of the Berry's scheme for large scale condensation and the other is the relative humidity threshold of the Arakawa-Schubert cumulus parameterization scheme. We perform the online-estimation of the two model parameters with an ensemble transform Kalman filter by assimilating the GSMaP precipitation data. The estimated parameters improve the analyzed and forecasted mixing ratio in the lower troposphere. Therefore, the parameter estimation would be a useful technique to improve the NWP models and their forecasts. This presentation will include the most recent progress up to the time of the symposium.
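The Gaussian transform referred to here maps precipitation amounts to a standard normal variable through the empirical CDF built from a trailing window of data, so that data assimilation operates on an approximately Gaussian variable. A minimal sketch (Python with SciPy) of rank-based quantile mapping, ignoring the special treatment of zero-precipitation values that a full implementation needs:

    import numpy as np
    from scipy.stats import norm, rankdata

    def gaussian_transform(x):
        """Map values to N(0,1) via the empirical CDF (rank-based quantile mapping)."""
        n = len(x)
        cdf = rankdata(x) / (n + 1)   # empirical CDF in (0, 1), ties averaged
        return norm.ppf(cdf)

    rng = np.random.default_rng(2)
    precip = rng.gamma(shape=0.5, scale=4.0, size=720)  # hypothetical 30-day record
    z = gaussian_transform(precip)
    print("transformed mean/std:", round(z.mean(), 3), round(z.std(), 3))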
Canopy Level Emissions of 2-methyl-3-buten-2-ol ...
Emissions of biogenic volatile organic compounds (BVOC) observed during 2007 from a Pinus taeda experimental plantation in Central North Carolina are compared with model estimates from MEGAN2.1. Relaxed Eddy Accumulation (REA) estimates of 2-methyl-3-buten-2-ol (MBO) fluxes are a factor of 3-4 higher than model estimates. MEGAN2.1 monoterpene emission estimates were approximately a factor of two higher than REA flux measurements. MEGAN2.1 β-caryophyllene emission estimates were within 60% of growing season REA flux estimates, but were several times higher than REA fluxes during cooler, dormant season periods. The sum of other sesquiterpene emissions estimated by MEGAN2.1 was several times higher than REA estimates throughout the year. Model components are examined to understand these discrepancies. Summertime LAI (and therefore foliar biomass) is a factor of two higher than assumed in the MEGAN default for Pinus taeda. Increasing the canopy mean MBO emission factor from 0.35 to 1.0 mg m-2 hr-1 also reduces the differences between MEGAN2.1 estimates and REA fluxes. This increase is within current emission factor uncertainties. The algorithm within MEGAN that adjusts isoprene emission estimates as a function of the previous 24 hours' temperatures and light also seems to improve the seasonal correlation of MEGAN MBO estimates with REA fluxes. Including the effects of the previous 240 hours, however, seems to degrade the temporal model correlation with fluxes. This paper describes an emission inventory and mod
The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method
2015-01-01
Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838
Performance and Weight Estimates for an Advanced Open Rotor Engine
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Tong, Michael T.
2012-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions in the environmental impact of future generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.
Kery, M.; Gregg, K.B.
2003-01-01
1. Most plant demographic studies follow marked individuals in permanent plots. Plots tend to be small, so detectability is assumed to be one for every individual. However, detectability could be affected by factors such as plant traits, time, space, observer, previous detection, biotic interactions, and especially by life-state. 2. We used a double-observer survey and closed population capture-recapture modelling to estimate state-specific detectability of the orchid Cleistes bifaria in a long-term study plot of 41.2 m2. Based on AICc model selection, detectability was different for each life-state and for tagged vs. previously untagged plants. There were no differences in detectability between the two observers. 3. Detectability estimates (SE) for one-leaf vegetative, two-leaf vegetative, and flowering/fruiting states correlated with mean size of these states and were 0.76 (0.05), 0.92 (0.06), and 1 (0.00), respectively, for previously tagged plants, and 0.84 (0.08), 0.75 (0.22), and 0 (0.00), respectively, for previously untagged plants. (We had insufficient data to obtain a satisfactory estimate of previously untagged flowering plants). 4. Our estimates are for a medium-sized plant in a small and intensively surveyed plot. It is possible that detectability is even lower for larger plots and smaller plants or smaller life-states (e.g. seedlings) and that detectabilities < 1 are widespread in plant demographic studies. 5. State-dependent detectabilities are especially worrying since they will lead to a size- or state-biased sample from the study plot. Failure to incorporate detectability into demographic estimation methods introduces a bias into most estimates of population parameters such as fecundity, recruitment, mortality, and transition rates between life-states. We illustrate this by a simple example using a matrix model, where a hypothetical population was stable but, due to imperfect detection, wrongly projected to be declining at a rate of 8% per year. 6. Almost all plant demographic studies are based on models for discrete states. State and size are important predictors both for demographic rates and detectability. We suggest that even in studies based on small plots, state- or size-specific detectability should be estimated at least at some point to avoid biased inference about the dynamics of the population sampled.
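The matrix-model illustration in point 5 can be reproduced along the following lines. A minimal sketch (Python), with hypothetical transition rates and detection probabilities: plants missed in a resurvey are wrongly scored as dead, so estimated survival entries are deflated by the detection probability of the destination state, and the inferred growth rate drops below the true lambda of about 1:

    import numpy as np

    # Hypothetical 2-state matrix (columns: vegetative, flowering); true lambda ~ 1
    A_true = np.array([[0.60, 0.50],
                       [0.30, 0.65]])

    p_detect = np.array([0.80, 1.00])  # hypothetical detection: vegetative < flowering

    # Missed survivors are scored as dead: rows (destination states) get deflated
    A_est = A_true * p_detect[:, None]

    lam = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    print(f"true lambda: {lam(A_true):.3f}, estimated lambda: {lam(A_est):.3f}")

With these hypothetical numbers the true population is roughly stable while the estimated matrix projects a decline of close to 8-9% per year, in the spirit of the example described above.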
Chen, Liang-Hsuan; Hsueh, Chan-Ching
2007-06-01
Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
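A simplified sketch (Python with SciPy) of the distance-criterion idea: responses are symmetric triangular fuzzy numbers (center, spread), the model predicts a fuzzy response from crisp inputs, and the fit minimizes the summed distance between estimated and observed responses evaluated at a few alpha-cuts. The representation, distance, and data here are illustrative simplifications, not the paper's exact formulation:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical crisp inputs and fuzzy outputs (center y_c, spread y_s)
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y_c = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
    y_s = np.array([0.5, 0.6, 0.8, 0.9, 1.1])

    alphas = np.array([0.0, 0.5, 1.0])   # alpha-cuts used as inputs to the fit

    def total_distance(theta):
        b0, b1, s0, s1 = theta
        c_hat = b0 + b1 * x
        s_hat = np.maximum(s0 + s1 * np.abs(x), 0.0)   # spreads kept nonnegative
        d = 0.0
        for a in alphas:                 # compare interval endpoints at each cut
            lo = (y_c - (1 - a) * y_s) - (c_hat - (1 - a) * s_hat)
            hi = (y_c + (1 - a) * y_s) - (c_hat + (1 - a) * s_hat)
            d += np.sum(np.abs(lo) + np.abs(hi))
        return d

    res = minimize(total_distance, x0=np.zeros(4), method="Nelder-Mead")
    print("fitted (b0, b1, s0, s1):", np.round(res.x, 3))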
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
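A minimal sketch (Python with scikit-learn) of the tree-based approach, with hypothetical covariates standing in for the kind of remotely sensed and infrastructure predictors such models typically use:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 500
    # Hypothetical area-level covariates, e.g. nightlights, land cover, road density
    X = rng.uniform(0, 1, size=(n, 3))
    density = 50 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(0, 5, n)  # synthetic truth

    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    scores = cross_val_score(rf, X, density, cv=5, scoring="r2")
    print("cross-validated R^2:", np.round(scores.mean(), 3))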
ERIC Educational Resources Information Center
Karkee, Thakur B.; Wright, Karen R.
2004-01-01
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
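One standard way to place item parameter estimates from a new calibration onto the old scale is a linear transformation of the ability metric; the mean/sigma method computes the slope and intercept from the difficulty estimates of common items. A minimal sketch (Python), noting that the abstract does not say which linking method the authors used:

    import numpy as np

    # Hypothetical difficulty estimates of common items under old and new calibrations
    b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
    b_new = np.array([-1.0, -0.3, 0.2, 0.9, 1.4])

    A = b_old.std(ddof=1) / b_new.std(ddof=1)   # slope of the linking transformation
    B = b_old.mean() - A * b_new.mean()         # intercept
    b_linked = A * b_new + B                    # new-model difficulties on the old scale
    print(f"A = {A:.3f}, B = {B:.3f}")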
Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model
NASA Technical Reports Server (NTRS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Camp, J. B.;
2016-01-01
This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al. Phys. Rev. Lett. 116, 061102 (2016)]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al. Phys. Rev. Lett. 116, 241102 (2016)], and we quote updated component masses of 35^{+5}_{-3} M⊙ and 30^{+3}_{-4} M⊙ (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with a primary spin estimate of less than 0.65 and a secondary spin estimate of less than 0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.
NASA Astrophysics Data System (ADS)
Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi
2014-01-01
The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm is proposed for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.
Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda
2018-02-01
Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Comparing with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare center for the elderly. It has the potential to help prevent future falls in the elderly. PMID:29389857
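The weighted-mean baseline that the proposed model is compared against is simple to state: the COP is the force-weighted average of the sensor coordinates. A minimal sketch (Python), with a hypothetical FSR layout and readings; the paper's individual-specific nonlinear model then replaces this with coefficients fitted per subject by least squares:

    import numpy as np

    # Hypothetical insole layout: (x, y) positions of 8 FSRs (mm) and readings (N)
    pos = np.array([[20, 40], [40, 40], [20, 90], [40, 90],
                    [20, 140], [40, 140], [25, 190], [40, 190]], dtype=float)
    force = np.array([5.0, 7.5, 12.0, 10.0, 8.0, 6.5, 15.0, 14.0])

    cop = (force[:, None] * pos).sum(axis=0) / force.sum()  # weighted mean approach
    print("COP estimate (x, y) in mm:", np.round(cop, 1))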
A quantile count model of water depth constraints on Cape Sable seaside sparrows
Cade, B.S.; Dong, Q.
2008-01-01
1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were ≥ 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts ≥ 1, conditions where the negative effect of water depth was readily apparent and fitted better with the quantile count model.
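A common device for extending linear quantile regression to counts, as the quantile count model does, is to jitter the counts with uniform noise so the response is continuous (Machado & Santos Silva 2005). A minimal sketch (Python with statsmodels), using synthetic data rather than the sparrow surveys, and a generic jittered fit rather than the authors' exact multiplicative estimator:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    depth = rng.uniform(0, 30, n)                     # water depth (cm), synthetic
    counts = rng.poisson(np.exp(1.0 - 0.05 * depth))  # synthetic sparrow counts

    z = counts + rng.uniform(0, 1, n)                 # jittered (continuous) response
    X = sm.add_constant(depth)
    fit = sm.QuantReg(z, X).fit(q=0.9)                # upper 90th percentile
    print(fit.params)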
A bicycle safety index for evaluating urban street facilities.
Asadi-Shekari, Zohreh; Moeinaddini, Mehdi; Zaly Shah, Muhammad
2015-01-01
The objectives of this research are to conceptualize a Bicycle Safety Index (BSI) that considers all parts of the street and to propose a universal guideline with microscale details. A point system method comparing existing safety facilities to a defined standard is proposed to estimate the BSI. Two streets, in Singapore and Malaysia, are chosen to examine this model. Most previous measures for evaluating street conditions for cyclists do not cover all parts of the street, including segments and intersections. Previous models also did not consider all safety indicators, particularly cycling facilities at the microlevel. This study introduces a new concept of a practical BSI that complements previous studies with practical, easy-to-follow, point-system-based outputs. This practical model can be used in different urban settings to estimate the level of safety for cycling and to suggest improvements based on the standards.
Model-assisted forest yield estimation with light detection and ranging
Jacob L. Strunk; Stephen E. Reutebuch; Hans-Erik Andersen; Peter J. Gould; Robert J. McGaughey
2012-01-01
Previous studies have demonstrated that light detection and ranging (LiDAR)-derived variables can be used to model forest yield variables, such as biomass, volume, and number of stems. However, the next step is underrepresented in the literature: estimation of forest yield with appropriate confidence intervals. It is of great importance that the procedures required for...
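The step this paper addresses can be sketched with the classical model-assisted (difference) estimator: predict the yield variable for every population element from the LiDAR model, then correct by the mean residual on the field sample, with a variance driven by the residuals. A minimal sketch (Python), with hypothetical numbers:

    import numpy as np

    rng = np.random.default_rng(5)
    N, n = 10000, 150
    pred_pop = rng.normal(200, 40, N)               # LiDAR model predictions, all cells
    idx = rng.choice(N, n, replace=False)           # field-sampled cells
    y_sample = pred_pop[idx] + rng.normal(5, 20, n) # field measurements (model slightly biased)

    resid = y_sample - pred_pop[idx]
    y_ma = pred_pop.mean() + resid.mean()           # model-assisted estimate
    se = resid.std(ddof=1) / np.sqrt(n)             # approximate standard error
    print(f"estimate: {y_ma:.1f} +/- {1.96 * se:.1f} (95% CI half-width)")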
NASA Astrophysics Data System (ADS)
Oakley, David O. S.; Fisher, Donald M.; Gardner, Thomas W.; Stewart, Mary Kate
2018-01-01
Marine terraces on growing fault-propagation folds provide valuable insight into the relationship between fold kinematics and uplift rates, providing a means to distinguish among otherwise non-unique kinematic model solutions. Here, we investigate this relationship at two locations in North Canterbury, New Zealand: the Kate anticline and Haumuri Bluff, at the northern end of the Hawkswood anticline. At both locations, we calculate uplift rates of previously dated marine terraces, using DGPS surveys to estimate terrace inner edge elevations. We then use Markov chain Monte Carlo methods to fit fault-propagation fold kinematic models to structural geologic data, and we incorporate marine terrace uplift into the models as an additional constraint. At Haumuri Bluff, we find that marine terraces, when restored to originally horizontal surfaces, can help to eliminate certain trishear models that would fit the geologic data alone. At Kate anticline, we compare uplift rates at different structural positions and find that the spatial pattern of uplift rates is more consistent with trishear than with a parallel-fault-propagation fold kink-band model. Finally, we use our model results to compute new estimates for fault slip rates (~1-2 m/ka at Kate anticline and ~1-4 m/ka at Haumuri Bluff) and ages of the folds (~1 Ma), which are consistent with previous estimates for the onset of folding in this region. These results provide revised estimates of the fault slip rates necessary to understand the seismic hazard posed by these faults, and demonstrate the value of incorporating marine terraces in inverse fold kinematic models as a means to distinguish among non-unique solutions.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which had not been achieved in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
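For the matrix-variate (two-way) special case, the alternating scheme can be sketched as a flip-flop loop that estimates each sparse precision matrix by graphical lasso while holding the other fixed. A minimal sketch (Python with scikit-learn); the regularization levels and data are illustrative, and the paper's algorithm and theory cover the general tensor case:

    import numpy as np
    from sklearn.covariance import graphical_lasso

    rng = np.random.default_rng(6)
    n, p, q = 50, 8, 6
    X = rng.normal(size=(n, p, q))   # n matrix-valued samples (identity truth)

    Theta = np.eye(p)                # row precision estimate
    Psi = np.eye(q)                  # column precision estimate
    for _ in range(5):               # alternate until (approximately) converged
        S_row = sum(Xi @ Psi @ Xi.T for Xi in X) / (n * q)
        _, Theta = graphical_lasso(S_row, alpha=0.1)
        S_col = sum(Xi.T @ Theta @ Xi for Xi in X) / (n * p)
        _, Psi = graphical_lasso(S_col, alpha=0.1)

    print("nonzeros in row precision:", int((np.abs(Theta) > 1e-6).sum()))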
"Hot Spots" of Land Atmosphere Coupling
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Dirmeyer, Paul A.; Guo, Zhi-Chang; Bonan, Gordon; Chan, Edmond; Cox, Peter; Gordon, T. C.; Kanae, Shinjiro; Kowalczyk, Eva; Lawrence, David
2004-01-01
Previous estimates of land-atmosphere interaction (the impact of soil moisture on precipitation) have been limited by a severe paucity of relevant observational data and by the model-dependence of the various computational estimates. To counter this limitation, a dozen climate modeling groups have recently performed the same highly-controlled numerical experiment as part of a coordinated intercomparison project. This allows, for the first time ever, a superior multi-model approach to the estimation of the regions on the globe where precipitation is affected by soil moisture anomalies during Northern Hemisphere summer. Such estimation has many potential benefits; it can contribute, for example, to seasonal rainfall prediction efforts.
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
Henriksson, Mikael; Corino, Valentina D A; Sornmo, Leif; Sandberg, Frida
2016-09-01
The atrioventricular (AV) node plays a central role in atrial fibrillation (AF), as it influences the conduction of impulses from the atria into the ventricles. In this paper, the statistical dual pathway AV node model, previously introduced by us, is modified so that it accounts for atrial impulse pathway switching even if the preceding impulse did not cause a ventricular activation. The proposed change in model structure implies that the number of model parameters subjected to maximum likelihood estimation is reduced from five to four. The model is evaluated using the data acquired in the RATe control in Atrial Fibrillation (RATAF) study, involving 24-h ECG recordings from 60 patients with permanent AF. When fitting the models to the RATAF database, similar results were obtained for both the present and the previous model, with a median fit of 86%. The results show that the parameter estimates characterizing refractory period prolongation exhibit considerably lower variation when using the present model, a finding that may be ascribed to the reduced number of model parameters. The new model maintains the capability to model RR intervals, while providing more reliable parameter estimates. The model parameters are expected to convey novel clinical information, and may be useful for predicting the effect of rate control drugs.
NASA Astrophysics Data System (ADS)
Mattern, Nancy Page Garland
Four causal models describing the relationships between attitudes and achievement have been proposed in the literature. The cross-effects, or reciprocal effects, model highlights the effects of prior attitudes on later achievement (over and above the effect of previous achievement) and of prior achievement on later attitudes (above the effect of previous attitudes). In the achievement predominant model, the effect of prior achievement on later attitudes is emphasized, controlling for the effect of previous attitudes. The effect of prior attitudes on later achievement, controlling for the effect of previous achievement, is emphasized in the attitudes predominant model. In the no cross-effects model there are no significant cross paths from prior attitudes to later achievement or from prior achievement to later attitudes. To determine the best-fitting model for rural seventh and eighth grade science girls and boys, the causal relationships over time between attitudes toward science and achievement in science were examined by gender using structural equation modeling. Data were collected in two waves, over one school year. A baseline measurement model was estimated in simultaneous two-group solutions and was a good fit to the data. Next, the four structural models were estimated and model fits compared. The three models nested within the structural cross-effects model showed significant decay of fit when compared to the fit of the cross-effects model. The cross-effects model was the best fit overall for middle school girls and boys. The cross-effects model was then tested for invariance across gender. There was significant decay of fit when model form, factor path loadings, and structural paths were constrained to be equal for girls and boys. Two structural paths, the path from prior achievement to later attitudes, and the path from prior attitudes to later attitudes, were the sources of gender non-invariance. Separate models were estimated for girls and boys, and the fits of nested models were compared. The no cross-effects model was the best-fitting model for rural middle school girls. The new no attitudes-path model was the best-fitting model for boys. Implications of these findings for teaching middle school students were discussed.
Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford
2015-06-01
A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to a highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
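The model-uncertainty step, choosing among candidate numbers of sources and identifiability conditions by posterior model probability, can be illustrated with the standard BIC approximation p(M_k | data) proportional to exp(-BIC_k / 2) under equal prior model probabilities; the study itself computes these probabilities within MCMC rather than by BIC. A minimal sketch (Python) with hypothetical BIC values:

    import numpy as np

    # Hypothetical BIC values for candidate models with 3-7 sources
    bic = np.array([4120.5, 4098.2, 4091.7, 4095.0, 4103.4])

    rel = np.exp(-0.5 * (bic - bic.min()))   # shift by the minimum for numerical stability
    post = rel / rel.sum()                   # approximate posterior model probabilities
    for k, pm in zip(range(3, 8), post):
        print(f"{k} sources: {pm:.3f}")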
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2014-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
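The redescending M-estimator with Tukey bisquare weights mentioned here can be sketched for the scalar-location case: iteratively reweighted averaging in which residuals beyond the cutoff get zero weight, so gross sensor failures stop influencing the estimate. A minimal sketch (Python); the flight-control implementation is multivariate and real-time, which this does not attempt:

    import numpy as np

    def tukey_location(x, c=4.685, n_iter=20):
        """Robust location via IRLS with Tukey bisquare weights."""
        m = np.median(x)
        scale = 1.4826 * np.median(np.abs(x - m))   # MAD-based scale estimate
        for _ in range(n_iter):
            u = (x - m) / (c * scale)
            w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # redescending weights
            m = np.sum(w * x) / np.sum(w)
        return m

    rng = np.random.default_rng(7)
    readings = rng.normal(10.0, 0.1, 1530)          # 1530 simulated fiber optic sensors
    readings[:230] = rng.normal(150.0, 5.0, 230)    # 230 simulated sensor failures
    print("robust estimate:", round(tukey_location(readings), 3),
          "  plain mean:", round(readings.mean(), 3))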
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2016-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at the estimation of parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change with the observation conditions and with the difficulty of predicting a person's position. Thus in this paper we formulate an adaptive parameter estimation using a general state space model. First we explain how to formulate human tracking in a general state space model and describe its components. Then, following previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally we sequentially estimate this parameter on a real dataset under several settings. Results showed that the sequential parameter estimation succeeded and was consistent with observation conditions such as occlusions.
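The Bhattacharyya coefficient at the core of the observation model measures the overlap between two normalized histograms (e.g., color histograms of a tracked person and a candidate region): BC(p, q) = sum_i sqrt(p_i * q_i), equal to 1 for identical histograms. A minimal sketch (Python) with hypothetical histograms:

    import numpy as np

    def bhattacharyya(p, q):
        """Overlap between two normalized histograms; 1 = identical, 0 = disjoint."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        p /= p.sum()
        q /= q.sum()
        return np.sum(np.sqrt(p * q))

    template = np.array([10, 40, 30, 15, 5])    # hypothetical color histogram
    candidate = np.array([12, 35, 28, 18, 7])
    print("BC =", round(bhattacharyya(template, candidate), 4))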
Yield estimation of sugarcane based on agrometeorological-spectral models
NASA Technical Reports Server (NTRS)
Rudorff, Bernardo Friedrich Theodor; Batista, Getulio Teixeira
1990-01-01
The objective of this work is to assess the performance of a yield estimation model for sugarcane (Saccharum officinarum). The model uses orbitally gathered spectral data along with yield estimated from an agrometeorological model. The test site comprises the sugarcane plantations of the Barra Grande Plant located in Lencois Paulista municipality in Sao Paulo State. Production data for four crop years were analyzed. Yield data observed in the first crop year (1983/84) were regressed against spectral and agrometeorological data of that same year. This provided the model to predict the yield for the following crop year, i.e., 1984/85. The models to predict the yields of subsequent years (up to 1987/88) were developed similarly, incorporating all previous years' data. The yield estimates obtained from these models explained 69, 54, and 50 percent of the yield variation in the 1984/85, 1985/86, and 1986/87 crop years, respectively. The accuracy of yield estimates based on spectral data only (vegetation index model) and on agrometeorological data only (agrometeorological model) was also investigated.
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) that links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of mean squared error. PMID:25279263
A historical reconstruction of ships' fuel consumption and emissions
NASA Astrophysics Data System (ADS)
Endresen, Øyvind; Sørgârd, Eirik; Behrens, Hanna Lee; Brett, Per Olaf; Isaksen, Ivar S. A.
2007-06-01
Shipping activity has increased considerably over the last century and currently represents a significant contribution to the global emissions of pollutants and greenhouse gases. Despite this, information about the historical development of fuel consumption and emissions is generally limited, with little data published pre-1950 and large deviations reported among estimates covering the last 3 decades. To better understand the historical development of ship emissions and the uncertainties associated with the estimates, we present fuel-based CO2 and SO2 emission inventories from 1925 up to 2002 and activity-based estimates from 1970 up to 2000. The global CO2 emissions from ships in 1925 are estimated at 229 Tg (CO2), growing to about 634 Tg (CO2) in 2002. The corresponding SO2 emissions are about 2.5 Tg (SO2) and 8.5 Tg (SO2), respectively. Our activity-based estimates of fuel consumption from 1970 to 2000, covering all oceangoing civil ships at or above 100 gross tonnage (GT), are lower than those of previous activity-based studies. We have applied a more detailed modeling approach, which includes variation in the demand for sea transport as well as operational and technological changes of the past. This study concludes that the main reason for the large deviations found in reported inventories is the assumed number of days at sea. Moreover, our modeling indicates that ship size and the degree of utilization of the fleet, combined with the shift to diesel engines, have been the major factors determining yearly fuel consumption. Interestingly, the model results from around 1973 onward suggest that fleet growth is not necessarily followed by increased fuel consumption, as technical and operational characteristics have changed. Results from this study indicate that reported sales over the last 3 decades are not significantly underreported, contrary to what previous simplified activity-based studies have suggested. The results confirm our previously reported modeling estimates for the year 2000. Previous activity-based studies have not considered ships below 100 GT (e.g., today some 1.3 million fishing vessels), and we suggest that this fleet could account for an important part of the total fuel consumption (~10%).
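A first-order activity-based estimate for a single ship multiplies installed power, engine load, hours at sea, and specific fuel oil consumption. The sketch below uses illustrative values only; the study varies these factors by ship type, size class, and year.

```python
def annual_fuel_tonnes(installed_kw, load_factor, days_at_sea,
                       sfoc_g_per_kwh=200.0):
    """First-order activity-based fuel estimate for one ship.

    fuel = engine power x engine load x hours at sea x specific fuel
    consumption. All inputs here are illustrative assumptions, not
    values from the study.
    """
    hours = days_at_sea * 24.0
    grams = installed_kw * load_factor * hours * sfoc_g_per_kwh
    return grams / 1e6  # grams -> tonnes

# Example: 10 000 kW ship, 70% engine load, 200 days at sea
print(annual_fuel_tonnes(10_000, 0.70, 200))  # ~6720 tonnes of fuel
```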
Full velocity difference model for a car-following theory.
Jiang, R; Wu, Q; Zhu, Z
2001-07-01
In this paper, we present a full velocity difference model for car-following theory, based on previous models in the literature. To our knowledge, the model improves on previous ones theoretically, because it accounts for more aspects of the car-following process; this point is verified by numerical simulation. We then investigate the properties of the model using both analytic and numerical methods, and find that it can describe the phase transition of traffic flow and estimate the evolution of traffic congestion.
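A minimal sketch of the full velocity difference model on a ring road, with acceleration a = kappa*(V(dx) - v) + lambda*dv, where dx is the headway and dv the velocity difference to the leader. The tanh-shaped optimal velocity function and all constants are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

L_ROAD = 400.0   # ring-road length (m), illustrative

def optimal_velocity(dx, v_max=15.0, d_safe=5.0):
    # Placeholder tanh-shaped optimal velocity function; zero at dx=0,
    # saturating at v_max for large headways.
    return 0.5 * v_max * (np.tanh(dx - d_safe) + np.tanh(d_safe))

def fvdm_step(x, v, kappa=0.41, lam=0.5, dt=0.1):
    """One Euler step of a = kappa*(V(dx) - v) + lambda*dv on a ring."""
    dx = np.roll(x, -1) - x
    dx[-1] += L_ROAD                  # last car follows the first one
    dv = np.roll(v, -1) - v
    a = kappa * (optimal_velocity(dx) - v) + lam * dv
    return x + v * dt, np.maximum(v + a * dt, 0.0)  # no reversing

n = 20
x = np.linspace(0.0, L_ROAD, n, endpoint=False)
v = np.full(n, 5.0)
v[0] = 0.0                            # local perturbation
for _ in range(2000):                 # 200 s of simulated time
    x, v = fvdm_step(x, v)
print(v.round(2))
```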
Hierarchical Bayes estimation of species richness and occupancy in spatially replicated surveys
Kery, M.; Royle, J. Andrew
2008-01-01
1. Species richness is the most widely used biodiversity metric, but cannot be observed directly as, typically, some species are overlooked. Imperfect detectability must therefore be accounted for to obtain unbiased species-richness estimates. When richness is assessed at multiple sites, two approaches can be used to estimate species richness: either estimating for each site separately, or pooling all samples. The first approach produces imprecise estimates, while the second loses site-specific information. 2. In contrast, a hierarchical Bayes (HB) multispecies site-occupancy model benefits from the combination of information across sites without losing site-specific information and also yields occupancy estimates for each species. The heart of the model is an estimate of the incompletely observed presence-absence matrix, a centrepiece of biogeography and monitoring studies. We illustrate the model using Swiss breeding bird survey data, and compare its estimates with the widely used jackknife species-richness estimator and raw species counts. 3. Two independent observers each conducted three surveys in 26 1-km² quadrats, and detected 27-56 (total 103) species. The average estimated proportion of species detected after three surveys was 0.87 under the HB model. Jackknife estimates were less precise (less repeatable between observers) than raw counts, but HB estimates were as repeatable as raw counts. The combination of information in the HB model thus resulted in species-richness estimates presumably at least as unbiased as previous approaches that correct for detectability, but without costs in precision relative to uncorrected, biased species counts. 4. Total species richness in the entire region sampled was estimated at 113.1 (CI 106-123); species detectability ranged from 0.08 to 0.99, illustrating very heterogeneous species detectability; and species occupancy was 0.06-0.96. Even after six surveys, absolute bias in observed occupancy was estimated at up to 0.40. 5. Synthesis and applications. The HB model for species-richness estimation combines information across sites and enjoys more precise, and presumably less biased, estimates than previous approaches. It also yields estimates of several measures of community size and composition. Covariates for occupancy and detectability can be included. We believe it has considerable potential for monitoring programmes as well as in biogeography and community ecology.
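The first-order jackknife estimator used as a comparison point is straightforward to compute from a survey-by-species detection matrix. A minimal sketch:

```python
import numpy as np

def jackknife1_richness(detections):
    """First-order jackknife species-richness estimate.

    detections: binary matrix (n_surveys x n_species), 1 = species
    detected in that survey. S_hat = S_obs + Q1 * (k - 1) / k, where Q1
    is the number of species seen in exactly one of the k surveys. This
    is the classical estimator the paper compares the HB model against.
    """
    d = np.asarray(detections)
    k = d.shape[0]
    seen = d.sum(axis=0)
    s_obs = int(np.sum(seen > 0))
    q1 = int(np.sum(seen == 1))
    return s_obs + q1 * (k - 1) / k

# Example: 3 surveys, 5 species, one species seen in only one survey
det = [[1, 1, 0, 0, 1],
       [1, 0, 1, 0, 0],
       [1, 1, 1, 0, 0]]
print(jackknife1_richness(det))  # 4 observed + 1 singleton * 2/3 = 4.67
```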
NASA Astrophysics Data System (ADS)
Tierney, Craig Cristy
Presented here are several investigations of ocean tides derived from TOPEX/POSEIDON (T/P) altimetry and numerical models. The purpose of these investigations is to study the short wavelength features in the T/P data and to preserve these wavelengths in global ocean tide models that are accurate in shallow and deep waters. With these new estimates, effects of the tides on loading, Earth's rotation, and tidal energetics are studied. To preserve tidal structure, tides have been estimated along the ground track of T/P by the harmonic and response methods using 4.5 years of data. Results show the two along-track (AT) estimates agree with each other and with other tide models for those components with minimal aliasing problems. Comparisons to global models show that there is tidal structure in the T/P data that is not preserved with current gridding methods. Error estimates suggest there is accurate information in the T/P data from shallow waters that can be used to improve tidal models. It has been shown by Ray and Mitchum (1996) that the first mode baroclinic tide can be separated from AT tide estimates by filtering. This method has been used to estimate the first mode semidiurnal baroclinic tides globally. Estimates for M2 show good correlation with known regions of baroclinic tide generation. Using gridded, filtered AT estimates, a lower bound on the energy contained in the M2 baroclinic tide is 50 PJ. Inspired by the structure found in the AT estimates, a gridding method is presented that preserves tidal structure in the T/P data. These estimates are assimilated into a nonlinear, finite difference, global barotropic tidal model. Results from the 8 major tidal constituents show the model performs equivalently to other models in the deep waters, and is significantly better in the shallow waters. Crossover variance is reduced from 14 cm to 10 cm in the shallow waters. Comparisons to Earth rotation show good agreement to results from VLBI data. Tidal energetics computed from the models show good agreement with previous results. PE/KE ratios and quality factors are more consistent in each frequency band than in previous results.
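The harmonic method amounts to a least-squares fit of sinusoids at known constituent frequencies. A minimal sketch for two constituents follows; the real along-track analysis must also contend with the roughly 10-day T/P repeat sampling, which aliases each constituent.

```python
import numpy as np

# Angular frequencies (rad/hour); M2 and S2 periods of 12.4206 h and
# 12.0 h are standard values.
OMEGA = {"M2": 2 * np.pi / 12.4206012, "S2": 2 * np.pi / 12.0}

def harmonic_fit(t, h, omegas):
    """Least-squares harmonic tidal analysis at fixed frequencies:
    h(t) ~ mean + sum_k [a_k cos(w_k t) + b_k sin(w_k t)].
    Returns (amplitude, phase in degrees) per constituent.
    """
    cols = [np.ones_like(t)]
    for w in omegas.values():
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    out = {}
    for i, name in enumerate(omegas):
        a, b = coef[1 + 2 * i], coef[2 + 2 * i]
        out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))
    return out

t = np.arange(0.0, 24 * 60, 1.0)            # hourly samples for 60 days
h = 0.8 * np.cos(OMEGA["M2"] * t - 0.5) + 0.3 * np.cos(OMEGA["S2"] * t)
print(harmonic_fit(t, h, OMEGA))            # recovers 0.8 m and 0.3 m
```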
NASA Astrophysics Data System (ADS)
Winter, A. J. B.
1980-10-01
Previous estimates of wave energy around the United Kingdom have been made by extrapolating measurements from a few sites to the whole UK seaboard. Here directional wave spectra are used from a numerical wave model developed by the Meteorological Office to make estimates which are verified where possible by observation. It is concluded that around 30 GW of power is available for capture by wave energy converters: when estimates of converter spacing and efficiency are considered an average of about 7 GW of electrical power could be supplied. This resource estimate is smaller than previous ones, though consistent with them when factors such as the directional properties of waves and the likelihood that converters will be sited near coasts are included.
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
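In generic form, the two proposed estimators combine the observed information matrix A with the cross-product of per-observation score vectors B. The sketch below is the textbook construction under the usual ML regularity assumptions, not the CDM-specific implementation in the paper.

```python
import numpy as np

def sandwich_covariance(scores, hessian):
    """Sandwich-type covariance A^{-1} B A^{-1} for an ML estimator.

    scores:  (n x p) per-observation score vectors evaluated at the
             parameter estimate.
    hessian: (p x p) Hessian of the total log-likelihood.
    Under correct specification, inv(A) alone (the inverse observed
    information) is also a consistent covariance estimator.
    """
    A = -np.asarray(hessian)                 # observed information matrix
    B = scores.T @ scores                    # cross-product of scores
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv

# Standard errors are the square roots of the diagonal:
# se = np.sqrt(np.diag(sandwich_covariance(scores, hessian)))
```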
Should I use that model? Assessing the transferability of ecological models to new settings
Analysts and scientists frequently apply existing models that estimate ecological endpoints or simulate ecological processes to settings where the models have not been used previously, and where data to parameterize and validate the model may be sparse. Prior to transferring an ...
Mathematical modeling improves EC50 estimations from classical dose-response curves.
Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar
2015-03-01
The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by a short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nm) is higher than the classically interpreted EC50 (46-191 nm). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
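The classical analysis the paper improves on is a sigmoid (Hill) fit to the dose-response data up to the maximal response. A minimal sketch with illustrative data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter sigmoid (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Illustrative contraction-force data (arbitrary units); in the study
# the response declines again at the highest adrenaline concentrations,
# and the classical analysis fits only the points up to the maximum.
conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
force = np.array([0.05, 0.12, 0.48, 0.85, 0.95])
popt, _ = curve_fit(hill, conc, force, p0=[0.0, 1.0, 1e-7, 1.0])
print(f"classical EC50 estimate: {popt[2]:.2e} M")
```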
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.
Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi
2008-08-01
137Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment by the atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum rate of 137Cs fallout occurring in 1963. 137Cs fallout, which mostly reaches the ground with precipitation, is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil, and its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs tracing technique has been used for the assessment of soil losses on both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs activity occurs in the surface horizon and decreases exponentially with depth. The model implied that the total 137Cs fallout was deposited on the earth surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used for the assessment of soil losses on undisturbed land. However, this simple profile-shape model does not consider the temporal changes in the 137Cs depth distribution of undisturbed soils caused by downward transport after deposition, and it therefore overestimates soil losses. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile, simplify the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simplified transport model, the soil losses corresponding to different 137Cs loss proportions of the reference inventory at the Kaixian site in the Three Gorges Region, China, were estimated with both models. The overestimation of soil loss by the previous simple profile-shape model increases markedly with the time elapsed between 1963 and the sampling year and with the 137Cs loss proportion of the reference inventory. For 137Cs loss proportions of 20-80% of the reference inventory at the Kaixian site in 2004, the annual soil loss depths estimated by the new simplified transport model are only 57.90-56.24% of the values estimated by the previous model.
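A sketch of the simple profile-shape relation described above, assuming the standard exponential depth distribution A(x) = A_ref * exp(-x/h0). The h0 value and inventories below are illustrative, and the paper's simplified transport model adds the downward migration that this form omits.

```python
import numpy as np

def erosion_depth_profile_shape(a_ref, a_sample, h0, year, t_ref=1963):
    """Annual erosion estimate from the simple profile-shape model.

    Assumes the reference 137Cs inventory decreases exponentially with
    cumulative mass depth x, A(x) = A_ref * exp(-x / h0), so a measured
    inventory loss fraction X = 1 - a_sample / a_ref corresponds to an
    eroded mass depth h0 * ln(1 / (1 - X)), spread uniformly over the
    years since 1963. h0 is the profile-shape factor (kg/m^2).
    """
    x_loss = 1.0 - a_sample / a_ref
    eroded = h0 * np.log(1.0 / (1.0 - x_loss))  # total mass depth, kg/m^2
    return eroded / (year - t_ref)              # annual rate, kg/m^2/yr

# Example: 40% inventory loss measured in 2004 with h0 = 4.0 kg/m^2
print(erosion_depth_profile_shape(2500.0, 1500.0, 4.0, 2004))
```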
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
Aerosol Direct Radiative Effects and Heating in the New Era of Active Satellite Observations
NASA Astrophysics Data System (ADS)
Matus, Alexander V.
Atmospheric aerosols impact the global energy budget by scattering and absorbing solar radiation. Despite their impacts, aerosols remain a significant source of uncertainty in our ability to predict future climate. Multi-sensor observations from the A-Train satellite constellation provide valuable observational constraints necessary to reduce uncertainties in model simulations of aerosol direct effects. This study will discuss recent efforts to quantify aerosol direct effects globally and regionally using CloudSat's radiative fluxes and heating rates product. Improving upon previous techniques, this approach leverages the capability of CloudSat and CALIPSO to retrieve vertically resolved estimates of cloud and aerosol properties critical for accurately evaluating the radiative impacts of aerosols. We estimate the global annual mean aerosol direct effect to be -1.9 +/- 0.6 W/m2, which is in better agreement with previously published estimates from global models than previous satellite-based estimates. Detailed comparisons against a fully coupled simulation of the Community Earth System Model, however, reveal that this agreement on the global annual mean masks large regional discrepancies between modeled and observed estimates of aerosol direct effects related to model biases in cloud cover. A low bias in stratocumulus cloud cover over the southeastern Pacific Ocean, for example, leads to an overestimate of the radiative effects of marine aerosols. Stratocumulus clouds over the southeastern Atlantic Ocean can enhance aerosol absorption by 50%, allowing aerosol layers to remain self-lofted in an area of subsidence. Aerosol heating is found to peak at 0.6 +/- 0.3 K/day at an altitude of 4 km in September, when biomass burning reaches a maximum. Finally, the contributions of observed aerosol components are evaluated to estimate the direct radiative forcing of anthropogenic aerosols. Aerosol forcing is computed using satellite-based radiative kernels that describe the sensitivity of shortwave fluxes to aerosol optical depth. The direct radiative forcing is estimated to be -0.21 W/m2, with the largest contributions from pollution that is partially offset by a positive forcing from smoke aerosols. The results from these analyses provide new benchmarks on the global radiative effects of aerosols and offer new insights for improving future assessments.
NASA Technical Reports Server (NTRS)
Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.
2017-01-01
We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contains information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates, due to the inclusion of more absorbing aerosol retrievals over brighter surfaces, not previously available for observationally based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA fall between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with TOA forcing results is found with GMI-MerraV3. We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.
Refining new-physics searches in B→Dτν with lattice QCD.
Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Meurice, Y; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran
2012-08-17
The semileptonic decay channel B→Dτν is sensitive to the presence of a scalar current, such as that mediated by a charged-Higgs boson. Recently, the BABAR experiment reported the first observation of the exclusive semileptonic decay B→Dτ⁻ν, finding an approximately 2σ disagreement with the standard-model prediction for the ratio R(D)=BR(B→Dτν)/BR(B→Dℓν), where ℓ = e,μ. We compute this ratio of branching fractions using hadronic form factors computed in unquenched lattice QCD and obtain R(D)=0.316(12)(7), where the errors are statistical and total systematic, respectively. This result is the first standard-model calculation of R(D) from ab initio full QCD. Its error is smaller than that of previous estimates, primarily due to the reduced uncertainty in the scalar form factor f₀(q²). Our determination of R(D) is approximately 1σ higher than previous estimates and, thus, reduces the tension with experiment. We also compute R(D) in models with electrically charged scalar exchange, such as the type-II two-Higgs-doublet model. Once again, our result is consistent with, but approximately 1σ higher than, previous estimates for phenomenologically relevant values of the scalar coupling in the type-II model. As a by-product of our calculation, we also present the standard-model prediction for the longitudinal-polarization ratio P_L(D)=0.325(4)(3).
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimates of the parameter; these good values are then averaged to give the final globally uniform posterior parameter. In comparison with existing methods, ASA parameter estimation shows superior performance: faster convergence and an enhanced signal-to-noise ratio.
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver L.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CERs) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation in this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details the assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple-spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth scientists seeking to leverage the capabilities of multiple spacecraft working in support of a common goal.
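As a flavor of the parametric side, here is a toy power-law CER with a Wright-style learning curve for multiple units; the coefficients and learning rate are placeholders, not TAT-C values.

```python
import math

def parametric_cost(mass_kg, a=1000.0, b=0.8, n_units=1, learning=0.9):
    """Toy parametric CER with a cumulative-average learning curve.

    First-unit cost follows a power-law CER, cost = a * mass^b, and
    additional identical units get cheaper along a learning curve:
    total cost = TFU * N^(1 + log2(learning_rate)). All constants are
    illustrative assumptions, not values from the TAT-C CR module.
    """
    tfu = a * mass_kg ** b                      # theoretical first unit
    exponent = 1.0 + math.log2(learning)        # Wright cumulative-average
    return tfu * n_units ** exponent            # total cost of n_units

# Example: 150 kg spacecraft, constellation of 24 units
print(parametric_cost(150.0, n_units=24))
```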
PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.
Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude
2015-04-27
Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-07-01
This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-12-01
This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
NASA Astrophysics Data System (ADS)
Ito, A.; Akimoto, H.
2006-12-01
We estimate the emissions of black carbon (BC) from open vegetation fires in southern hemisphere Africa from 1998 to 2005 using satellite information in conjunction with a biogeochemical model. Monthly burned areas at a 0.5-degree resolution are estimated from the Visible and Infrared Scanner (VIRS) fire count product and the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area data set, associated with the MODIS tree cover imagery, in grasslands and woodlands. The monthly fuel load distribution is derived from a 0.5-degree terrestrial carbon cycle model in conjunction with satellite data. The monthly maps of combustion factor and emission factor are estimated using empirical models that predict the effects of fuel conditions on these factors in grasslands and woodlands. Our annual averaged BC emitted per unit area burned is 0.17 g BC m-2, which is consistent with the product of fuel consumption and emission factor typically measured in southern Africa. The BC emissions from open vegetation burning in southern Africa ranged from 0.26 Tg BC yr-1 for 2002 to 0.42 Tg BC yr-1 for 1998. The peak in BC emissions coincides with that from a previous top-down estimate using the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data. The sum of monthly emissions during the burning season in 2000 is in good agreement between our estimate (0.38 Tg) and a previous estimate constrained by numerical model and measurements (0.47 Tg).
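The bottom-up emission calculation has the classic Seiler and Crutzen form E = A x B x CF x EF. A minimal sketch with purely illustrative numbers; the study evaluates each factor monthly on a 0.5-degree grid.

```python
def bc_emission_tg(burned_area_m2, fuel_load_kg_m2, combustion_factor,
                   ef_g_per_kg):
    """Bottom-up fire emission, E = A * B * CF * EF (Seiler-Crutzen form).

    burned_area_m2:     area burned
    fuel_load_kg_m2:    dry fuel per unit area
    combustion_factor:  fraction of fuel actually burned
    ef_g_per_kg:        grams of BC emitted per kg of dry matter burned
    All example values below are illustrative, not from the study.
    """
    grams = burned_area_m2 * fuel_load_kg_m2 * combustion_factor * ef_g_per_kg
    return grams / 1e12  # grams -> Tg

# Example: 1.5e12 m^2 burned, 0.3 kg/m^2 fuel, CF 0.85, EF 0.48 g BC/kg
print(bc_emission_tg(1.5e12, 0.3, 0.85, 0.48))  # ~0.18 Tg BC
```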
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moerk, Anna-Karin, E-mail: anna-karin.mork@ki.s; Jonsson, Fredrik; Pharsight, a Certara company, St. Louis, MO
2009-11-01
The aim of this study was to derive improved estimates of the population variability and uncertainty of physiologically based pharmacokinetic (PBPK) model parameters, especially those related to the washin-washout behavior of polar volatile substances. This was done by optimizing a previously published washin-washout PBPK model for acetone in a Bayesian framework using Markov chain Monte Carlo simulation. The sensitivity of the model parameters was investigated by creating four different prior sets, in which the uncertainty surrounding the population variability of the physiological model parameters was given values corresponding to coefficients of variation of 1%, 25%, 50%, and 100%, respectively. The PBPK model was calibrated to toxicokinetic data from two previous studies in which 18 volunteers were exposed to 250-550 ppm of acetone at various levels of workload. The updated PBPK model provided a good description of the concentrations in arterial, venous, and exhaled air. The precision of most of the model parameter estimates was improved. New information was particularly gained on the population distribution of the parameters governing the washin-washout effect. The results presented herein provide a good starting point for estimating the target dose of acetone in the working and general populations for risk assessment purposes.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), 10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), 10.1117/1.JBO.19.7.077002] studies to accurately extract the optical properties and top-layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure that addresses two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction in computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. The strategies used in the proposed procedure could benefit both the modeling and experimental data processing of DRS as well as related approaches such as near-infrared spectroscopy.
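A sketch of the two-step idea: a nearest lookup-table entry supplies the initial guess, and an iterative fit refines it. The table and forward model below are assumed stand-ins for the paper's model of the two-layered tissue, not its actual implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical lookup table: precomputed reflectance spectra
# (n_entries x n_wavelengths) for a grid of parameter vectors
# (n_entries x n_params); forward(params) interpolates the table.

def initial_estimate(measured, table_spectra, table_params):
    """Step 1: nearest lookup-table entry as the initial guess."""
    err = np.sum((table_spectra - measured) ** 2, axis=1)
    return table_params[np.argmin(err)]

def fit_spectrum(measured, table_spectra, table_params, forward):
    """Step 2: iterative curve fitting started from the table estimate.

    Starting near the global minimum is what reduces both the iteration
    count and the risk of converging to a local minimum.
    """
    p0 = initial_estimate(measured, table_spectra, table_params)
    res = least_squares(lambda p: forward(p) - measured, p0)
    return res.x
```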
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and to nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations suggested previously. Approximations to the density of EE estimators are discussed in the multivariate case. Numerical complications associated with nonlinear least squares, such as nonexistence and/or multiplicity of solutions, are illustrated as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near-exact EE density approximation.
Streamlining the Design Tradespace for Earth Imaging Constellations
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Hughes, Steven P.; Le Moigne, Jacqueline J.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CERs) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation in this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details the assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple-spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth scientists seeking to leverage the capabilities of multiple spacecraft working in support of a common goal.
A projection of lesser prairie chicken (Tympanuchus pallidicinctus) populations range-wide
Cummings, Jonathan W.; Converse, Sarah J.; Moore, Clinton T.; Smith, David R.; Nichols, Clay T.; Allan, Nathan L.; O'Meilia, Chris M.
2017-08-09
We built a population viability analysis (PVA) model to predict future population status of the lesser prairie-chicken (Tympanuchus pallidicinctus, LEPC) in four ecoregions across the species’ range. The model results will be used in the U.S. Fish and Wildlife Service's (FWS) Species Status Assessment (SSA) for the LEPC. Our stochastic projection model combined demographic rate estimates from previously published literature with demographic rate estimates that integrate the influence of climate conditions. This LEPC PVA projects declining populations with estimated population growth rates well below 1 in each ecoregion regardless of habitat or climate change. These results are consistent with estimates of LEPC population growth rates derived from other demographic process models. Although the absolute magnitude of the decline is unlikely to be as low as modeling tools indicate, several different lines of evidence suggest LEPC populations are declining.
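For intuition, here is a count-based stochastic projection of the kind that underlies a PVA. The paper's model is stage-structured and climate-linked, so the lognormal growth-rate draw and all numbers here are simplifying assumptions for illustration only.

```python
import numpy as np

def project_population(n0, log_lambda_mean, log_lambda_sd, years,
                       n_sims=10_000, quasi_extinction=50, seed=1):
    """Stochastic count-based projection, N_{t+1} = N_t * lambda_t.

    lambda_t is drawn lognormally from a mean growth rate and its
    environmental variance; returns trajectories and the probability of
    dropping below a quasi-extinction threshold.
    """
    rng = np.random.default_rng(seed)
    lam = rng.lognormal(log_lambda_mean, log_lambda_sd,
                        size=(n_sims, years))
    traj = n0 * np.cumprod(lam, axis=1)
    p_quasi_ext = np.mean(traj.min(axis=1) < quasi_extinction)
    return traj, p_quasi_ext

# Example: mean growth rate well below 1 (lambda ~ 0.9), as projected
_, p = project_population(5000, np.log(0.9), 0.15, years=30)
print(f"P(quasi-extinction within 30 yr) = {p:.2f}")
```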
A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers
Yochum, Steven E.
2000-01-01
The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load data base for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that has been used historically by the program. The revised method estimates load using the fifth, or center, year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the data base. The last year of estimation shows the most change from the previous year's estimate, and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical data base have been noted, but the load estimates do not typically change drastically. The data base resulting from the application of this revised methodology is populated by annual and monthly load estimates that are known with greater certainty than in the previous load data base.
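The seven-parameter model referred to is conventionally written as a regression of log load on log discharge, time, and seasonal terms. A sketch of the design matrix follows; the centring details are an assumption of this illustration, not necessarily the program's exact implementation.

```python
import numpy as np

def seven_param_design(q, t_dec):
    """Design matrix of the seven-parameter load model:
    ln(load) ~ b0 + b1*lnQ + b2*lnQ^2 + b3*T + b4*T^2
               + b5*sin(2*pi*T) + b6*cos(2*pi*T),
    with q daily discharge, t_dec decimal time in years, and lnQ and T
    centred on the estimation window (an assumption of this sketch).
    """
    lnq = np.log(q) - np.mean(np.log(q))
    t = t_dec - np.mean(t_dec)
    return np.column_stack([np.ones_like(t), lnq, lnq**2, t, t**2,
                            np.sin(2 * np.pi * t_dec),
                            np.cos(2 * np.pi * t_dec)])

# Fit on a 9-year window; keep predictions for the centre (5th) year:
# beta, *_ = np.linalg.lstsq(seven_param_design(q, t_dec),
#                            np.log(load), rcond=None)
```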
Chirico, Peter G.; Malpeli, Katherine C.; Moran, Thomas W.
2013-01-01
This study is a reconnaissance assessment of the alluvial gold deposits of the North Takhar Area of Interest (AOI) in Takhar Province, Afghanistan. Soviet and Afghan geologists collected data and calculated the gold deposit reserves in Takhar Province in the 1970s, prior to the development of satellite-based remote-sensing platforms and new methods of geomorphic mapping. The purpose of this study was to integrate new mapping techniques with previously collected borehole sampling and concentration sampling data and geomorphologic interpretations to reassess the alluvial gold placer deposits in the North Takhar AOI. Through a combination of historical borehole and cross-section data and digital terrain modeling, the Samti, Nooraba-Khasar-Anjir, and Kocha River placer deposits were reassessed. Resource estimates were calculated to be 20,927 kilograms (kg) for Samti, 7,626 kg for Nooraba-Khasar-Anjir, 160 kg for the mouth of the Kocha, 1,047 kg for the lower Kocha, 113 kg for the middle Kocha, and 168 kg for the upper Kocha. Previous resource estimates conducted by the Soviets for the Samti and Nooraba-Khasar-Anjir deposits estimated 30,062 kg and 802 kg of gold, respectively. This difference between the new estimates and previous estimates results from the higher resolution geomorphic model and the interpretation of areas outside of the initial work zone studied by Soviet and Afghan geologists.
Properties of added variable plots in Cox's regression model.
Lindkvist, M
2000-03-01
The added variable plot is useful for examining the effect of a covariate in regression models. The plot provides information regarding the inclusion of a covariate, and is useful in identifying influential observations on the parameter estimates. Hall et al. (1996) proposed a plot for Cox's proportional hazards model derived by regarding the Cox model as a generalized linear model. This paper proves and discusses properties of this plot. These properties make the plot a valuable tool in model evaluation. Quantities considered include parameter estimates, residuals, leverage, case influence measures and correspondence to previously proposed residuals and diagnostics.
NASA Astrophysics Data System (ADS)
Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed
2017-01-01
This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm based on the relation between the times of arrival (TOAs) and the distances, for two cases in which the transmitter-receiver vertical distance is either known or unknown. We propose two estimators, namely the maximum likelihood estimator and an estimator based on the method of moments. To provide an evaluation baseline for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimators. We show that the proposed model and estimators yield superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison with existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents a reduction of about 20 dB in the localization error bound compared with the previous model for some practical scenarios.
Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images
NASA Astrophysics Data System (ADS)
Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory
2017-03-01
Cardiothoracic ratio (CTR) is a widely used radiographic index for assessing heart size on chest X-rays (CXRs). Recent studies have suggested that two-dimensional CTR indices might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm for automatically estimating CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground truth heart markings. We used two datasets: a publicly available dataset with 247 images, and a clinical dataset with 167 studies from Geisinger Health System. The models of the lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to the patient CXR. The heart region is then estimated by applying the Harris operator to the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. The method can have important clinical implications, as it provides a fast and accurate estimate of cardiothoracic indices.
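Once lung and heart masks are available, the indices reduce to simple extent and area ratios. A sketch assuming binary masks in a common image frame, produced by the segmentation steps of the pipeline:

```python
import numpy as np

def cardiothoracic_indices(lung_mask, heart_mask):
    """Compute 1-D and area-based cardiothoracic indices from masks.

    CTR1D: maximal heart width / maximal thoracic width (the classical
    ratio); CTAR: heart area / thoracic area. Both masks are 2-D boolean
    arrays in the same CXR coordinate frame. The exact definition of the
    paper's CTR2D index is not reproduced here.
    """
    def width(mask):
        cols = np.where(mask.any(axis=0))[0]   # occupied image columns
        return cols.max() - cols.min() + 1
    ctr_1d = width(heart_mask) / width(lung_mask)
    ctar = heart_mask.sum() / lung_mask.sum()
    return ctr_1d, ctar
```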
Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method, and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously, and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
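For reference, the univariate Paule-Mandel estimator solves a single moment equation for the between-study variance. A minimal sketch follows; the paper's contribution is the network generalization with random inconsistency effects, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def paule_mandel_tau2(y, v, tol=1e-10):
    """Univariate Paule-Mandel estimate of between-study variance tau^2.

    Solves Q(tau^2) = sum_i w_i (y_i - mu_hat)^2 = k - 1, with
    w_i = 1 / (v_i + tau^2) and mu_hat the weighted mean. Q is
    decreasing in tau^2, so the root is unique when it exists.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def q_minus_df(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2) - (k - 1)

    if q_minus_df(0.0) <= 0.0:     # heterogeneity at or below chance level
        return 0.0
    hi = 1.0
    while q_minus_df(hi) > 0.0:    # bracket the root
        hi *= 10.0
    return brentq(q_minus_df, 0.0, hi, xtol=tol)

# Example: log odds ratios and their within-study variances, 5 studies
print(paule_mandel_tau2([0.2, 0.5, -0.1, 0.8, 0.3],
                        [0.04, 0.09, 0.05, 0.12, 0.07]))
```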
Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles
2016-01-01
Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether their estimates converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt, CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt, CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalence between the present survey (undertaken in 2014) and the previous one (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
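The UQM point estimate follows from inverting the randomization: the observed 'yes' rate mixes the sensitive and unrelated questions in known proportions. A sketch with illustrative design constants, not those of the survey:

```python
def uqm_prevalence(n_yes, n_total, p_sensitive, pi_unrelated):
    """Unrelated Question Model point estimate and standard error.

    Each respondent answers the sensitive question with probability
    p_sensitive, otherwise an unrelated question with known 'yes'
    probability pi_unrelated. From the observed 'yes' rate lam,
        pi_hat = (lam - (1 - p) * pi_u) / p,
    with SE from the binomial variance of lam.
    """
    lam = n_yes / n_total
    p, pi_u = p_sensitive, pi_unrelated
    pi_hat = (lam - (1.0 - p) * pi_u) / p
    se = (lam * (1.0 - lam) / n_total) ** 0.5 / p
    return pi_hat, se

# Example: 180 'yes' among 1000 athletes, p = 2/3, unrelated rate 0.25
print(uqm_prevalence(180, 1000, 2/3, 0.25))  # ~0.145 prevalence
```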
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
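The traditional method being criticized fits a mono-exponential to early post-injection samples and back-extrapolates to time zero. A minimal sketch, with the fitting window an assumption of this illustration:

```python
import numpy as np

def plasma_volume_backextrap(t_min, conc_mg_per_l, dose_mg,
                             window=(2.0, 5.0)):
    """Traditional mono-exponential back-extrapolation for ICG dilution.

    Fits ln C = ln C0 - k*t over an early post-injection window and
    takes PV = dose / C0. The paper argues that incomplete mixing at
    early times biases C0 upward (and hence PV downward), and replaces
    the mono-exponential with a physiological kinetic model.
    """
    m = (t_min >= window[0]) & (t_min <= window[1])
    slope, intercept = np.polyfit(t_min[m], np.log(conc_mg_per_l[m]), 1)
    c0 = np.exp(intercept)        # back-extrapolated concentration at t=0
    return dose_mg / c0           # plasma volume in litres

t = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])   # minutes
c = 8.0 * np.exp(-0.25 * t)                          # synthetic decay
print(plasma_volume_backextrap(t, c, dose_mg=25.0))  # ~3.1 L
```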
Number of discernible object colors is a conundrum.
Masaoka, Kenichiro; Berns, Roy S; Fairchild, Mark D; Moghareh Abed, Farhad
2013-02-01
Widely varying estimates of the number of discernible object colors have been made by using various methods over the past 100 years. To clarify the source of the discrepancies in the previous, inconsistent estimates, the number of discernible object colors is estimated over a wide range of color temperatures and illuminance levels using several chromatic adaptation models, color spaces, and color difference limens. Efficient and accurate models are used to compute optimal-color solids and count the number of discernible colors. A comprehensive simulation reveals limitations in the ability of current color appearance models to estimate the number of discernible colors even if the color solid is smaller than the optimal-color solid. The estimates depend on the color appearance model, color space, and color difference limen used. The fundamental problem lies in the von Kries-type chromatic adaptation transforms, which have an unknown effect on the ranking of the number of discernible colors at different color temperatures.
Modelling Continuing Load at Disaggregated Levels
ERIC Educational Resources Information Center
Seidel, Ewa
2014-01-01
The current methodology of estimating load in the following year at Flinders University has achieved reasonable accuracy in the previous capped funding environment, particularly at the university level, due largely to our university having stable intakes and student profiles. While historically within reasonable limits, variation in estimates at…
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
Method for estimating critical molar volume of materials is faster and simpler than previous procedures. Formula sums no more than 18 different contributions from components of chemical structure of material, and is as accurate (within 3 percent) as older, more complicated models. Method should expedite many thermodynamic design calculations.
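The group-contribution idea can be illustrated in a few lines; the increment values below are placeholders, not Fedors' published contributions:

```python
# Hedged illustration of a group-contribution estimate: the critical molar
# volume is a sum of structural-group increments (values hypothetical).
GROUP_VC = {"CH3": 55.0, "CH2": 44.0, "OH": 26.0}   # cm^3/mol, placeholders

def critical_volume(groups):
    """Sum group increments for a molecule given as {group: count}."""
    return sum(GROUP_VC[g] * n for g, n in groups.items())

# Example: ethanol = CH3-CH2-OH
print(critical_volume({"CH3": 1, "CH2": 1, "OH": 1}), "cm^3/mol (illustrative)")
```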
Just, Allan C; Wright, Robert O; Schwartz, Joel; Coull, Brent A; Baccarelli, Andrea A; Tellez-Rojo, Martha María; Moody, Emily; Wang, Yujie; Lyapustin, Alexei; Kloog, Itai
2015-07-21
Recent advances in estimating fine particle (PM2.5) ambient concentrations use daily satellite measurements of aerosol optical depth (AOD) for spatially and temporally resolved exposure estimates. Mexico City is a dense megacity that differs from other previously modeled regions in several ways: it has bright land surfaces, a distinctive climatological cycle, and an elevated semi-enclosed air basin with a unique planetary boundary layer dynamic. We extend our previous satellite methodology to the Mexico City area, a region with higher PM2.5 than most U.S. and European urban areas. Using a novel 1 km resolution AOD product from the MODIS instrument, we constructed daily predictions across the greater Mexico City area for 2004-2014. We calibrated the association of AOD to PM2.5 daily using municipal ground monitors, land use, and meteorological features. Predictions used spatial and temporal smoothing to estimate AOD when satellite data were missing. Our model performed well, resulting in an out-of-sample cross-validation R² of 0.724. Cross-validated root-mean-squared prediction error (RMSPE) of the model was 5.55 μg/m³. This novel model reconstructs long- and short-term spatially resolved exposure to PM2.5 for epidemiological studies in Mexico City.
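A simplified stand-in for the day-specific AOD-to-PM2.5 calibration is sketched below; the paper uses a mixed-effects model with land-use and meteorological terms, whereas this toy fits per-day AOD slopes to synthetic collocations:

```python
# Day-varying AOD->PM2.5 calibration sketch with cross-validated R^2
# (synthetic data; a plain regression stands in for the mixed model).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_days, n_obs = 50, 2000
day = rng.integers(0, n_days, n_obs)           # day index of each collocation
true_slope = 30 + 10 * rng.random(n_days)      # day-varying AOD-PM2.5 slope
aod = rng.gamma(2.0, 0.2, n_obs)               # synthetic 1 km AOD
pm25 = 15 + true_slope[day] * aod + rng.normal(0, 5, n_obs)

# AOD-by-day interaction columns approximate "calibrated daily" slopes.
X = np.column_stack([aod] + [aod * (day == d) for d in range(n_days)])
r2 = cross_val_score(LinearRegression(), X, pm25, cv=10, scoring="r2")
print("10-fold CV R^2:", r2.mean().round(3))
```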
MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...
A Coalescent-Based Estimator of Admixture From DNA Sequences
Wang, Jinliang
2006-01-01
A variety of estimators have been developed to use genetic marker information in inferring the admixture proportions (parental contributions) of a hybrid population. The majority of these estimators used allele frequency data, ignored molecular information that is available in markers such as microsatellites and DNA sequences, and assumed that mutations are absent since the admixture event. As a result, these estimators may fail to deliver an estimate or give rather poor estimates when admixture is ancient and thus mutations are not negligible. A previous molecular estimator based its inference of admixture proportions on the average coalescent times between pairs of genes taken from within and between populations. In this article I propose an estimator that considers the entire genealogy of all of the sampled genes and infers admixture proportions from the numbers of segregating sites in DNA sequence samples. By considering the genealogy of all sequences rather than pairs of sequences, this new estimator also allows the joint estimation of other interesting parameters in the admixture model, such as admixture time, divergence time, population size, and mutation rate. Comparative analyses of simulated data indicate that the new coalescent estimator generally yields better estimates of admixture proportions than the previous molecular estimator, especially when the parental populations are not highly differentiated. It also gives reasonably accurate estimates of other admixture parameters. A human mtDNA sequence data set was analyzed to demonstrate the method, and the analysis results are discussed and compared with those from previous studies. PMID:16624918
NASA Astrophysics Data System (ADS)
Umehara, Hiroaki; Okada, Masato; Naruse, Yasushi
2018-03-01
The estimation of angular time series data is a widespread issue relating to various situations involving rotational motion and moving objects. There are two kinds of problem settings: the estimation of wrapped angles, which are principal values in a circular coordinate system (e.g., the direction of an object), and the estimation of unwrapped angles in an unbounded coordinate system, such as for the positioning and tracking of moving objects measured by the signal-wave phase. Wrapped angles have been estimated in previous studies by sequential Bayesian filtering; however, the hyperparameters that control the properties of the estimation model were given a priori. The present study establishes a procedure for estimating the hyperparameters from the observed angle data alone, entirely within the framework of Bayesian inference, via maximum likelihood estimation. Moreover, the filter model is modified to estimate unwrapped angles. We prove that, in the absence of noise, our model reduces to the existing algorithm of Itoh's unwrapping transform, and we confirm numerically that it extends Itoh's unwrapping transform to the case with noise.
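Itoh's unwrapping transform, the noise-free limit referred to above, is simple enough to state directly: wrap successive phase differences into the principal interval and integrate them back up.

```python
# Itoh's unwrapping transform on a noise-free synthetic angle series.
import numpy as np

def itoh_unwrap(wrapped):
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi     # principal-value differences
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])

t = np.linspace(0, 4 * np.pi, 200)
true_angle = 1.7 * t                          # unbounded (unwrapped) angle
wrapped = (true_angle + np.pi) % (2 * np.pi) - np.pi
print(np.allclose(itoh_unwrap(wrapped), true_angle))   # True without noise
```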
Model based estimation of image depth and displacement
NASA Technical Reports Server (NTRS)
Damour, Kevin T.
1992-01-01
Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, there is a clear need to overcome image degradations such as random image-capture noise, motion, and quantization effects. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.
Emissions from ships in the northwestern United States.
Corbett, James J
2002-03-15
Recent inventory efforts have focused on developing nonroad inventories for emissions modeling and policy insights. Characterizing these inventories geographically and explicitly treating the uncertainties that result from limited emissions testing, incomplete activity and usage data, and other important input parameters currently pose the largest methodological challenges. This paper presents a commercial marine vessel (CMV) emissions inventory for Washington and Oregon using detailed statistics regarding fuel consumption, vessel movements, and cargo volumes for the Columbia and Snake River systems. The inventory estimates emissions for oxides of nitrogen (NOx), particulate matter (PM), and oxides of sulfur (SOx). This analysis estimates that annual NOx emissions from marine transportation in the Columbia and Snake River systems in Washington and Oregon equal 6900 t of NOx (as NO2) per year, 2.6 times greater than previous NOx inventories for this region. Statewide CMV NOx emissions are estimated to be 9800 t of NOx per year. By relying on a "bottom-up" fuel consumption model that includes vessel characteristics and transit information, the river system inventory may be more accurate than previous estimates. This inventory provides modelers with bounded parametric inputs for sensitivity analysis in pollution modeling. The ability to parametrically model the uncertainty in commercial marine vessel inventories also will help policy-makers determine whether better policy decisions can be enabled through further vessel testing and improved inventory resolution.
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
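The core mixing computation can be illustrated with a minimal two-tracer, three-source example; this least-squares sketch is a simplification, not the paper's Bayesian error structures:

```python
# Nonnegative least-squares mixing sketch: estimate source proportions from
# two stable-isotope tracers (synthetic signatures).
import numpy as np
from scipy.optimize import nnls

# Rows: d13C, d15N, and a row of ones enforcing proportions that sum to 1.
sources = np.array([[-24.0, -18.0, -12.0],
                    [  4.0,   8.0,  12.0],
                    [  1.0,   1.0,   1.0]])
mixture = np.array([-16.2, 9.2, 1.0])     # consumer signature (synthetic)

p, _ = nnls(sources, mixture)             # recovers [0.2, 0.3, 0.5] here
print(p.round(2))
```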
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
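A minimal version of the drift-tracking idea can be sketched as follows, with assumed notation: the drift is a random-walk state and each degradation increment is a noisy linear measurement of it.

```python
# Scalar Kalman filter for the adaptive drift coefficient of Brownian-motion
# degradation; observations are increments dx ~ mu*dt + noise (illustrative).
import numpy as np

def kalman_drift(increments, dt, q=1e-4, r=0.01, mu0=0.0, p0=1.0):
    mu, p, est = mu0, p0, []
    for dx in increments:
        mu_pred, p_pred = mu, p + q              # predict: drift random walk
        k = p_pred * dt / (p_pred * dt**2 + r)   # gain for obs dx = mu*dt + v
        mu = mu_pred + k * (dx - mu_pred * dt)   # update with residual
        p = (1 - k * dt) * p_pred
        est.append(mu)
    return np.array(est)

rng = np.random.default_rng(2)
dx = 0.05 * 1.0 + 0.1 * rng.standard_normal(200)   # true drift mu = 0.05
print("final drift estimate:", kalman_drift(dx, dt=1.0)[-1].round(3))
```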
Hisano, Mizue; Connolly, Sean R; Robbins, William D
2011-01-01
Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.
Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian
2005-09-15
The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.
Decision-Making Accuracy of CBM Progress-Monitoring Data
ERIC Educational Resources Information Center
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-05-18
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
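The depth inference can be illustrated with a simple exponential-attenuation sketch; the coefficient and calibration values below are assumptions for illustration, not the paper's fitted model:

```python
# Depth from count-rate attenuation: C = C0 * exp(-mu * d)  =>
# d = ln(C0 / C) / mu (all calibration values illustrative).
import math

mu_sand = 0.20       # 1/cm, assumed effective attenuation coefficient
c0 = 100.0           # cps at zero burial depth (assumed calibration)
c_measured = 14.0    # cps observed with the CZT detector

depth_cm = math.log(c0 / c_measured) / mu_sand
print(f"estimated burial depth ~ {depth_cm:.1f} cm")
```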
Benefits of detailed models of muscle activation and mechanics
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler, models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.
Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica
2015-01-01
Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
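A minimal particle swarm optimizer of a parameter-error function, in the spirit of the estimation scheme above, is sketched below; the quadratic objective is a toy stand-in for the model-versus-data error of the mAb process:

```python
# Generic PSO minimizing an error function over model parameters.
import numpy as np

def pso(err, dim, n=30, iters=200, lo=-5, hi=5, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([err(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([err(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

target = np.array([0.5, -1.2, 2.0])              # stand-in "true" parameters
best, fval = pso(lambda p: np.sum((p - target) ** 2), dim=3)
print(best.round(3), round(fval, 6))
```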
Andersen, S T; Erichsen, A C; Mark, O; Albrechtsen, H-J
2013-12-01
Quantitative microbial risk assessments (QMRAs) often lack data on water quality, which leads to great uncertainty because of the many assumptions required. The quantity of waste water contamination was estimated and included in a QMRA of an extreme rain event leading to combined sewer overflow (CSO) to bathing water where an ironman competition later took place. Two dynamic models, (1) a drainage model and (2) a 3D hydrodynamic model, estimated the dilution of waste water from source to recipient. The drainage model estimated that 2.6% of waste water was left in the system before CSO, and the hydrodynamic model estimated that 4.8% of the recipient bathing water came from the CSO, so on average the bathing water during the ironman competition contained 0.13% waste water (2.6% × 4.8% ≈ 0.13%). The total estimated incidence rate from a conservative estimate of the pathogenic load of five reference pathogens was 42%, comparable to the 55% found in an epidemiological study of the case. The combination of applying dynamic models and exposure data led to an improved QMRA that included an estimate of the dilution factor. This approach has not been described previously.
Estimating current modal splits.
DOT National Transportation Integrated Search
2005-11-01
This project is the second part in a two part modeling effort. In previous work*, mode : choice was modeled by examining characteristics of individuals and the trips they make. : A study of the choices of individuals is necessary for a fundamental un...
Inverse modeling of Asian ²²²Rn flux using surface air ²²²Rn concentration.
Hirao, Shigekazu; Yamazawa, Hiromi; Moriizumi, Jun
2010-11-01
When used with an atmospheric transport model, the ²²²Rn flux distribution estimated in our previous study using soil transport theory caused underestimation of atmospheric ²²²Rn concentrations as compared with measurements in East Asia. In this study, we applied a Bayesian synthesis inverse method to produce revised estimates of the annual ²²²Rn flux density in Asia by using atmospheric ²²²Rn concentrations measured at seven sites in East Asia. The Bayesian synthesis inverse method requires a prior estimate of the flux distribution and its uncertainties. The atmospheric transport model MM5/HIRAT and our previous estimate of the ²²²Rn flux distribution as the prior value were used to generate new flux estimates for the eastern half of the Eurasian continent, divided into 10 regions. The ²²²Rn flux densities estimated using the Bayesian inversion technique were generally higher than the prior flux densities. The area-weighted average ²²²Rn flux density for Asia was estimated to be 33.0 mBq m⁻² s⁻¹, which is substantially higher than the prior value (16.7 mBq m⁻² s⁻¹). The estimated ²²²Rn flux densities decrease with increasing latitude as follows: Southeast Asia (36.7 mBq m⁻² s⁻¹); East Asia (28.6 mBq m⁻² s⁻¹), including China, the Korean Peninsula, and Japan; and Siberia (14.1 mBq m⁻² s⁻¹). Increases in the newly estimated fluxes in Southeast Asia, China, Japan, and the southern part of Eastern Siberia over the prior ones contributed most significantly to improved agreement of the model-calculated concentrations with the atmospheric measurements. A sensitivity analysis of prior flux errors and the effects of locally exhaled ²²²Rn showed that the estimated fluxes in Northern and Central China, Korea, Japan, and the southern part of Eastern Siberia were robust, but that in Central Asia had a large uncertainty.
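In its linear-Gaussian form, a Bayesian synthesis inversion reduces to a few matrix operations; the sketch below uses illustrative numbers throughout, apart from the prior flux density of 16.7 mBq m⁻² s⁻¹ quoted above:

```python
# Linear-Gaussian Bayesian synthesis inversion: posterior regional fluxes
# from priors and observed concentrations via a transport sensitivity H.
import numpy as np

H = np.array([[1.2, 0.3],        # d(concentration at site)/d(regional flux),
              [0.4, 0.9],        # from the transport model (assumed values)
              [0.8, 0.5]])
x_prior = np.array([16.7, 16.7])          # prior fluxes, mBq m-2 s-1
P = np.diag([8.0, 8.0]) ** 2              # prior flux uncertainty (variance)
R = np.diag([1.5, 1.5, 1.5]) ** 2         # observation error covariance
y = np.array([45.0, 30.0, 38.0])          # observed concentrations (synthetic)

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # gain
x_post = x_prior + K @ (y - H @ x_prior)        # posterior flux estimate
P_post = (np.eye(2) - K @ H) @ P                # posterior uncertainty
print("posterior fluxes:", x_post.round(1))
```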
Estimating neural response functions from fMRI
Kumar, Sukhbinder; Penny, William
2014-01-01
This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246
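The two-stage structure can be sketched as below: a parametric NRF (here, repetition suppression with an adaptation time constant) convolved with an HRF and jointly fit to a BOLD series. A canonical double-gamma HRF stands in for the Balloon model used in the paper, and all signals are synthetic:

```python
# Joint fit of a toy NRF (adaptation amplitude and time constant) and a
# fixed double-gamma HRF to a synthetic BOLD time series.
import numpy as np
from scipy.stats import gamma
from scipy.optimize import least_squares

dt = 0.5
t = np.arange(0, 200, dt)
stim = ((t % 20) < 2).astype(float)               # brief stimuli every 20 s
hrf_t = np.arange(0, 30, dt)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)

def neural(amp, tau):                             # NRF with adaptation state a
    a, n = 0.0, np.zeros_like(t)
    for i, s in enumerate(stim):
        n[i] = amp * s * (1.0 - a)                # response suppressed by a
        a = max(0.0, min(1.0, a + dt * (-a / tau + 0.5 * s)))
    return n

def predict(params):
    return np.convolve(neural(*params), hrf)[: t.size] * dt

rng = np.random.default_rng(4)
y = predict([2.0, 15.0]) + 0.05 * rng.standard_normal(t.size)
fit = least_squares(lambda p: predict(p) - y, x0=[1.0, 5.0],
                    bounds=([0.1, 1.0], [10.0, 60.0]))
print("estimated amplitude and adaptation tau:", fit.x.round(2))
```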
Age-specific survival of male golden-cheeked warblers on the Fort Hood Military Reservation, Texas
Duarte, Adam; Hines, James E.; Nichols, James D.; Hatfield, Jeffrey S.; Weckerly, Floyd W.
2014-01-01
Population models are essential components of large-scale conservation and management plans for the federally endangered Golden-cheeked Warbler (Setophaga chrysoparia; hereafter GCWA). However, existing models are based on vital rate estimates calculated using relatively small data sets that are now more than a decade old. We estimated more current, precise adult and juvenile apparent survival (Φ) probabilities and their associated variances for male GCWAs. In addition to providing estimates for use in population modeling, we tested hypotheses about spatial and temporal variation in Φ. We assessed whether a linear trend in Φ or a change in the overall mean Φ corresponded to an observed increase in GCWA abundance during 1992-2000 and if Φ varied among study plots. To accomplish these objectives, we analyzed long-term GCWA capture-resight data from 1992 through 2011, collected across seven study plots on the Fort Hood Military Reservation using a Cormack-Jolly-Seber model structure within program MARK. We also estimated Φ process and sampling variances using a variance-components approach. Our results did not provide evidence of site-specific variation in adult Φ on the installation. Because of a lack of data, we could not assess whether juvenile Φ varied spatially. We did not detect a strong temporal association between GCWA abundance and Φ. Mean estimates of Φ for adult and juvenile male GCWAs for all years analyzed were 0.47 with a process variance of 0.0120 and a sampling variance of 0.0113 and 0.28 with a process variance of 0.0076 and a sampling variance of 0.0149, respectively. Although juvenile Φ did not differ greatly from previous estimates, our adult Φ estimate suggests previous GCWA population models were overly optimistic with respect to adult survival. These updated Φ probabilities and their associated variances will be incorporated into new population models to assist with GCWA conservation decision making.
Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J
2008-01-01
Background Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358
Finely Resolved On-Road PM2.5 and Estimated Premature Mortality in Central North Carolina.
Chang, Shih Ying; Vizuete, William; Serre, Marc; Vennam, Lakshmi Pradeepa; Omary, Mohammad; Isakov, Vlad; Breen, Michael; Arunachalam, Saravanan
2017-12-01
To quantify on-road PM2.5-related premature mortality at a national scale, previous approaches estimated concentrations at a 12-km × 12-km or larger grid cell resolution, which may not fully characterize the concentration hotspots that occur near roadways and thus the areas of highest risk. Spatially resolved concentration estimates from on-road emissions that capture these hotspots may improve characterization of the associated risk, but are rarely used for estimating premature mortality. In this study, we compared on-road PM2.5-related premature mortality in central North Carolina under two different concentration estimation approaches: (i) using the Community Multiscale Air Quality (CMAQ) model at a coarser 36-km × 36-km grid resolution, and (ii) using a hybrid of a Gaussian dispersion model, CMAQ, and a space-time interpolation technique to provide annual average PM2.5 concentrations at the Census-block level (∼105,000 Census blocks). The hybrid modeling approach estimated 24% more on-road PM2.5-related premature mortality than CMAQ. The major difference comes from primary on-road PM2.5, for which the hybrid approach estimated 2.5 times more premature mortality than CMAQ because of predicted exposure hotspots near roadways that coincide with high-population areas. The results show that 72% of primary on-road PM2.5 premature mortality occurs within 1,000 m of roadways, where 50% of the total population resides, highlighting the importance of characterizing near-road primary PM2.5 and suggesting that previous studies may have underestimated premature mortality due to PM2.5 from traffic-related emissions. © 2017 Society for Risk Analysis.
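A generic form of the premature-mortality calculation applied to such exposure fields is the log-linear health impact function sketched below; the beta corresponds to an assumed relative risk of 1.06 per 10 μg/m³, and the population and baseline rate are placeholders, not the study's inputs:

```python
# Log-linear concentration-response health impact function (illustrative).
import numpy as np

def premature_mortality(pop, baseline_rate, delta_pm25, beta=np.log(1.06) / 10):
    """Attributable deaths = pop * y0 * (1 - exp(-beta * dC))."""
    return pop * baseline_rate * (1.0 - np.exp(-beta * delta_pm25))

# e.g., 1 million adults, baseline mortality 8/1000/yr, +2 ug/m3 from traffic
print(round(premature_mortality(1e6, 0.008, 2.0)), "deaths/yr (illustrative)")
```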
Rong, Xing; Du, Yong; Frey, Eric C
2012-06-21
Quantitative yttrium-90 (⁹⁰Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of ⁹⁰Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for ⁹⁰Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance of the estimates as a figure of merit (FOM). However, in situations such as ⁹⁰Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates; in ⁹⁰Y bremsstrahlung imaging this is especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance is thus not a complete measure of the reliability of the estimates and hence not a complete FOM. To address this, we first developed a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates, and we applied this method to optimize the acquisition energy window for quantitative ⁹⁰Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the energy absorbed from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root-mean-squared error of the volume-of-interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical ⁹⁰Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated ⁹⁰Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image-degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of an FOM accounting only for the variance of the activity estimates, demonstrating in our experiment that bias due to model-mismatch was a more important factor than variance in limiting the reliability of the activity estimates.
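The proposed figure of merit can be sketched directly; the masses, biases, and variances below are illustrative:

```python
# Mass-weighted RMSE over VOI activity estimates, combining model-mismatch
# bias with estimator variance (illustrative numbers).
import numpy as np

mass_g   = np.array([1200.0, 35.0, 8.0])          # e.g., liver and two tumors
bias     = np.array([0.04, -0.10, 0.15])          # fractional bias per VOI
variance = np.array([0.02, 0.06, 0.12]) ** 2      # fractional variance per VOI

w = mass_g / mass_g.sum()                         # mass weights
fom = np.sqrt(np.sum(w * (bias**2 + variance)))   # lower is better
print(f"mass-weighted RMSE FOM = {fom:.4f}")
```

The acquisition energy window minimizing this FOM across candidate windows would then be selected.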
Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
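The equal-probability discrete gamma discussed above can be sketched in a few lines (quantile-midpoint variant, renormalized to mean 1):

```python
# Equal-probability discrete gamma rates: K categories, probability 1/K each.
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, K):
    p = (2 * np.arange(K) + 1) / (2 * K)          # midpoints of K slices
    rates = gamma.ppf(p, a=alpha, scale=1.0 / alpha)
    return rates / rates.mean()                   # enforce mean rate 1

# With few categories the largest rates are pulled toward the center,
# consistent with the underestimation of the largest site rates noted above.
print(discrete_gamma_rates(alpha=0.5, K=4).round(3))
```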
Regionalising MUSLE factors for application to a data-scarce catchment
NASA Astrophysics Data System (ADS)
Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani
2018-04-01
The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to (1) regionalise the MUSLE erodibility factors and (2) perform a sensitivity analysis and validate the soil loss outputs against independently estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high-erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated model outputs to be most sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha⁻¹ yr⁻¹ were within the range of estimates by previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation. This is part of the ongoing development of a model which can estimate soil loss and sediment delivery at broad spatial and temporal scales.
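For reference, the MUSLE event form (Williams-type) is a direct calculation; the factor values below are illustrative, not the study's regionalised estimates:

```python
# MUSLE event sediment yield: Y = 11.8 * (Q * qp)**0.56 * K * LS * C * P,
# with Q in m^3 of runoff and qp in m^3/s peak flow (illustrative factors).
def musle_soil_loss(runoff_m3, peak_flow_m3s, K, LS, C, P):
    """Event sediment yield in tonnes."""
    return 11.8 * (runoff_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

print(round(musle_soil_loss(runoff_m3=5e4, peak_flow_m3s=3.0,
                            K=0.3, LS=1.2, C=0.25, P=1.0)), "t per event")
```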
Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.
2013-01-01
Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.
USDA-ARS?s Scientific Manuscript database
Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...
Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes
We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...
Exploring the observational constraints on the simulation of brown carbon
NASA Astrophysics Data System (ADS)
Wang, Xuan; Heald, Colette L.; Liu, Jiumeng; Weber, Rodney J.; Campuzano-Jost, Pedro; Jimenez, Jose L.; Schwarz, Joshua P.; Perring, Anne E.
2018-01-01
Organic aerosols (OA) that strongly absorb solar radiation in the near-UV are referred to as brown carbon (BrC). The sources, evolution, and optical properties of BrC remain highly uncertain and contribute significantly to uncertainty in the estimate of the global direct radiative effect (DRE) of aerosols. Previous modeling studies of BrC optical properties and DRE have been unable to fully evaluate model performance due to the lack of direct measurements of BrC absorption. In this study, we develop a global model simulation (GEOS-Chem) of BrC and test it against BrC absorption measurements from two aircraft campaigns in the continental US (SEAC4RS and DC3). To the best of our knowledge, this is the first study to compare simulated BrC absorption with direct aircraft measurements. We show that BrC absorption properties estimated based on previous laboratory measurements agree with the aircraft measurements of freshly emitted BrC absorption but overestimate aged BrC absorption. In addition, applying a photochemical scheme to simulate bleaching/degradation of BrC improves model skill. The airborne observations are therefore consistent with a mass absorption coefficient (MAC) of freshly emitted biomass burning OA of 1.33 m² g⁻¹ at 365 nm coupled with a 1-day whitening e-folding time. Using the GEOS-Chem chemical transport model integrated with the RRTMG radiative transfer model, we estimate that the top-of-the-atmosphere all-sky direct radiative effect (DRE) of OA is -0.344 W m⁻², 10% higher than that without consideration of BrC absorption. Therefore, our best estimate of the absorption DRE of BrC is +0.048 W m⁻². We suggest that the DRE of BrC has been overestimated previously due to the lack of observational constraints from direct measurements and omission of the effects of photochemical whitening.
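The whitening scheme described above amounts to an exponentially decaying mass absorption coefficient, using the MAC and e-folding time quoted in the abstract:

```python
# BrC absorption at 365 nm: fresh MAC of 1.33 m^2/g bleaching with a
# 1-day e-folding time (values from the abstract above).
import numpy as np

MAC_FRESH = 1.33    # m^2 g^-1 at 365 nm (freshly emitted)
TAU_DAYS = 1.0      # whitening e-folding time

def brc_absorption(oa_mass_ug_m3, age_days):
    """BrC absorption coefficient in Mm^-1 for OA of a given photochemical age."""
    mac = MAC_FRESH * np.exp(-np.asarray(age_days) / TAU_DAYS)
    return oa_mass_ug_m3 * mac          # ug/m3 * m2/g = 1e-6 m^-1 = 1 Mm^-1

print(brc_absorption(10.0, np.array([0.0, 1.0, 3.0])).round(2))
```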
NASA Astrophysics Data System (ADS)
Gagnon, Pieter; Margolis, Robert; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2018-02-01
We provide a detailed estimate of the technical potential of rooftop solar photovoltaic (PV) electricity generation throughout the contiguous United States. This national estimate is based on an analysis of select US cities that combines light detection and ranging (lidar) data with a validated analytical method for determining rooftop PV suitability employing geographic information systems. We use statistical models to extend this analysis to estimate the quantity and characteristics of roofs in areas not covered by lidar data. Finally, we model PV generation for all rooftops to yield technical potential estimates. At the national level, 8.13 billion m² of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. This estimate is substantially higher than a previous estimate made by the National Renewable Energy Laboratory. The difference can be attributed to increases in PV module power density, improved estimation of building suitability, higher estimates of total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity. Also notable, the nationwide percentage of buildings suitable for at least some PV deployment is high: 82% for buildings smaller than 5000 ft² and over 99% for buildings larger than that. In most states, rooftop PV could enable small, mostly residential buildings to offset the majority of average household electricity consumption. Even in some states with a relatively poor solar resource, such as those in the Northeast, the residential sector has the potential to offset around 100% of its total electricity consumption with rooftop PV.
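As a quick consistency check of the headline numbers, 1118 GW generating 1432 TWh per year implies a national average capacity factor of roughly 15%:

```python
# Implied capacity factor from the abstract's capacity and generation totals.
capacity_gw, generation_twh = 1118, 1432
cf = generation_twh * 1e12 / (capacity_gw * 1e9 * 8760)
print(f"implied capacity factor: {cf:.1%}")
```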
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Limited by the low precision of small satellite sensors, high-performance estimation theory remains a popular research topic in attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance demands on the classical KF in the attitude estimation problem. Therefore, a novel robust double-gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and thereby improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to make the filter robust and to reduce the influence of the existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
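A toy Monte Carlo sketch of the vessel-whale intersection idea (every count, distance, and probability below is invented for scale; the study uses real AIS transits and an observationally based whale movement model):

```python
import numpy as np

rng = np.random.default_rng(8)

# Each vessel transit crosses a strip of habitat occupied by simulated
# whales; close approaches are counted and a lethality probability applied.
n_whales = 20          # whales in the strip during a transit
strip_km = 50.0        # width of the habitat strip
half_width_km = 0.005  # half-width of the lethal encounter corridor (5 m)
p_lethal = 0.3         # probability a close approach is lethal
n_transits = 1000      # vessel transits per year

encounters = 0
for _ in range(n_transits):
    whales = rng.uniform(0.0, strip_km, n_whales)   # across-track positions
    track = rng.uniform(0.0, strip_km)              # vessel's track position
    encounters += np.sum(np.abs(whales - track) < half_width_km)

rate = p_lethal * encounters        # lethal strikes per simulated year
print(f"expected lethal strikes per year: {rate:.2f}")
if rate > 0:
    print(f"i.e., one lethal strike every {1 / rate:.1f} years")
```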
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters are included in the cost function in such a way that unconstrained optimization techniques, such as descent methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In summary, an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed, and the Tpe interval has been shown to be rate dependent, with a shorter memory lag than the QT interval.
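A minimal sketch of folding a parameter restriction into the cost so that an unconstrained descent method applies, here with an assumed mono-exponential adaptation model and synthetic data rather than the paper's memory model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic repolarization-adaptation data: QT relaxes toward a new level
t = np.linspace(0, 300, 150)                       # seconds after a rate step
qt = 0.42 - 0.03 * np.exp(-t / 40.0) + 0.002 * rng.standard_normal(t.size)

def cost(theta):
    a, b = theta[0], theta[1]
    tau = np.exp(theta[2])        # reparameterization enforces tau > 0
    resid = qt - (a - b * np.exp(-t / tau))
    return np.sum(resid ** 2)

# Unconstrained optimizer; the restriction lives inside the cost function
res = minimize(cost, x0=np.array([0.4, 0.02, np.log(30.0)]), method="BFGS")
a_hat, b_hat, tau_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(f"a={a_hat:.4f}, b={b_hat:.4f}, tau={tau_hat:.1f} s")
```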
Wake Vortex Advisory System (WakeVAS) Evaluation of Impacts on the National Airspace System
NASA Technical Reports Server (NTRS)
Smith, Jeremy C.; Dollyhigh, Samuel M.
2005-01-01
This report is one of a series describing an ongoing effort in high-fidelity modeling/simulation, evaluation, and analysis of the benefits and performance metrics of the Wake Vortex Advisory System (WakeVAS) Concept of Operations being developed as part of the Virtual Airspace Modeling and Simulation (VAMS) project. A previous study determined the overall increases in runway arrival rates that could be achieved at 12 selected airports due to WakeVAS reduced aircraft spacing under Instrument Meteorological Conditions. This study builds on the previous work to evaluate the NAS-wide impacts of equipping various numbers of airports with WakeVAS. A queuing network model of the National Airspace System (LMINET), built by the Logistics Management Institute, McLean, VA, for NASA, was used to estimate the reduction in delay that could be achieved by using WakeVAS under non-visual meteorological conditions for the projected air traffic demand in 2010. The results from LMINET were used to estimate the total annual delay reduction that could be achieved and, from this, an estimate of the air carrier variable operating cost saving was made.
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
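A minimal Iteratively Re-weighted Least Squares sketch on synthetic Eyring-like accelerated-test data (the weight model and all numbers are assumptions for illustration, not the paper's derived weights):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy accelerated-test data: log lifetime vs. inverse temperature
x = np.column_stack([np.ones(60), 1.0 / rng.uniform(300, 400, 60)])
beta_true = np.array([-5.0, 4000.0])
y = x @ beta_true + 0.3 * rng.standard_normal(60)

# IRLS with a variance model that depends on the fitted mean (assumed form)
beta = np.linalg.lstsq(x, y, rcond=None)[0]        # OLS starting values
for _ in range(20):
    mu = x @ beta
    w = 1.0 / (0.1 + 0.05 * np.abs(mu))            # assumed weight model
    W = np.diag(w)
    beta = np.linalg.solve(x.T @ W @ x, x.T @ W @ y)

print("estimated coefficients:", beta)
```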
Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R
2016-10-01
Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
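A minimal sketch of the two-parameter random utility idea, fit by maximum likelihood on synthetic choices (the logistic choice rule and all values are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Synthetic intertemporal choices: immediate amount vs. $100 after a delay
a_now = rng.uniform(5, 95, 200)
a_del = np.full(200, 100.0)
delay = rng.choice([7, 30, 90, 180, 365], 200)

k_true, s_true = 0.02, 10.0
v_del = a_del / (1 + k_true * delay)               # hyperbolic discounting
p_imm = 1 / (1 + np.exp(-(a_now - v_del) / s_true))
choice_imm = rng.random(200) < p_imm               # True = chose immediate

def nll(theta):
    k, s = np.exp(theta)                           # both parameters positive
    v = a_del / (1 + k * delay)
    zz = np.clip((a_now - v) / s, -30, 30)         # guard against overflow
    p = 1 / (1 + np.exp(-zz))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(np.where(choice_imm, np.log(p), np.log(1 - p)))

res = minimize(nll, x0=np.log([0.01, 5.0]), method="Nelder-Mead")
print("discounting k and choice variability s:", np.exp(res.x))
```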
The Implications of Summer Learning Loss for Value-Added Estimates of Teacher Effectiveness
ERIC Educational Resources Information Center
Gershenson, Seth; Hayes, Michael S.
2018-01-01
School districts across the United States increasingly use value-added models (VAMs) to evaluate teachers. In practice, VAMs typically rely on lagged test scores from the previous academic year, which necessarily conflate summer with school-year learning and potentially bias estimates of teacher effectiveness. We investigate the practical…
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
Generation of real-time mode high-resolution water vapor fields from GPS observations
NASA Astrophysics Data System (ADS)
Yu, Chen; Penna, Nigel T.; Li, Zhenhong
2017-02-01
Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time mode zenith total delays at 41 GPS stations over 1 year by using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validations with the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product show 1.7 mm RMS differences by using the decoupled model, compared with 2.0 mm for the previous interpolation model. Such results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, while the errors are similar over flat and mountainous terrains, as well as for both inland and coastal areas.
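A toy sketch of the iterative decomposition idea, alternating between an elevation-dependent trend fit and a spatially smooth turbulent residual (the exponential trend, the Gaussian smoother, and all values are assumptions standing in for the paper's interpolation model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stations: plan position (km), height (km), zenith total delay (m)
n = 41
xy = rng.uniform(0, 150, (n, 2))
h = rng.uniform(0.0, 2.5, n)
ztd = 2.3 * np.exp(-h / 6.0) + 0.01 * np.sin(xy[:, 0] / 25.0)

def smooth(values, length=30.0):
    """Gaussian-weighted spatial average, standing in for interpolation."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * length ** 2))
    return (w @ values) / w.sum(1)

turb = np.zeros(n)
for _ in range(5):
    # 1) fit the elevation-dependent component to de-turbulenced delays
    coef = np.polyfit(h, np.log(ztd - turb), 1)
    trend = np.exp(np.polyval(coef, h))
    # 2) model the residual as a spatially smooth turbulent field
    turb = smooth(ztd - trend)

print("estimated scale height (km):", -1.0 / coef[0])
```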
Jégu, Jérémie; Belot, Aurélien; Borel, Christian; Daubisse-Marliac, Laetitia; Trétarre, Brigitte; Ganry, Olivier; Guizard, Anne-Valérie; Bara, Simona; Troussard, Xavier; Bouvier, Véronique; Woronoff, Anne-Sophie; Colonna, Marc; Velten, Michel
2015-05-01
To provide head and neck squamous cell carcinoma (HNSCC) survival estimates with respect to patients' previous history of cancer. Data from ten French population-based cancer registries were used to establish a cohort of all male patients presenting with a HNSCC diagnosed between 1989 and 2004. Vital status was updated until December 31, 2007. The 5-year overall and net survival estimates were assessed using the Kaplan-Meier and Pohar-Perme estimators, respectively. Multivariate Cox regression models were used to assess the effect of cancer history adjusted for age and year of HNSCC diagnosis. Among the cases of HNSCC, 5553 were localized in the oral cavity, 3646 in the oropharynx, 3793 in the hypopharynx and 4550 in the larynx. Depending on HNSCC site, from 11.0% to 16.8% of patients presented with a previous history of cancer. Overall and net survival were closely tied to the presence, or not, of a previous cancer. For example, for carcinoma of the oral cavity, the five-year overall survival was 14.0%, 5.9% and 36.7% in cases of previous lung cancer, oesophageal cancer or no cancer history, respectively. Multivariate analyses showed that previous history of cancer was a prognostic factor independent of age and year of diagnosis (p<.001). Previous history of cancer is strongly associated with survival among HNSCC patients. Survival estimates based on patients' previous history of cancer will enable clinicians to assess more precisely the prognosis of their patients with respect to this major comorbid condition. Copyright © 2015 Elsevier Ltd. All rights reserved.
Masbruch, Melissa D.; Gardner, Philip M.; Brooks, Lynette E.
2014-01-01
Snake Valley and surrounding areas, along the Utah-Nevada state border, are part of the Great Basin carbonate and alluvial aquifer system. The groundwater system in the study area consists of water in unconsolidated deposits in basins and water in consolidated rock underlying the basins and in the adjacent mountain blocks. Most recharge occurs from precipitation on the mountain blocks and most discharge occurs from the lower altitude basin-fill deposits mainly as evapotranspiration, springflow, and well withdrawals. The Snake Valley area regional groundwater system was simulated using a three-dimensional model incorporating both groundwater flow and heat transport. The model was constructed with MODFLOW-2000, a version of the U.S. Geological Survey's groundwater flow model, and MT3DMS, a transport model that simulates advection, dispersion, and chemical reactions of solutes or heat in groundwater systems. Observations of groundwater discharge by evapotranspiration, springflow, mountain stream base flow, and well withdrawals; groundwater-level altitudes; and groundwater temperatures were used to calibrate the model. Parameter values estimated by regression analyses were reasonable and within the range of expected values. This study represents one of the first regional modeling efforts to include calibration to groundwater temperature data. The inclusion of temperature observations reduced parameter uncertainty, in some cases quite significantly, over using just water-level altitude and discharge observations. Of the 39 parameters used to simulate horizontal hydraulic conductivity, uncertainty on 11 of these parameters was reduced to one order of magnitude or less. Other significant reductions in parameter uncertainty occurred in parameters representing the vertical anisotropy ratio, drain and river conductance, recharge rates, and well withdrawal rates. The model provides a good representation of the groundwater system. Simulated water-level altitudes range over almost 2,000 meters (m); 98 percent of the simulated values of water-level altitudes in wells are within 30 m of observed water-level altitudes, and 58 percent of them are within 12 m. Nineteen of 20 simulated discharges are within 30 percent of observed discharge. Eighty-one percent of the simulated values of groundwater temperatures in wells are within 2 degrees Celsius (°C) of the observed values, and 55 percent of them are within 0.75 °C. The numerical model represents a more robust quantification of groundwater budget components than previous studies because the model integrates all components of the groundwater budget. The model also incorporates new data including (1) a detailed hydrogeologic framework, and (2) more observations, including several new water-level altitudes throughout the study area, several new measurements of spring discharge within Snake Valley which had not previously been monitored, and groundwater temperature data. Uncertainty in the estimates of subsurface flow is less than that of previous studies because the model balanced recharge and discharge across the entire simulated area, not just in each hydrographic area, and because of the large dataset of observations (water-level altitudes, discharge, and temperatures) used to calibrate the model and the resulting transmissivity distribution. Groundwater recharge from precipitation and unconsumed irrigation in Snake Valley is 160,000 acre-feet per year (acre-ft/yr), which is within the range of previous estimates.
Subsurface inflow from southern Spring Valley to southern Snake Valley is 13,000 acre-ft/yr and is within the range of previous estimates; subsurface inflow from Spring Valley to Snake Valley north of the Snake Range, however, is only 2,200 acre-ft/yr, which is much less than has been previously estimated. Groundwater discharge from groundwater evapotranspiration and springs is 100,000 acre-ft/yr, and discharge to mountain streams is 3,300 acre-ft/yr; these are within the range of previous estimates. Current well withdrawals are 28,000 acre-ft/yr. Subsurface outflow from Snake Valley moves into Pine Valley (2,000 acre-ft/yr), Wah Wah Valley (23 acre-ft/yr), Tule Valley (33,000 acre-ft/yr), Fish Springs Flat (790 acre-ft/yr), and outside of the study area towards Great Salt Lake Desert (8,400 acre-ft/yr); these outflows, totaling about 44,000 acre-ft/yr, are within the range of previous estimates. The subsurface flow amounts indicate the degree of connectivity between hydrographic areas within the study area. The simulated transmissivity and locations of natural discharge, however, provide a better estimate of the effect of groundwater withdrawals on groundwater resources than does the amount and direction of subsurface flow between hydrographic areas. The distribution of simulated transmissivity throughout the study area includes many areas of high transmissivity within and between hydrographic areas. Increased well withdrawals within these high transmissivity areas will likely affect a large part of the study area, resulting in declining groundwater levels, as well as leading to a decrease in natural discharge to springs and evapotranspiration.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
2014-01-01
Background: The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods: We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results: Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions: Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
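A minimal sketch of the traditional mono-exponential back-extrapolation that the proposed method improves on (dose, sampling times, and kinetics are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

dose_mg = 25.0
weight_kg = 80.0

# Synthetic ICG concentration samples (mg/L) over the first minutes
t = np.arange(2.0, 10.0, 0.5)                  # minutes post-injection
c = 9.0 * np.exp(-0.25 * t) * (1 + 0.02 * rng.standard_normal(t.size))

# Mono-exponential back-extrapolation: fit log-concentration, extrapolate to t=0
slope, intercept = np.polyfit(t, np.log(c), 1)
c0 = np.exp(intercept)                         # concentration at injection time

plasma_volume = dose_mg / c0                   # litres
print(f"plasma volume: {plasma_volume:.2f} L "
      f"({1000 * plasma_volume / weight_kg:.1f} mL/kg)")
```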
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentration in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements come from instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes
NASA Astrophysics Data System (ADS)
Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.
2018-03-01
Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. To ensure appropriate modeling of PEMFCs, accurate parameter estimation is required. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex nature. This paper comprehensively reviews PEMFC model parameter estimation methods, with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, for estimating the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models of different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, are then utilized to identify the parameters of two well-known semi-empirical models in the literature, those of Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
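A minimal Recursive Least Squares sketch for online identification of a polarization-curve-like model (the regressor choice, forgetting factor, and all values are assumptions, not the Squadrito or Amphlett formulations):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linearized PEMFC voltage model: V = theta0 + theta1*log(i) + theta2*i
# (regressor choice echoes semi-empirical polarization forms, but is assumed)
theta_true = np.array([1.0, -0.06, -0.12])

theta = np.zeros(3)
P = 1e3 * np.eye(3)          # large initial covariance
lam = 0.98                   # forgetting factor for drifting characteristics

for _ in range(500):
    i = rng.uniform(0.05, 1.2)                       # current density
    phi = np.array([1.0, np.log(i), i])              # regressor vector
    v = theta_true @ phi + 0.002 * rng.standard_normal()

    # Recursive Least Squares update with exponential forgetting
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (v - phi @ theta)
    P = (P - np.outer(k, phi) @ P) / lam

print("identified parameters:", theta)
```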
Birnbaum; Zimmermann
1998-05-01
Judges evaluated buying and selling prices of hypothetical investments, based on the previous price of each investment and estimates of the investment's future value given by advisors of varied expertise. Effect of a source's estimate varied in proportion to the source's expertise, and it varied inversely with the number and expertise of other sources. There was also a configural effect in which the effect of a source's estimate was affected by the rank order of that source's estimate, in relation to other estimates of the same investment. These interactions were fit with a configural weight averaging model in which buyers and sellers place different weights on estimates of different ranks. This model implies that one can design a new experiment in which there will be different violations of joint independence in different viewpoints. Experiment 2 confirmed patterns of violations of joint independence predicted from the model fit in Experiment 1. Experiment 2 also showed that preference reversals between viewpoints can be predicted by the model of Experiment 1. Configural weighting provides a better account of buying and selling prices than either of two models of loss aversion or the theory of anchoring and insufficient adjustment. Copyright 1998 Academic Press.
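A minimal sketch of a configural weight average in which a source's weight depends on the rank of its estimate, with different rank weights for buyer and seller viewpoints (all weights and prices are invented for illustration):

```python
import numpy as np

def configural_average(estimates, weights_by_rank, prior_price, prior_weight):
    """Configural-weight average: a source's weight depends on the rank of
    its estimate among the estimates for the same investment (illustrative)."""
    order = np.argsort(np.argsort(-np.asarray(estimates)))  # 0 = highest
    w = np.array([weights_by_rank[int(r)] for r in order])
    total = prior_weight * prior_price + np.sum(w * np.asarray(estimates))
    return total / (prior_weight + w.sum())

estimates = [120.0, 90.0, 100.0]
# Buyers might weight the lowest estimate most; sellers the highest (assumed)
buyer = configural_average(estimates, {0: 0.2, 1: 0.3, 2: 0.5}, 100.0, 1.0)
seller = configural_average(estimates, {0: 0.5, 1: 0.3, 2: 0.2}, 100.0, 1.0)
print(f"buyer price {buyer:.1f}, seller price {seller:.1f}")
```

With these assumed weights the seller's price exceeds the buyer's for the same information, the qualitative buyer-seller gap the configural model is meant to capture.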
Estimation of viscous dissipation in nanodroplet impact and spreading
NASA Astrophysics Data System (ADS)
Li, Xin-Hao; Zhang, Xiang-Xiong; Chen, Min
2015-05-01
The developments in nanocoating and nanospray technology have made the impact of micro-/nanoscale liquid droplets on solid surfaces increasingly important. In this paper, the impact of a nanodroplet on a flat solid surface is examined using molecular dynamics simulations. The impact velocity ranges from 58 m/s to 1044 m/s, corresponding to Weber numbers from 0.62 to 200.02 and Reynolds numbers from 0.89 to 16.14. The obtained maximum spreading factors are compared with previous models in the literature. The predictions of the previous models deviate substantially from our simulation results, with mean relative errors up to 58.12%. The estimate of viscous dissipation is refined to produce a modified theoretical model, which reduces the mean relative error to 15.12% in predicting the maximum spreading factor for cases of nanodroplet impact.
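For reference, the two dimensionless groups can be computed directly; the fluid properties and droplet diameter below are assumptions (the abstract does not state them), so the numbers only roughly track the quoted ranges:

```python
# Dimensionless groups for a water-like nanodroplet (property values assumed)
rho = 998.0        # density, kg/m^3
sigma = 0.072      # surface tension, N/m
mu = 1.0e-3        # dynamic viscosity, Pa*s
d = 1.0e-8         # droplet diameter, m (10 nm, assumed)

for v in (58.0, 1044.0):                   # impact velocities from the study
    we = rho * v ** 2 * d / sigma          # Weber number: inertia vs. capillarity
    re = rho * v * d / mu                  # Reynolds number: inertia vs. viscosity
    print(f"v = {v:7.1f} m/s  ->  We = {we:8.2f}, Re = {re:6.2f}")
```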
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.
2017-12-01
Precise estimation of groundwater level fluctuation is studied by considering delayed recharge flux (DRF) and unsaturated zone drainage (UZD). Both DRF and UZD arise from gravitational flow impeded in the unsaturated zone, which may non-negligibly affect groundwater level changes. In the validation, a previous model that does not consider unsaturated flow is used as a benchmark, with the observed groundwater level and precipitation data divided into three periods based on climatic conditions. The estimation capability of the new model is superior to that of the benchmarked model, as indicated by a significantly improved representation of the groundwater level with physically interpretable model parameters.
NASA Astrophysics Data System (ADS)
Gardiner, John Corby
The electric power industry market structure has changed over the last twenty years since the passage of the Public Utility Regulatory Policies Act (PURPA). These changes include the entry by unregulated generator plants and, more recently, the deregulation of entry and price in the retail generation market. Such changes have introduced and expanded competitive forces on the incumbent electric power plants. Proponents of this deregulation argued that the enhanced competition would lead to a more efficient allocation of resources. Previous studies of power plant technical and allocative efficiency have failed to measure technical and allocative efficiency at the plant level. In contrast, this study uses panel data on 35 power plants over 59 years to estimate technical and allocative efficiency of each plant. By using a flexible functional form, which is not constrained by the assumption that regulation is constant over the 59 years sampled, the estimation procedure accounts for changes in both state and national regulatory/energy policies that may have occurred over the sample period. The empirical evidence presented shows that most of the power plants examined have operated more efficiently since the passage of PURPA and the resultant increase of competitive forces. Chapter 2 extends the model used in Chapter 1 and clarifies some issues in the efficiency literature by addressing the case where homogeneity does not hold. A more general model is developed for estimating both input and output inefficiency simultaneously. This approach reveals more information about firm inefficiency than the single estimation approach that has previously been used in the literature. Using the more general model, estimates are provided on the type of inefficiency that occurs as well as the cost of inefficiency by type of inefficiency. In previous studies, the ranking of firms by inefficiency has been difficult because of the cardinal and ordinal differences between different types of inefficiency estimates. However, using the general approach, this study illustrates that plants can be ranked by overall efficiency.
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
Kinetic modelling of a diesel-polluted clayey soil bioremediation process.
Fernández, Engracia Lacasa; Merlo, Elena Moliterni; Mayor, Lourdes Rodríguez; Camacho, José Villaseñor
2016-07-01
A mathematical model is proposed to describe a diesel-polluted clayey soil bioremediation process. The reaction system under study was considered a completely mixed closed batch reactor that initially contained a soil matrix polluted with diesel hydrocarbons, an aqueous liquid-specific culture medium, and a microbial inoculum. The model coupled mass transfer phenomena and the distribution of hydrocarbons among four phases (solid, S; water, A; non-aqueous liquid, NAPL; and air, V) with Monod kinetics. In the first step, the model simulating abiotic conditions was used to estimate only the mass transfer coefficients. In the second step, the model including both mass transfer and biodegradation phenomena was used to estimate the biological kinetic and stoichiometric parameters. In both situations, the model predictions were validated with experimental data from previous research by the same authors. A good fit between the model predictions and the experimental data was observed; the modelled curves captured the major trends in the diesel distribution in each phase. The model parameters were compared with values previously reported in the literature. Pearson correlation coefficients were used to show the reproducibility level of the model. Copyright © 2016. Published by Elsevier B.V.
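A minimal sketch of the Monod kinetics at the core of such a model, integrating substrate and biomass in a single well-mixed aqueous phase (parameter values are assumptions; the paper's model adds mass transfer among four phases):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monod biodegradation in the aqueous phase (parameter values assumed)
mu_max = 0.15     # 1/h, maximum specific growth rate
ks = 40.0         # mg/L, half-saturation constant
yield_xs = 0.4    # biomass produced per unit substrate consumed

def monod(t, y):
    s, x = y                              # substrate and biomass, mg/L
    mu = mu_max * s / (ks + s)            # Monod specific growth rate
    return [-mu * x / yield_xs, mu * x]

sol = solve_ivp(monod, (0.0, 120.0), [200.0, 5.0], dense_output=True)
print("substrate, biomass after 120 h:", sol.y[:, -1])
```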
2011-01-01
Background: The identification of genes or quantitative trait loci that are expressed in response to different environmental factors such as temperature and light, through functional mapping, critically relies on precise modeling of the covariance structure. Previous work used separable parametric covariance structures, such as a Kronecker product of autoregressive one [AR(1)] matrices, that do not account for interaction effects of different environmental factors. Results: We implement a more robust nonparametric covariance estimator to model these interactions within the framework of functional mapping of reaction norms to two signals. Our results from Monte Carlo simulations show that this estimator can be useful in modeling interactions that exist between two environmental signals. The interactions are simulated using nonseparable covariance models with spatio-temporal structural forms that mimic interaction effects. Conclusions: The nonparametric covariance estimator has an advantage over separable parametric covariance estimators in the detection of QTL location, thus extending the breadth of use of functional mapping in practical settings. PMID:21269481
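For contrast, the separable baseline mentioned above is easy to construct explicitly; a minimal sketch of a Kronecker product of AR(1) correlation matrices (dimensions and values are illustrative):

```python
import numpy as np

def ar1_corr(n, rho):
    """AR(1) correlation matrix: corr(i, j) = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Separable covariance across two environmental signals (e.g., 5 temperature
# levels x 4 light levels); variance and correlations are illustrative
sigma2 = 2.0
cov_sep = sigma2 * np.kron(ar1_corr(5, 0.6), ar1_corr(4, 0.3))
print(cov_sep.shape)   # (20, 20): one entry per combination of signal levels
```

By construction this matrix has no signal-by-signal interaction terms, which is exactly the limitation the nonparametric estimator is meant to remove.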
Sensitivity analysis of pars-tensa Young's modulus estimation using inverse finite-element modeling
NASA Astrophysics Data System (ADS)
Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.
2018-05-01
Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique that optimizes a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. The objective of this study is therefore to identify the most influential modeling assumptions on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created. EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated, including changes in PT thickness, pars flaccida Young's modulus, and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness, and the least influential was pars flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
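A minimal sketch of golden-section minimization of a shape-mismatch cost (the one-line forward model below is an assumed stand-in for the study's finite-element pressurization run):

```python
import numpy as np
from scipy.optimize import minimize_scalar

E_TRUE = 2.2e6  # Pa, "true" modulus used to generate the target shape

def forward_displacement(E):
    # Toy forward model: stiffer membrane -> proportionally smaller deflection
    return 1.0e-3 * (1.0e6 / E) * np.ones(50)

measured = forward_displacement(E_TRUE)     # stands in for the FTP measurement

def cost(E):
    return np.sum((forward_displacement(E) - measured) ** 2)

# Golden-section search over a bracketing triple of moduli (Pa)
res = minimize_scalar(cost, bracket=(1e5, 2e6, 1e7), method="golden")
print(f"estimated E_PT: {res.x:.3e} Pa")
```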
Miller, Justin B; Axelrod, Bradley N; Schutte, Christian
2012-01-01
The recent release of the Wechsler Memory Scale Fourth Edition contains many improvements from a theoretical and administration perspective, including demographic corrections using the Advanced Clinical Solutions. Although the administration time has been reduced from previous versions, a shortened version may be desirable in certain situations given practical time limitations in clinical practice. The current study evaluated two- and three-subtest estimations of demographically corrected Immediate and Delayed Memory index scores using both simple arithmetic prorating and regression models. All estimated values were significantly associated with observed index scores. Use of Lin's Concordance Correlation Coefficient as a measure of agreement showed a high degree of precision and virtually zero bias in the models, although the regression models showed a stronger association than prorated models. Regression-based models proved to be more accurate than prorated estimates with less dispersion around observed values, particularly when using three subtest regression models. Overall, the present research shows strong support for estimating demographically corrected index scores on the WMS-IV in clinical practice with an adequate performance using arithmetically prorated models and a stronger performance using regression models to predict index scores.
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun
2014-01-01
Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
A three-dimensional model of Tangential YORP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golubov, O.; Scheeres, D. J.; Krugly, Yu. N., E-mail: golubov@astron.kharkov.ua
2014-10-10
Tangential YORP, or TYORP, has recently been demonstrated to be an important factor in the evolution of an asteroid's rotation state. It is complementary to normal YORP, or NYORP, which was the only effect considered previously. While NYORP is produced by asymmetry in the large-scale geometry of an asteroid, TYORP is due to heat conductivity in stones on the surface of the asteroid. To date, TYORP has been studied only in a simplified one-dimensional model, substituting stones with high long walls. This article for the first time considers TYORP in a realistic three-dimensional model, also including shadowing and self-illumination effects via ray tracing. TYORP is simulated for spherical stones lying on regolith. The model includes only five free parameters, and the dependence of TYORP on each of them is studied. The TYORP torque appears to be smaller than previous estimates from the one-dimensional model, but is still comparable to the NYORP torques. These results can be used to estimate the TYORP of different asteroids and also as a basis for more sophisticated models of TYORP.
Mars Propellant Liquefaction Modeling in Thermal Desktop
NASA Technical Reports Server (NTRS)
Desai, Pooja; Hauser, Dan; Sutherlin, Steven
2017-01-01
NASA's current Mars architectures assume the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass-efficient manner, an energy-efficient refrigeration system will be required. Based on previous analysis, NASA has decided to do all liquefaction in the propulsion vehicle storage tanks. In order to allow for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad area cooling loop heat exchanger integrated with a reverse turbo Brayton cryocooler. Cryocooler sizing and performance modeling was conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against sizing based on non-transient estimates.
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Nerem, R. S.; Putney, B. H.; Felsentreger, T. L.; Sanchez, B. V.; Klosko, S. M.; Patel, G. B.; Williamson, R. G.; Chinn, D. S.; Chan, J. C.
1992-01-01
Improved models of the Earth's gravitational field have been developed from conventional tracking data and from a combination of satellite tracking, satellite altimeter and surface gravimetric data. This combination model represents a significant improvement in the modeling of the gravity field at half-wavelengths of 300 km and longer. Both models are complete to degree and order 50. The Goddard Earth Model-T3 (GEM-T3) provides more accurate computation of satellite orbital effects as well as giving superior geoidal representation from that achieved in any previous GEM. A description of the models, their development and an assessment of their accuracy is presented. The GEM-T3 model used altimeter data from previous satellite missions in estimating the orbits, geoid, and dynamic height fields. Other satellite tracking data are largely the same as was used to develop GEM-T2, but contain certain important improvements in data treatment and expanded laser tracking coverage. Over 1300 arcs of tracking data from 31 different satellites have been used in the solution. Reliable estimates of the model uncertainties via error calibration and optimal data weighting techniques are discussed.
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). The program LFILTER also uses the U-D factorized form of the Kalman filter, as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used for this application for dynamics and measurements, the program description, and the program operation are presented.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
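A minimal sketch of the classic two-sample CIR estimator that such models generalize (the harvest numbers are invented for illustration):

```python
def cir_estimate(p1, p2, removed_x, removed_total):
    """Classic two-sample change-in-ratio estimate of initial population size.

    p1, p2: proportion of the x-subclass (e.g., males) before and after a
    known removal; removed_x: number of x-subclass animals removed;
    removed_total: total number removed. Assumes equal encounter
    probabilities for both subclasses within each survey.
    """
    if p1 == p2:
        raise ValueError("subclass ratio must change for CIR to work")
    return (removed_x - p2 * removed_total) / (p1 - p2)

# Example: 60% males before harvest, 40% after, harvest of 300 males of 350 total
n1 = cir_estimate(0.60, 0.40, 300, 350)
print(f"estimated pre-removal population: {n1:.0f}")   # 800
```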
NASA Astrophysics Data System (ADS)
Aimran, Ahmad Nazim; Ahmad, Sabri; Afthanorhan, Asyraf; Awang, Zainudin
2017-05-01
Structural equation modeling (SEM) is a second-generation statistical analysis technique developed for analyzing the inter-relationships among multiple variables in a model. Previous studies have shown at least an implicit agreement about the factors that should drive the choice between covariance-based structural equation modeling (CB-SEM) and partial least squares path modeling (PLS-PM). PLS-PM appears to have been the method preferred by previous scholars because of its less stringent assumptions and the desire to avoid the perceived difficulties of CB-SEM. Alongside this issue has been an increasing debate among researchers on the use of CB-SEM and PLS-PM in studies. The present study assesses the performance of CB-SEM and PLS-PM as a confirmatory study whose findings will contribute to the body of knowledge of SEM. Maximum likelihood (ML) was chosen as the estimator for CB-SEM and was expected to be more powerful than PLS-PM. Based on a balanced experimental design, multivariate normal data with specified population parameters and sample sizes were generated using Pro-Active Monte Carlo simulation, and the data were analyzed using AMOS for CB-SEM and SmartPLS for PLS-PM. The Comparative Bias Index (CBI), construct relationships, average variance extracted (AVE), composite reliability (CR), and the Fornell-Larcker criterion were used to study the consequences of each estimator. The findings conclude that CB-SEM performed notably better than PLS-PM for large sample sizes (100 and above), particularly in terms of estimation accuracy and consistency.
Cataife, Guido
2014-03-01
We propose the use of previously developed small area estimation techniques to monitor obesity and dietary habits in developing countries and apply the model to the city of Rio de Janeiro. We estimate obesity prevalence rates at the Census Tract level through a combinatorial optimization spatial microsimulation model that matches body mass index and socio-demographic data in Brazil's 2008-9 family expenditure survey with Census 2010 socio-demographic data. Obesity ranges from 8% to 25% in most areas and affects the poor almost as much as the rich. Male and female obesity rates are uncorrelated at the small area level. The model is an effective tool for understanding the complexity of the problem and aiding policy design. © 2013 Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector
2018-01-01
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644
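At its core, the remote depth estimation above inverts an exponential attenuation law. A hedged sketch follows; the effective attenuation coefficient for sand (0.11 cm^-1) is an assumed illustrative value, not taken from the paper:

```python
import math

def depth_from_count_rate(c0, c, mu_eff):
    """Invert the exponential attenuation law C = C0 * exp(-mu_eff * d)."""
    return math.log(c0 / c) / mu_eff

# Hypothetical consistency check against the count rates quoted above:
# 100 cps unattenuated, 14 cps measured, assumed mu_eff = 0.11 cm^-1 in sand
print(depth_from_count_rate(100.0, 14.0, 0.11))  # ~17.9 cm
```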
NASA Instrument Cost/Schedule Model
NASA Technical Reports Server (NTRS)
Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George
2011-01-01
NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
Estimating Power System Dynamic States Using Extended Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw
2014-10-31
The state estimation tools which are currently deployed in power system control rooms are based on a steady state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated "dynamic state estimation" includes true system dynamics reflected in differential equations, unlike previously proposed "dynamic state estimation" which only considers time-variant snapshots based on steady state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
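A generic EKF predict/update cycle of the kind applied above is sketched below; the functions f, h and their Jacobians stand in for the machine differential equations and the measurement model, which the abstract does not specify:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P         : prior state estimate and covariance
    f, h         : nonlinear process and measurement functions
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict through the nonlinear dynamics (the discretized differential
    # equations that steady-state estimators ignore)
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update with the measurement
    H = H_jac(x_pred)
    innov = z - h(x_pred)                 # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```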
NASA Technical Reports Server (NTRS)
Parke, M. E.
1978-01-01
Two trends evident in global tidal modelling since the first GEOP conference in 1972 are described. The first centers on the incorporation of terms for ocean loading and gravitational self attraction into Laplace's tidal equations. The second centers on a better understanding of the problem of near resonant modelling and the need for realistic maps of tidal elevation for use by geodesists and geophysicists. Although new models still show significant differences, especially in the South Atlantic, there are significant similarities in many of the world's oceans. This allows suggestions to be made for future locations for bottom pressure gauge measurements. Where available, estimates of M2 tidal dissipation from the new models are significantly lower than estimates from previous models.
A Stochastic Evolutionary Model for Protein Structure Alignment and Phylogeny
Challis, Christopher J.; Schmidler, Scott C.
2012-01-01
We present a stochastic process model for the joint evolution of protein primary and tertiary structure, suitable for use in alignment and estimation of phylogeny. Indels arise from a classic Links model, and mutations follow a standard substitution matrix, whereas backbone atoms diffuse in three-dimensional space according to an Ornstein–Uhlenbeck process. The model allows for simultaneous estimation of evolutionary distances, indel rates, structural drift rates, and alignments, while fully accounting for uncertainty. The inclusion of structural information enables phylogenetic inference on time scales not previously attainable with sequence evolution models. The model also provides a tool for testing evolutionary hypotheses and improving our understanding of protein structural evolution. PMID:22723302
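The backbone diffusion component can be illustrated with an exact-discretization Ornstein-Uhlenbeck simulator; this is a sketch of the process alone, with hypothetical parameters, not the paper's joint alignment and phylogeny machinery:

```python
import numpy as np

def simulate_ou(x0, theta, sigma, mu, dt, n_steps, rng):
    """Exact-discretization sample path of an Ornstein-Uhlenbeck process:
    mean-reverting toward mu at rate theta with diffusion scale sigma."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    decay = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * theta))
    for i in range(n_steps):
        x[i + 1] = mu + (x[i] - mu) * decay + sd * rng.standard_normal()
    return x

# One coordinate of one backbone atom drifting over evolutionary time
rng = np.random.default_rng(0)
path = simulate_ou(x0=0.0, theta=0.5, sigma=1.0, mu=0.0,
                   dt=0.1, n_steps=1000, rng=rng)
print(path.mean(), path.std())
```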
Decentralization and the Composition of Public Expenditures
2012-01-01
decentralized governance and expenditure composition by means of a distance-sensitive representative agent model. Then we estimate the impact of fiscal...countries with regards to population age structure. We are not certain of what effects that may have in our estimates , but previous studies have found find...variables to estimate a scalar value for g(xβ), which then is multiplied to each variables coefficient. For this, we choose the mean values of the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, J. K.; Pukayastha, A.; Martin, C.
Previous estimates of the wind resources in Uttarakhand, India, suggest minimal wind resources in this region. To explore whether or not the complex terrain in fact provides localized regions of wind resource, the authors of this study employed a dynamic down scaling method with the Weather Research and Forecasting model, providing detailed estimates of winds at approximately 1 km resolution in the finest nested simulation.
CO2 forcing induces semi-direct effects with consequences for climate feedback interpretations
NASA Astrophysics Data System (ADS)
Andrews, Timothy; Forster, Piers M.
2008-02-01
Climate forcing and feedbacks are diagnosed from seven slab-ocean GCMs for 2 × CO2 using a regression method. Results are compared to those using conventional methodologies to derive a semi-direct forcing due to tropospheric adjustment, analogous to the semi-direct effect of absorbing aerosols. All models show a cloud semi-direct effect, indicating a rapid cloud response to CO2; cloud typically decreases, enhancing the warming. Similarly, there is evidence of semi-direct effects from water vapour, lapse rate, ice and snow. Previous estimates of climate feedbacks are unlikely to have taken these semi-direct effects into account and so misinterpret as feedbacks processes that depend only on the forcing, not on the global surface temperature. We show that the actual cloud feedback is smaller than previous methods suggest and that a significant part of the cloud response, and of the large spread between previous model estimates of cloud feedback, is due to the semi-direct forcing.
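The regression diagnosis works by regressing the global top-of-atmosphere imbalance against surface warming: the intercept is the adjustment-inclusive forcing and the slope the feedback parameter. A sketch with hypothetical annual means:

```python
import numpy as np

# Annual-mean global TOA net flux imbalance N (W m^-2) and surface warming
# dT (K) from an abrupt 2xCO2 run; the values here are hypothetical
dT = np.array([0.8, 1.3, 1.7, 2.0, 2.3, 2.5, 2.7, 2.9])
N  = np.array([3.0, 2.4, 2.0, 1.6, 1.3, 1.1, 0.9, 0.7])

slope, intercept = np.polyfit(dT, N, 1)
print(f"adjustment-inclusive forcing ~ {intercept:.2f} W m^-2")   # N at dT = 0
print(f"feedback parameter ~ {slope:.2f} W m^-2 K^-1")            # negative = stable
```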
ERIC Educational Resources Information Center
Goldhaber, Dan; Destler, Katharine; Player, Daniel
2010-01-01
Some scholars and policymakers who are concerned about the inequitable distribution of quality teachers suggest offering financial incentives for working in hard-to-staff schools. Previous studies have estimated compensating differentials using hedonic modeling, an approach potentially undermined by district-wide salary schedules and the lack of…
Using step and path selection functions for estimating resistance to movement: Pumas as a case study
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2015-01-01
GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....
Non-Targeted Effects and the Dose Response for Heavy Ion Tumorigenesis
NASA Technical Reports Server (NTRS)
Chappell, Lori J.; Cucinotta, Francis A.
2010-01-01
There is no human epidemiology data available to estimate the heavy ion cancer risks experienced by astronauts in space. Studies of tumor induction in mice are a necessary step to estimate risks to astronauts. Previous experimental data can be better utilized to model dose response for heavy ion tumorigenesis and plan future low dose studies.
NASA Astrophysics Data System (ADS)
Kotrlová, Andrea; Török, Gabriel; Šrámková, Eva; Stuchlík, Zdeněk
2014-12-01
We have previously applied several models of high-frequency quasi-periodic oscillations (HF QPOs) to estimate the spin of the central Kerr black hole in the three Galactic microquasars, GRS 1915+105, GRO J1655-40, and XTE J1550-564. Here we explore the alternative possibility that the central compact body is a super-spinning object (or a naked singularity) with the external space-time described by Kerr geometry with a dimensionless spin parameter a ≡ cJ/GM² > 1. We calculate the relevant spin intervals for a subset of HF QPO models considered in the previous study. Our analysis indicates that for all but one of the considered models there exists at least one interval of a > 1 that is compatible with constraints given by the ranges of the central compact object mass independently estimated for the three sources. For most of the models, the inferred values of a are several times higher than the extreme Kerr black hole value a = 1. These values may be too high, since the spin of superspinars is often assumed to rapidly decrease due to accretion when a ≫ 1. In this context, we conclude that only the epicyclic and the Keplerian resonance models provide estimates that are compatible with the expectation of just a small deviation from a = 1.
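Spin intervals in such QPO models come from matching observed frequencies to Kerr orbital and epicyclic frequencies, which remain well defined for a > 1. A sketch using the standard Kerr formulas; the 10 solar-mass normalization and the r, a values are illustrative only:

```python
import numpy as np

def kerr_frequencies(r, a, m_sun=10.0):
    """Keplerian and epicyclic frequencies (Hz) of prograde circular orbits
    in Kerr geometry; r in units of GM/c^2, spin a = cJ/GM^2 (values a > 1
    correspond to a super-spinning compact object)."""
    G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
    M = m_sun * Msun
    nu_phi = c**3 / (2.0 * np.pi * G * M) / (r**1.5 + a)          # orbital
    nu_r = nu_phi * np.sqrt(1 - 6/r + 8*a/r**1.5 - 3*a**2/r**2)   # radial
    nu_theta = nu_phi * np.sqrt(1 - 4*a/r**1.5 + 3*a**2/r**2)     # vertical
    return nu_phi, nu_r, nu_theta

# An epicyclic resonance model matches, e.g., nu_theta : nu_r = 3 : 2
print(kerr_frequencies(r=8.0, a=1.2))
```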
Approximation of epidemic models by diffusion processes and their statistical inference.
Guy, Romain; Larédo, Catherine; Vergu, Elisabeta
2015-02-01
Multidimensional continuous-time Markov jump processes (Z(t)) on Z^p form a usual set-up for modeling SIR-like epidemics. However, when facing incomplete epidemic data, inference based on (Z(t)) is not easily achieved. Here, we start building a new framework for the estimation of key parameters of epidemic models based on statistics of diffusion processes approximating (Z(t)). First, previous results on the approximation of density-dependent SIR-like models by diffusion processes with small diffusion coefficient proportional to 1/sqrt(N), where N is the population size, are generalized to non-autonomous systems. Second, our previous inference results on discretely observed diffusion processes with small diffusion coefficient are extended to time-dependent diffusions. Consistent and asymptotically Gaussian estimates are obtained for a fixed number n of observations, which corresponds to the epidemic context, and for N → ∞. A correction term, which yields better estimates non-asymptotically, is also included. Finally, performances and robustness of our estimators with respect to various parameters such as R0 (the basic reproduction number), N, and n are investigated on simulations. Two models, SIR and SIRS, corresponding to single and recurrent outbreaks, respectively, are used to simulate data. The findings indicate that our estimators have good asymptotic properties and behave noticeably well for realistic numbers of observations and population sizes. This study lays the foundations of a generic inference method currently under extension to incompletely observed epidemic data. Indeed, contrary to the majority of current inference techniques for partially observed processes, which necessitate computer-intensive simulations, our method, being mostly an analytical approach, requires only the classical optimization steps.
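A minimal Euler-Maruyama simulation of the diffusion approximation of an SIR epidemic shows the 1/sqrt(N) noise scaling exploited above (all parameter values are hypothetical):

```python
import numpy as np

def sir_diffusion(beta, gamma, s0, i0, N, dt, n_steps, rng):
    """Euler-Maruyama simulation of the SIR diffusion approximation in
    proportions (s, i); the noise scale 1/sqrt(N) shrinks with population size."""
    s, i = s0, i0
    out = [(s, i)]
    eps = 1.0 / np.sqrt(N)
    for _ in range(n_steps):
        inf, rec = beta * s * i, gamma * i               # event intensities
        dw1, dw2 = rng.standard_normal(2) * np.sqrt(dt)
        s += -inf * dt - eps * np.sqrt(max(inf, 0.0)) * dw1
        i += (inf - rec) * dt + eps * np.sqrt(max(inf, 0.0)) * dw1 \
             - eps * np.sqrt(max(rec, 0.0)) * dw2
        s, i = min(max(s, 0.0), 1.0), min(max(i, 0.0), 1.0)
        out.append((s, i))
    return np.array(out)

rng = np.random.default_rng(1)
path = sir_diffusion(beta=1.5, gamma=0.5, s0=0.99, i0=0.01, N=10000,
                     dt=0.01, n_steps=5000, rng=rng)   # R0 = beta/gamma = 3
print(path[-1])
```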
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in the unsupervised case, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
Assimilative modeling of low latitude ionosphere
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Wang, Chunining; Hajj, George A.; Rosen, I. Gary; Wilson, Brian D.; Mannucci, Anthony J.
2004-01-01
In this paper we present an observation system simulation experiment for modeling low-latitude ionosphere using a 3-dimensional (3-D) global assimilative ionospheric model (GAIM). The experiment is conducted to test the effectiveness of GAIM with a 4-D variational approach (4DVAR) in estimation of the ExB drift and thermospheric wind in the magnetic meridional planes simultaneously for all longitude or local time sectors. The operational Global Positioning System (GPS) satellites and the ground-based global GPS receiver network of the International GPS Service are used in the experiment as the data assimilation source. The optimization of the ionospheric state (electron density) modeling is performed through a nonlinear least-squares minimization process that adjusts the dynamical forces to reduce the difference between the modeled and observed slant total electron content in the entire modeled region. The present experiment for multiple force estimations reinforces our previous assessment made through single driver estimations conducted for the ExB drift only.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Ray, A.; Key, K.
2017-12-01
Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique; there are many, many such models, but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley, one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.
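The flavor of the approach can be conveyed with a fixed-dimension random-walk Metropolis sampler on a toy one-parameter conductivity posterior; the study's sampler is trans-dimensional (it also proposes adding and removing model layers), which this sketch does not attempt:

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis: accept proposals with probability
    min(1, posterior ratio); the chain's histogram approximates the posterior."""
    x, lp = x0, log_post(x0)
    chain = np.empty(n_samples)
    for k in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[k] = x
    return chain

# Toy posterior: log10-conductivity with a Gaussian likelihood around -2.0
rng = np.random.default_rng(2)
chain = metropolis(lambda m: -0.5 * ((m + 2.0) / 0.3) ** 2,
                   x0=0.0, step=0.2, n_samples=20000, rng=rng)
print(chain[5000:].mean(), chain[5000:].std())   # posterior mean and spread
```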
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed for the first time an error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
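For context, one rejection-free KMC step looks as follows; the error measure above concerns processes absent from the rate catalogue, as noted in the comment (the rates are hypothetical):

```python
import numpy as np

def kmc_step(rates, time, rng):
    """One rejection-free KMC step: pick a process with probability
    proportional to its rate and advance the clock by an exponentially
    distributed waiting time."""
    total = rates.sum()
    k = np.searchsorted(np.cumsum(rates), rng.uniform() * total)
    dt = -np.log(rng.uniform()) / total
    return k, time + dt

# The error measure concerns processes *missing* from `rates`: a missing
# process with rate r_miss would be selected with probability roughly
# r_miss / (r_miss + rates.sum()) in the correct dynamics.
rng = np.random.default_rng(6)
rates = np.array([1.0, 0.5, 0.1])   # hypothetical catalogued process rates
print(kmc_step(rates, 0.0, rng))
```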
Recovering Parameters of Johnson's SB Distribution
Bernard R. Parresol
2003-01-01
A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...
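Parresol's approach recovers parameters from sample statistics; as a generic point of comparison, a maximum-likelihood fit of all four Johnson SB parameters is a one-liner in SciPy (a sketch, not the paper's recovery method; the simulated sample stands in for diameter data):

```python
import numpy as np
from scipy import stats

# Simulate a sample from a known Johnson SB distribution, then recover the
# parameters by maximum likelihood (shapes a, b; location = minimum; scale = range)
rng = np.random.default_rng(3)
sample = stats.johnsonsb.rvs(a=1.0, b=1.5, loc=2.0, scale=10.0,
                             size=2000, random_state=rng)
a, b, loc, scale = stats.johnsonsb.fit(sample)
print(a, b, loc, scale)   # should land near (1.0, 1.5, 2.0, 10.0)
```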
LeBlanc, Mallory; Allen, Joseph G; Herrick, Robert F; Stewart, James H
2018-03-01
The Advanced Reach Tool V1.5 (ART) is a mathematical model for occupational exposures conceptually based on, but implemented differently than, the "classic" Near Field/Far Field (NF/FF) exposure model. The NF/FF model conceptualizes two distinct exposure "zones": the near field, within approximately 1 m of the breathing zone, and the far field, consisting of the rest of the room in which the exposure occurs. ART has been reported to provide "realistic and reasonable worst case" estimates of the exposure distribution. In this study, benzene exposure during the use of a metal parts washer was modeled using ART V1.5, and compared to actual measured worker samples and to NF/FF model results from three previous studies. Next, the exposure concentrations expected to be exceeded 25%, 10% and 5% of the time for the exposure scenario were calculated using ART. Lastly, ART exposure estimates were compared with and without Bayesian adjustment. The modeled parts washing benzene exposure scenario included distinct tasks, e.g. spraying, brushing, rinsing and soaking/drying. Because ART can directly incorporate specific types of tasks that are part of the exposure scenario, the present analysis identified each task's determinants of exposure and performance time, thus extending the work of the previous three studies where the process of parts washing was modeled as one event. The ART 50th percentile exposure estimate for benzene (0.425 ppm) more closely approximated the reported measured mean value of 0.50 ppm than the NF/FF model estimates of 0.33 ppm, 0.070 ppm or 0.2 ppm obtained from other modeling studies of this exposure scenario. The ART model with the Bayesian analysis provided the closest estimate to the measured value (0.50 ppm). ART (with Bayesian adjustment) was then used to assess the 75th, the 90th and the 95th percentile exposures, predicting that on randomly selected days during this parts washing exposure scenario, 25% of the benzene exposures would be above 0.70 ppm; 10% above 0.95 ppm; and 5% above 1.15 ppm. These exposure estimates at the three different percentiles of the ART exposure distribution refer to the modeled exposure scenario, not a specific workplace or worker. This study provides a detailed comparison of modeling tools currently available to occupational hygienists and other exposure assessors. Possible applications are considered. Copyright © 2017 Elsevier GmbH. All rights reserved.
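The steady-state form of the classic NF/FF model referenced above is compact enough to sketch (the inputs below are hypothetical, not the parts-washer scenario's actual determinants):

```python
def nf_ff_steady_state(G, Q, beta):
    """Steady-state near-field / far-field concentrations.

    G    : contaminant emission rate (mg/min)
    Q    : room ventilation rate (m^3/min)
    beta : air exchange rate between the two zones (m^3/min)
    """
    c_ff = G / Q                  # far-field concentration (mg/m^3)
    c_nf = c_ff + G / beta        # near field adds the inter-zone term
    return c_nf, c_ff

# Hypothetical inputs for illustration only
print(nf_ff_steady_state(G=100.0, Q=20.0, beta=5.0))   # (25.0, 5.0)
```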
Theoretical study of gas hydrate decomposition kinetics: model predictions.
Windmeier, Christoph; Oellrich, Lothar R
2013-11-27
In order to provide an estimate of intrinsic gas hydrate dissolution and dissociation kinetics, the Consecutive Desorption and Melting Model (CDM) was developed in a previous publication (Windmeier, C.; Oellrich, L. R. J. Phys. Chem. A 2013, 117, 10151-10161). In this work, an extensive summary of required model data is given. Obtained model predictions are discussed with respect to their temperature dependence as well as their significance for technically relevant areas of gas hydrate decomposition. As a result, an expression for determination of the intrinsic gas hydrate decomposition kinetics for various hydrate formers is given together with an estimate for the maximum possible rates of gas hydrate decomposition.
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S
2016-01-01
Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.
Lupo, Philip J; Symanski, Elaine
2009-11-01
Often, in studies evaluating the health effects of hazardous air pollutants (HAPs), researchers rely on ambient air levels to estimate exposure. Two potential data sources are modeled estimates from the U.S. Environmental Protection Agency (EPA) Assessment System for Population Exposure Nationwide (ASPEN) and ambient air pollutant measurements from monitoring networks. The goal was to conduct comparisons of modeled and monitored estimates of HAP levels in the state of Texas using traditional approaches and a previously unexploited method, concordance correlation analysis, to better inform decisions regarding agreement. Census tract-level ASPEN estimates and monitoring data for all HAPs throughout Texas, available from the EPA Air Quality System, were obtained for 1990, 1996, and 1999. Monitoring sites were mapped to census tracts using U.S. Census data. Exclusions were applied to restrict the monitored data to measurements collected using a common sampling strategy with minimal missing values over time. Comparisons were made for 28 HAPs in 38 census tracts located primarily in urban areas throughout Texas. For each pollutant and by year of assessment, modeled and monitored air pollutant annual levels were compared using standard methods (i.e., ratios of model-to-monitor annual levels). Concordance correlation analysis was also used, which assesses linearity and agreement while providing a formal method of statistical inference. Forty-eight percent of the median model-to-monitor values fell between 0.5 and 2, whereas only 17% of concordance correlation coefficients were significant and greater than 0.5. On the basis of concordance correlation analysis, the findings indicate there is poorer agreement when compared with the previously applied ad hoc methods to assess comparability between modeled and monitored levels of ambient HAPs.
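Lin's concordance correlation coefficient, the "previously unexploited method" referred to above, can be computed directly (the annual means below are hypothetical):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes both scatter
    about the best-fit line and departure from the 45-degree identity line."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

modeled   = [0.8, 1.1, 0.6, 1.9, 2.4]   # hypothetical ASPEN annual means
monitored = [1.0, 1.3, 0.5, 2.2, 2.1]   # hypothetical monitored annual means
print(concordance_ccc(modeled, monitored))
```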
Comparing population size estimators for plethodontid salamanders
Bailey, L.L.; Simons, T.R.; Pollock, K.H.
2004-01-01
Despite concern over amphibian declines, few studies estimate absolute abundances because of logistic and economic constraints and previously poor estimator performance. Two estimation approaches recommended for amphibian studies are mark-recapture and depletion (or removal) sampling. We compared abundance estimation via various mark-recapture and depletion methods, using data from a three-year study of terrestrial salamanders in Great Smoky Mountains National Park. Our results indicate that short-term closed-population, robust design, and depletion methods estimate surface population of salamanders (i.e., those near the surface and available for capture during a given sampling occasion). In longer duration studies, temporary emigration violates assumptions of both open- and closed-population mark-recapture estimation models. However, if the temporary emigration is completely random, these models should yield unbiased estimates of the total population (superpopulation) of salamanders in the sampled area. We recommend using Pollock's robust design in mark-recapture studies because of its flexibility to incorporate variation in capture probabilities and to estimate temporary emigration probabilities.
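As a concrete example of the closed-population estimators compared above, here is Chapman's bias-corrected Lincoln-Petersen estimator (a sketch; the counts are hypothetical):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen closed-population estimator.

    n1 : animals captured and marked on the first occasion
    n2 : animals captured on the second occasion
    m2 : marked animals among the second-occasion captures
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var

print(chapman_estimate(60, 55, 12))   # point estimate and variance
```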
Estimates of movement and site fidelity using mark-resight data of wintering Canada geese
Hestbeck, J.B.; Nichols, J.D.; Malecki, R.A.
1991-01-01
Population ecologists have devoted disproportionate attention to the estimation and study of birth and death rates and far less effort to rates of movement. Movement and fidelity to wintering areas have important ecological and evolutionary implications for avian populations. Previous inferences about movement among and fidelity to wintering areas have been restricted by limitations of data and methodology. We use multiple observation data from a large-scale capture-resighting study of Canada Geese in the Atlantic flyway to estimate probabilities of returning to previous wintering locations and moving to new locations. Mark-resight data from 28,849 Canada Geese (Branta canadensis) banded with individually coded neck bands in the mid-Atlantic (New York, Pennsylvania, New Jersey), Chesapeake (Delaware, Maryland, Virginia), and Carolinas (North and South Carolina) were used to estimate movement and site fidelity. Two three-sample mark-resight models were developed and programmed using SURVIV to estimate the probability of moving among or remaining within these three wintering regions. The model (MV2) that incorporated 'tradition' or memory of previous wintering regions fit the data better than the model (MV1) that assumes that a first-order Markov chain describes movement among regions. Considerable levels of movement occurred among regions of the Atlantic flyway. The annual probability of remaining in the same region for two successive winters, used as a measure of site fidelity, was (estimated mean ± SE) 0.710 ± 0.016, 0.889 ± 0.006, and 0.562 ± 0.025 for the mid-Atlantic, Chesapeake, and Carolinas, respectively. The estimated probability of moving between years corresponded to changes in winter harshness. In warm years, geese moved north, and in cold years they moved south. Geese had a high probability of moving to and remaining in the Chesapeake. Annual changes in the movement probabilities did not correspond to annual changes in the United States Fish and Wildlife Service midwinter survey. Considerable numbers of geese from the Carolinas appeared to winter in more northerly locations (were short-stopped) in subsequent winters.
NASA Technical Reports Server (NTRS)
Naesset, Erik; Gobakken, Terje; Bollandsas, Ole Martin; Gregoire, Timothy G.; Nelson, Ross; Stahl, Goeran
2013-01-01
Airborne scanning LiDAR (Light Detection and Ranging) has emerged as a promising tool to provide auxiliary data for sample surveys aiming at estimation of above-ground tree biomass (AGB), with potential applications in REDD forest monitoring. For larger geographical regions such as counties, states or nations, it is not feasible to collect airborne LiDAR data continuously ("wall-to-wall") over the entire area of interest. Two-stage cluster survey designs have therefore been demonstrated by which LiDAR data are collected along selected individual flight-lines treated as clusters and with ground plots sampled along these LiDAR swaths. Recently, analytical AGB estimators and associated variance estimators that quantify the sampling variability have been proposed. Empirical studies employing these estimators have shown a seemingly equal or even larger uncertainty of the AGB estimates obtained with extensive use of LiDAR data to support the estimation as compared to pure field-based estimates employing estimators appropriate under simple random sampling (SRS). However, comparison of uncertainty estimates under SRS and sophisticated two-stage designs is complicated by large differences in the designs and assumptions. In this study, probability-based principles to estimation and inference were followed. We assumed designs of a field sample and a LiDAR-assisted survey of Hedmark County (HC) (27,390 km2), Norway, considered to be more comparable than those assumed in previous studies. The field sample consisted of 659 systematically distributed National Forest Inventory (NFI) plots and the airborne scanning LiDAR data were collected along 53 parallel flight-lines flown over the NFI plots. We compared AGB estimates based on the field survey only assuming SRS against corresponding estimates assuming two-phase (double) sampling with LiDAR and employing model-assisted estimators. We also compared AGB estimates based on the field survey only assuming two-stage sampling (the NFI plots being grouped in clusters) against corresponding estimates assuming two-stage sampling with the LiDAR and employing model-assisted estimators. For each of the two comparisons, the standard errors of the AGB estimates were consistently lower for the LiDAR-assisted designs. The overall reduction of the standard errors in the LiDAR-assisted estimation was around 40-60% compared to the pure field survey. We conclude that the previously proposed two-stage model-assisted estimators are inappropriate for surveys with unequal lengths of the LiDAR flight-lines and new estimators are needed. Some options for design of LiDAR-assisted sample surveys under REDD are also discussed, which capitalize on the flexibility offered when the field survey is designed as an integrated part of the overall survey design as opposed to previous LiDAR-assisted sample surveys in the boreal and temperate zones which have been restricted by the current design of an existing NFI.
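The core of a model-assisted two-phase estimator is a field-plot residual correction added to the mean of the LiDAR predictions. The sketch below deliberately ignores the cluster structure and the unequal flight-line lengths that the study shows matter; all numbers are hypothetical:

```python
import numpy as np

def two_phase_total(y_hat_lidar, y_field, y_hat_field, N):
    """Model-assisted (regression-type) estimator of a population total under
    two-phase sampling: predictions on a large LiDAR first-phase sample,
    with field plots on a subsample correcting the model's systematic error."""
    correction = np.mean(y_field - y_hat_field)       # mean field residual
    return N * (np.mean(y_hat_lidar) + correction)    # e.g., Mg of AGB

# Hypothetical per-plot AGB values
rng = np.random.default_rng(7)
y_hat_lidar = rng.gamma(4.0, 2.0, size=5000)       # predictions on LiDAR strips
y_hat_field = y_hat_lidar[:300]                    # predictions at field plots
y_field = y_hat_field + rng.normal(0.5, 1.0, 300)  # measured AGB at field plots
print(two_phase_total(y_hat_lidar, y_field, y_hat_field, N=100000))
```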
Closed-form estimates of the domain of attraction for nonlinear systems via fuzzy-polynomial models.
Pitarch, José Luis; Sala, Antonio; Ariño, Carlos Vicente
2014-04-01
In this paper, the domain of attraction of the origin of a nonlinear system is estimated in closed form via level sets with polynomial boundaries, iteratively computed. In particular, the domain of attraction is expanded from a previous estimate, such as a classical Lyapunov level set. With the use of fuzzy-polynomial models, the domain of attraction analysis can be carried out via sum of squares optimization and an iterative algorithm. The result is a function that bounds the domain of attraction, free from the usual restriction of being positive and decrescent in all the interior of its level sets.
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
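A literal transcription of the method's central arithmetic is sketched below, heavily hedged since the paper itself flags open questions about estimating the amplification factor; all values, function names, and arguments are mine, not the paper's:

```python
def residual_confounding(effect_base, effect_added, amplification,
                         outcome_adjustment=0.0):
    """Sketch of the described arithmetic: the change in the treatment effect
    estimate between the two nested propensity score models, adjusted for any
    association between the introduced variable(s) and the outcome, divided
    by the estimated degree of confounding amplification."""
    return (effect_added - effect_base - outcome_adjustment) / amplification

# Hypothetical effect estimates (e.g., log odds ratios) from the two nested
# models, with an assumed 40% amplification of confounding in the second
print(residual_confounding(effect_base=0.30, effect_added=0.42,
                           amplification=0.40))
```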
Estimating and modeling the cure fraction in population-based cancer survival analysis.
Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W
2007-07-01
In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
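Both model families have simple closed forms. A sketch with Weibull survival for the uncured group (parameters hypothetical), with the background-mortality multiplication used in population-based analyses noted in a comment:

```python
import numpy as np

def mixture_relative_survival(t, pi, lam, gam):
    """Mixture cure model: R(t) = pi + (1 - pi) * S_u(t), with Weibull
    survival S_u(t) = exp(-lam * t**gam) for the uncured fraction."""
    return pi + (1.0 - pi) * np.exp(-lam * t ** gam)

def nonmixture_relative_survival(t, pi, lam, gam):
    """Non-mixture cure model: R(t) = pi ** F(t) with a Weibull CDF F;
    R(t) -> pi as t grows, so pi is again the cure fraction."""
    F = 1.0 - np.exp(-lam * t ** gam)
    return pi ** F

t = np.linspace(0.0, 10.0, 6)
# Population-based analyses multiply R(t) by expected survival S*(t) from
# life tables to obtain all-cause survival: S(t) = S_star(t) * R(t)
print(mixture_relative_survival(t, 0.4, 0.5, 1.2))
print(nonmixture_relative_survival(t, 0.4, 0.5, 1.2))
```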
ImSET: Impact of Sector Energy Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roop, Joseph M.; Scott, Michael J.; Schultz, Robert W.
2005-07-19
This version of the Impact of Sector Energy Technologies (ImSET) model represents the "next generation" of the Visual Basic model (ImBUILD 2.0) developed in 2003 to estimate the macroeconomic impacts of energy-efficient technology in buildings. More specifically, a special-purpose version of the 1997 benchmark national Input-Output (I-O) model was designed specifically to estimate the national employment and income effects of the deployment of Office of Energy Efficiency and Renewable Energy (EERE)-developed energy-saving technologies. In comparison with the previous versions of the model, this version allows for more complete and automated analysis of the essential features of energy efficiency investments in buildings, industry, transportation, and the electric power sectors. This version also incorporates improvements in the treatment of operations and maintenance costs, and improves the treatment of financing of investment options. ImSET is also easier to use than extant macroeconomic simulation models and incorporates information developed by each of the EERE offices as part of the requirements of the Government Performance and Results Act.
CAT Model with Personalized Algorithm for Evaluation of Estimated Student Knowledge
ERIC Educational Resources Information Center
Andjelic, Svetlana; Cekerevac, Zoran
2014-01-01
This article presents the original model of the computer adaptive testing and grade formation, based on scientifically recognized theories. The base of the model is a personalized algorithm for selection of questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.
The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. Additionally, PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS and as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort (Elliott et al. 2004b). The areas most changed since the publication of that previous document are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well-known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1989-01-01
Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
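A common clutterlock-style estimator, not necessarily the optimal QHS estimator described above, reads the Doppler centroid off the phase of the azimuth signal's first autocorrelation lag. A sketch with a synthetic check (the sign convention depends on the system's phase definition):

```python
import numpy as np

def doppler_centroid(az_samples, prf):
    """Correlation Doppler estimator: the phase of the first autocorrelation
    lag of the azimuth signal is proportional to the Doppler centroid."""
    acf1 = np.sum(az_samples[1:] * np.conj(az_samples[:-1]))
    return prf * np.angle(acf1) / (2.0 * np.pi)

# Synthetic check: a complex exponential at 120 Hz sampled at PRF = 1700 Hz
prf, f_true = 1700.0, 120.0
n = np.arange(4096)
sig = np.exp(2j * np.pi * f_true * n / prf)
print(doppler_centroid(sig, prf))   # ~120 Hz
```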
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
Respiratory rate estimation during triage of children in hospitals.
Shah, Syed Ahmar; Fleming, Susannah; Thompson, Matthew; Tarassenko, Lionel
2015-01-01
Accurate assessment of a child's health is critical for appropriate allocation of medical resources and timely delivery of healthcare in Emergency Departments. The accurate measurement of vital signs is a key step in the determination of the severity of illness, and respiratory rate is currently the most difficult vital sign to measure accurately. Several previous studies have attempted to extract respiratory rate from photoplethysmogram (PPG) recordings. However, the majority have been conducted in controlled settings using PPG recordings from healthy subjects. In many studies, manual selection of clean sections of PPG recordings was undertaken before assessing the accuracy of the signal processing algorithms developed. Such selection procedures are not appropriate in clinical settings. A major limitation of AR modelling, as previously applied to respiratory rate estimation, is the appropriate selection of model order. This study developed a novel algorithm that automatically estimates respiratory rate from a median spectrum constructed by applying multiple AR models to processed PPG segments acquired with pulse oximetry using a finger probe. Good-quality sections were identified using a dynamic template-matching technique to assess PPG signal quality. The algorithm was validated on 205 children presenting to the Emergency Department at the John Radcliffe Hospital, Oxford, UK, with reference respiratory rates up to 50 breaths per minute estimated by paediatric nurses. At the time of writing, the authors are not aware of any other study that has validated respiratory rate estimation using data collected from over 200 children in hospitals during routine triage.
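A sketch of the median-of-AR-spectra idea follows, using least-squares AR fits; the 0.1-0.8 Hz search band and the order set are assumptions, and the published algorithm's signal-quality gating is omitted:

```python
import numpy as np

def ar_psd(x, order, freqs, fs):
    """Least-squares AR fit and its power spectral density on a frequency grid."""
    x = np.asarray(x, float) - np.mean(x)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1.0 - sum(a[k] * z ** (k + 1) for k in range(order))
    return 1.0 / np.abs(denom) ** 2

def respiratory_rate(resp_signal, fs, orders=range(6, 14)):
    """Median of several AR-model spectra, peak-picked in 0.1-0.8 Hz
    (6-48 breaths/min); sidesteps choosing a single AR model order."""
    freqs = np.linspace(0.1, 0.8, 200)
    spectra = np.array([ar_psd(resp_signal, p, freqs, fs) for p in orders])
    return 60.0 * freqs[np.argmax(np.median(spectra, axis=0))]

# Synthetic check: a 0.4 Hz (24 breaths/min) respiratory oscillation in noise
fs = 5.0
t = np.arange(0, 60, 1 / fs)
sig = np.sin(2 * np.pi * 0.4 * t) \
      + 0.3 * np.random.default_rng(5).standard_normal(len(t))
print(respiratory_rate(sig, fs))   # ~24 breaths per minute
```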
Wilson, R; Abbott, J H
2018-04-01
To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Diegoli, Toni Marie; Rohde, Heinrich; Borowski, Stefan; Krawczak, Michael; Coble, Michael D; Nothnagel, Michael
2016-11-01
Typing of X chromosomal short tandem repeat (X STR) markers has become a standard element of human forensic genetic analysis. Joint consideration of many X STR markers at a time increases their discriminatory power but, owing to physical linkage, requires inter-marker recombination rates to be accurately known. We estimated the recombination rates between 15 well established X STR markers using genotype data from 158 families (1041 individuals) and following a previously proposed likelihood-based approach that allows for single-step mutations. To meet the computational requirements of this family-based type of analysis, we modified a previous implementation so as to allow multi-core parallelization on a high-performance computing system. While we obtained recombination rate estimates larger than zero for all but one pair of adjacent markers within the four previously proposed linkage groups, none of the three X STR pairs defining the junctions of these groups yielded a recombination rate estimate of 0.50. Corroborating previous studies, our results therefore argue against a simple model of independent X chromosomal linkage groups. Moreover, the refined recombination fraction estimates obtained in our study will facilitate the appropriate joint consideration of all 15 investigated markers in forensic analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Venus - Global gravity and topography
NASA Technical Reports Server (NTRS)
Mcnamee, J. B.; Borderies, N. J.; Sjogren, W. L.
1993-01-01
A new gravity field determination has been produced that combines both the Pioneer Venus Orbiter (PVO) and the Magellan Doppler radio data. Comparisons between this estimate, a spherical harmonic model of degree and order 21, and previous models show that significant improvements have been made. Results are displayed as gravity contours overlaying a topographic map. We also calculate a new spherical harmonic model of topography based on Magellan altimetry, with PVO altimetry included where gaps exist in the Magellan data. This model is also of degree and order 21, so in conjunction with the gravity model, Bouguer and isostatic anomaly maps can be produced. These results are very consistent with previous results, but reveal more spatial resolution in the higher latitudes.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
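A sketch of the bootstrap imputation idea: draw disease status from model-based probabilities rather than thresholding them, and resample to propagate the imputation uncertainty (the probabilities below are simulated, not patient data):

```python
import numpy as np

def bootstrap_prevalence(p_model, n_boot, rng):
    """Impute binary disease status by Bernoulli draws from model-derived
    probabilities; bootstrap resampling yields an interval for prevalence."""
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(p_model), len(p_model))       # resample
        status = rng.uniform(size=len(p_model)) < p_model[idx]  # Bernoulli draw
        estimates[b] = status.mean()
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

rng = np.random.default_rng(4)
p = rng.beta(0.5, 10.0, size=50074)   # hypothetical per-patient probabilities
print(bootstrap_prevalence(p, 500, rng))
```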
Influences of High School Curriculum on Determinants of Labor Market Experiences.
ERIC Educational Resources Information Center
Gardner, John A.; And Others
This study extends previous research on labor market effects of vocational education by explicitly modeling the intervening factors in the relationship between secondary vocational education and labor market outcomes. The strategy is to propose and estimate a simplified, recursive model that can contribute to understanding why positive earnings…
Recent literature has shown that bioavailability-based techniques, such as Tenax extraction, can estimate sediment exposure to benthos. In a previous study by the authors, Tenax extraction was used to create and validate a literature-based Tenax model to predict oligochaete bioac...
Tree mortality risk of oak due to gypsy moth
K.W. Gottschalk; J.J. Colbert; D.L. Feicht
1998-01-01
We present prediction models for estimating tree mortality resulting from gypsy moth, Lymantria dispar, defoliation in mixed oak, Quercus sp., forests. These models differ from previous work by including defoliation as a factor in the analysis. Defoliation intensity, initial tree crown condition (crown vigour), crown position, and...
Price elasticity reconsidered: Panel estimation of an agricultural water demand function
NASA Astrophysics Data System (ADS)
Schoengold, Karina; Sunding, David L.; Moreno, Georgina
2006-09-01
Using panel data from a period of water rate reform, this paper estimates the price elasticity of irrigation water demand. Price elasticity is decomposed into the direct effect of water management and the indirect effect of water price on choice of output and irrigation technology. The model is estimated using an instrumental variables strategy to account for the endogeneity of technology and output choices in the water demand equation. Estimation results indicate that the price elasticity of agricultural water demand is -0.79, which is greater than that found in previous studies.
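The instrumental-variables strategy can be sketched with a manual 2SLS estimator; the instruments and the simulated data-generating process below are hypothetical, chosen so the "true" elasticity equals the paper's -0.79:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Manual 2SLS: regress the endogenous columns of X on the instruments Z,
    then regress y on the first-stage fitted values."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto instruments
    Xh = P @ X                                 # first-stage fitted values
    return np.linalg.solve(Xh.T @ X, Xh.T @ y)

rng = np.random.default_rng(9)
n = 500
z1, z2 = rng.normal(size=n), rng.normal(size=n)   # hypothetical instruments
u = rng.normal(size=n)                            # shock driving endogeneity
endog = 0.7 * z1 - 0.4 * z2 + u + 0.5 * rng.normal(size=n)   # endogenous regressor
y = 1.0 - 0.79 * endog + u + 0.5 * rng.normal(size=n)        # OLS would be biased
X = np.column_stack([np.ones(n), endog])
Z = np.column_stack([np.ones(n), z1, z2])
print(two_stage_least_squares(y, X, Z))   # ~[1.0, -0.79]
```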
Generic Sensor Modeling Using Pulse Method
NASA Technical Reports Server (NTRS)
Helder, Dennis L.; Choi, Taeyoung
2005-01-01
Recent development of high spatial resolution satellites such as IKONOS, Quickbird and Orbview enables observation of the Earth's surface with sub-meter resolution. Compared to the 30 meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Various methods, sometimes classified by target shape, were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data using a bridge target as a pulse input. Because of a high resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area, with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th-order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations on known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuing research activity using the developed sensor model, this report is dedicated to characterizing MTF estimation via the pulse input method using Fermi edge detection and 4th-order MSG interpolation. The relationship between pulse width and the MTF value at Nyquist was studied, including error detection and correction schemes. Pulse target angle sensitivity was studied by using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulations, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target-width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3-pixel-wide targets.
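A rough sketch of the pulse-method computation, assuming a Gaussian system PSF and a 3-pixel-wide pulse target: the measured profile is smoothed with a 4th-order Savitzky-Golay filter (loosely standing in for the MSG interpolation above), and MTF is the ratio of output to input spectra. Note the input pulse spectrum has nulls at multiples of 1/width, which is why pulse width and target-width errors matter for the MTF value at Nyquist.

```python
import numpy as np
from scipy.signal import savgol_filter

dx = 0.25                 # sub-pixel sample spacing (pixels)
x = np.arange(-20, 20, dx)
w = 3.0                   # pulse (tarp) width in pixels
pulse_in = ((x > -w / 2) & (x < w / 2)).astype(float)

# Placeholder "measured" profile: input blurred by a Gaussian PSF plus noise.
psf = np.exp(-x**2 / (2 * 0.7**2)); psf /= psf.sum()
noise = 0.01 * np.random.default_rng(3).normal(size=x.size)
measured = np.convolve(pulse_in, psf, mode="same") + noise
smoothed = savgol_filter(measured, window_length=15, polyorder=4)

f = np.fft.rfftfreq(x.size, d=dx)  # cycles/pixel; Nyquist = 0.5
spec_in = np.abs(np.fft.rfft(pulse_in))
mtf = np.abs(np.fft.rfft(smoothed)) / np.maximum(spec_in, 1e-9)
mtf /= mtf[0]   # values near the sinc nulls (f = k/w) are unreliable
print(f"MTF at Nyquist: {np.interp(0.5, f, mtf):.3f}")
```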
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savy, J.
New design and evaluation guidelines for Department of Energy facilities subjected to natural phenomena hazards are being finalized. Although still in draft form at this time, the document describing those guidelines should be considered an update of previously available guidelines. The recommendations in the guidelines document mentioned above, simply referred to as "the guidelines" hereafter, are based on the best information available at the time of its development. In particular, the seismic hazard model for the Princeton site was based on a study performed in 1981 for Lawrence Livermore National Laboratory (LLNL), which relied heavily on the results of the NRC's Systematic Evaluation Program and was based on a methodology and data sets developed in 1977 and 1978. Considerable advances have been made in the last ten years in the domain of seismic hazard modeling. Thus, it is recommended to update the estimate of the seismic hazard at the DOE sites whenever possible. The major differences between previous estimates and the ones proposed in this study for the PPPL are in the modeling of the strong ground motion at the site, and in the treatment of the total uncertainty in the estimates to include knowledge uncertainty, random uncertainty, and expert opinion diversity as well.
Nurses wanted: Is the job too harsh or is the wage too low?
Di Tommaso, M L; Strøm, S; Saether, E M
2009-05-01
When entering the job market, nurses choose among different kinds of jobs. Each of these jobs is characterized by wage, sector (primary care or hospital) and shift (daytime or shift work). This paper estimates a multi-sector-job-type random utility model of labor supply on data for Norwegian registered nurses (RNs) in 2000. The empirical model implies that labor supply is rather inelastic; a 10% increase in the wage rates for all nurses is estimated to yield a 3.3% increase in overall labor supply. This modest aggregate response masks much stronger inter-job-type responses. Our approach differs from previous studies in two ways. First, to our knowledge, it is the first time that a model of labor supply for nurses is estimated taking explicitly into account the choices that RNs have regarding work place and type of job. Second, it differs from previous studies with respect to the measurement of the compensation for different types of work. So far, the focus has been on wage differentials, but a job has more attributes than the wage. Based on the estimated random utility model, we therefore calculate the expected compensation that makes a utility-maximizing agent indifferent between types of jobs, here between shift work and daytime work. It turns out that Norwegian nurses may be willing to accept shift work, relative to daytime work, for a lower wage than the current one.
New Estimates of Incidence of Encephalitis in England
Cousens, Simon; Davies, Nicholas W.S.; Crowcroft, Natasha S.; Thomas, Sara L.
2013-01-01
Encephalitis causes high rates of illness and death, yet its epidemiology remains poorly understood. To improve incidence estimates in England and inform priority setting and treatment and prevention strategies, we used hospitalization data to estimate incidence of infectious and noninfectious encephalitis during 2005–2009. Hospitalization data were linked to a dataset of extensively investigated cases of encephalitis from a prospective study, and capture–recapture models were applied. Incidence was estimated from unlinked hospitalization data as 4.32 cases/100,000 population/year. Capture–recapture models gave a best estimate of encephalitis incidence of 5.23 cases/100,000/year, although the models indicated incidence could be as high as 8.66 cases/100,000/year. This analysis indicates that the incidence of encephalitis in England is considerably higher than previously estimated. Therefore, encephalitis should be a greater priority for clinicians, researchers, and public health officials. PMID:23969035
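For intuition, a two-source capture-recapture calculation of the kind underlying such corrections can be done by hand; the study itself fit richer models. The counts and catchment population below are illustrative, and Chapman's nearly unbiased variant of the Lincoln-Petersen estimator is used.

```python
n_hosp = 380   # cases found in hospitalization data (illustrative)
n_study = 120  # cases found in the prospective study (illustrative)
n_both = 60    # linked cases appearing in both sources (illustrative)

# Chapman estimator of the total number of cases, including those missed
# by both sources.
N_hat = (n_hosp + 1) * (n_study + 1) / (n_both + 1) - 1
population = 8_000_000  # placeholder catchment population
print(f"estimated cases: {N_hat:.0f}; "
      f"incidence: {1e5 * N_hat / population:.2f} per 100,000/yr")
```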
Spatial-altitudinal and temporal variation of Degree Day Factors (DDFs) in the Upper Indus Basin
NASA Astrophysics Data System (ADS)
Khan, Asif; Attaullah, Haleema; Masud, Tabinda; Khan, Mujahid
2017-04-01
Melt contribution from snow and ice in the Hindukush-Karakoram-Himalayan (HKH) region could account for more than 80% of annual river flows in the Upper Indus Basin (UIB). Increases or decreases in precipitation, energy input and glacier reserves can significantly affect the water resources of this region. Improved hydrological modelling and accurate prediction of future water resources are therefore vital for food production and hydropower generation for the millions of people living downstream. In mountain regions, Degree Day Factors (DDFs) vary significantly with location and altitude, and are primary inputs to temperature-based hydrological modelling. However, previous studies have used different DDFs as calibration parameters without due attention to the physical meaning of the values employed, and these estimates carry significant variability and uncertainty. This study provides estimates of DDFs for various altitudinal zones in the UIB at the sub-basin level. Snow, clean ice and debris-covered ice melt at different rates (i.e., have different DDFs); therefore, areally averaged DDFs based on snow, clean-ice and debris-covered-ice classes in various altitudinal zones have been estimated for all sub-basins of the UIB. Zonal estimates of DDFs in the current study differ significantly from previously adopted DDFs, suggesting that earlier hydrological modelling studies should be revisited. The DDFs presented here have been validated using the Snowmelt Runoff Model (SRM) in various sub-basins, with good Nash-Sutcliffe coefficients (R2 > 0.85) and low volumetric errors (Dv < 10%). The DDFs and methods provided in this study can be used in future improved hydrological modelling and to provide accurate predictions of future river flow changes. The methodology used for estimating DDFs is robust and can be adopted to produce such estimates in other regions, particularly in the nearby HKH basins.
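A minimal sketch of an areally averaged degree-day calculation for one altitudinal zone: class-specific DDFs for snow, clean ice, and debris-covered ice are weighted by their area fractions and applied to positive degree-days. The DDF values and fractions are illustrative, not the study's estimates.

```python
import numpy as np

ddf = {"snow": 4.0, "clean_ice": 7.0, "debris_ice": 3.0}  # mm w.e. / (degC day)
area_frac = {"snow": 0.5, "clean_ice": 0.3, "debris_ice": 0.2}

ddf_zone = sum(ddf[k] * area_frac[k] for k in ddf)  # areally averaged DDF

temps = np.array([-2.0, 0.5, 3.1, 5.4, 4.2, -0.8])  # daily mean temps (degC)
pdd = np.clip(temps, 0.0, None).sum()               # positive degree-days
melt_mm = ddf_zone * pdd
print(f"zone DDF = {ddf_zone:.2f} mm/(degC day); melt = {melt_mm:.1f} mm w.e.")
```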
MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
Oceanic Fluxes of Mass, Heat and Freshwater: A Global Estimate and Perspective
NASA Technical Reports Server (NTRS)
MacDonald, Alison Marguerite
1995-01-01
Data from fifteen globally distributed, modern, high-resolution hydrographic oceanic transects are combined in an inverse calculation using large-scale box models. The models provide estimates of the global meridional heat and freshwater budgets and are used to examine the sensitivity of the global circulation, both inter- and intra-basin exchange rates, to a variety of external constraints provided by estimates of Ekman, boundary current and throughflow transports. A solution is found which is consistent with both the model physics and the global data set, despite a twenty-five-year time span and a lack of seasonal consistency among the data. The overall pattern of the global circulation suggested by the models is similar to that proposed in previously published local studies and regional reviews. However, significant qualitative and quantitative differences exist. These differences are due both to the model definition and to the global nature of the data set.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Li, Hailong; Xiao, Kai; Wang, Xuejing; Lu, Xiaoting; Zhang, Meng; An, An; Qu, Wenjing; Wan, Li; Zheng, Chunmiao; Wang, Xusheng; Jiang, Xiaowei
2017-10-01
Radium and radon mass balance models have been widely used to quantify submarine groundwater discharge (SGD) in coastal areas. However, the losses of radium or radon in seawater caused by recirculated saline groundwater discharge (RSGD) are ignored in most previous tracer-based studies, and this can lead to an underestimation of SGD. Here we present an improved method which accounts for the tracer losses caused by RSGD to enhance accuracy in estimating SGD and SGD-associated material loadings. Theoretical analysis indicates that neglecting the tracer losses induced by RSGD underestimates SGD by a percentage approximately equal to the tracer activity ratio of nearshore seawater to groundwater. Analysis of previous typical case studies shows that the existing models underestimated SGD by 1.9-93%, with an average of 32.2%. The method is applied to Jiaozhou Bay (JZB), North China, which is experiencing significant environmental pollution. The SGD flux into JZB estimated by the improved method is ~1.44 and ~1.34 times that estimated by the old method for the 226Ra and 228Ra mass balance models, respectively. Both SGD and RSGD fluxes are significantly higher than the discharge rate of the Dagu River (the largest river entering JZB). The fluxes of nutrients and metals through SGD are comparable to or even higher than those from local rivers, which indicates that SGD is an important source of chemicals into JZB and has an important impact on the marine ecological system.
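Taking the theoretical result above at face value, the corrected flux follows by rescaling the old mass balance estimate by one minus the seawater-to-groundwater activity ratio. A small sketch with illustrative activities (not Jiaozhou Bay measurements):

```python
A_sw = 25.0      # tracer activity in nearshore seawater (illustrative units)
A_gw = 100.0     # tracer activity in coastal groundwater (same units)
sgd_old = 1.0e6  # SGD from the uncorrected mass balance (m^3/day, placeholder)

ratio = A_sw / A_gw                      # fractional underestimation (~0.25)
sgd_corrected = sgd_old / (1.0 - ratio)  # restore the RSGD-driven tracer loss
print(f"corrected SGD = {sgd_corrected:.2e} m^3/day "
      f"({sgd_corrected / sgd_old:.2f}x the old estimate)")
```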
Dortel, Emmanuelle; Massiot-Granier, Félix; Rivot, Etienne; Million, Julien; Hallier, Jean-Pierre; Morize, Eric; Munaron, Jean-Marie; Bousquet, Nicolas; Chassot, Emmanuel
2013-01-01
Age estimates, typically determined by counting periodic growth increments in calcified structures of vertebrates, are the basis of population dynamics models used for managing exploited or threatened species. In fisheries research, the use of otolith growth rings as an indicator of fish age has increased considerably in recent decades. However, otolith readings include various sources of uncertainty. Current ageing methods, which convert an average count of rings into age, only provide periodic age estimates in which the range of uncertainty is fully ignored. In this study, we describe a hierarchical model for estimating individual ages from repeated otolith readings. The model was developed within a Bayesian framework to explicitly represent the sources of uncertainty associated with age estimation, to allow for individual variations and to include knowledge on parameters from expertise. The performance of the proposed model was examined through simulations, and then it was coupled to a two-stanza somatic growth model to evaluate the impact of the age estimation method on the age composition of commercial fisheries catches. We illustrate our approach using the sagittal otoliths of yellowfin tuna of the Indian Ocean collected through large-scale mark-recapture experiments. The simulation performance suggested that the ageing error model was able to estimate the ageing biases and provide accurate age estimates, regardless of the age of the fish. Coupled with the growth model, this approach appeared suitable for modeling the growth of Indian Ocean yellowfin and is consistent with findings of previous studies. The simulations showed that the choice of the ageing method can strongly affect growth estimates, with subsequent implications for age-structured data used as inputs for population models. Finally, our modeling approach proved particularly useful for reflecting uncertainty around age estimates in the process of growth estimation, and it can be applied to any study relying on age estimation. PMID:23637773
Improving Estimation of Ground Casualty Risk From Reentering Space Objects
NASA Technical Reports Server (NTRS)
Ostrom, Chris L.
2017-01-01
A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
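The simple analytical step can be sketched directly: for a circular orbit of inclination i, the argument of latitude u advances uniformly in time and sin(lat) = sin(i) sin(u), so the fraction of a period spent in a latitude band follows from arcsines. This assumes a spherical Earth, which the paper's numerical method relaxes; the inclination and band below are arbitrary examples.

```python
import numpy as np

def time_fraction_in_band(inc_deg, lat1_deg, lat2_deg):
    """Fraction of a circular orbit spent between lat1 and lat2 (one hemisphere)."""
    i = np.radians(inc_deg)
    lat1, lat2 = np.radians(lat1_deg), np.radians(lat2_deg)
    lat2 = min(lat2, i)          # latitude cannot exceed the inclination
    if lat1 >= lat2:
        return 0.0
    u1 = np.arcsin(np.sin(lat1) / np.sin(i))
    u2 = np.arcsin(np.sin(lat2) / np.sin(i))
    return (u2 - u1) / np.pi     # residence time peaks near lat = i

print(f"{time_fraction_in_band(51.6, 40.0, 45.0):.4f}")
```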
An Admittance Survey of Large Volcanoes on Venus: Implications for Volcano Growth
NASA Technical Reports Server (NTRS)
Brian, A. W.; Smrekar, S. E.; Stofan, E. R.
2004-01-01
Estimates of the thickness of the venusian crust and elastic lithosphere are important in determining the rheological and thermal properties of Venus. These estimates offer insights into what conditions are needed for certain features, such as large volcanoes and coronae, to form. Lithospheric properties for much of the large volcano population on Venus are not well known. Previous studies of elastic thickness (Te) have concentrated on individual or small groups of edifices, or have used volcano models and fixed values of Te to match with observations of volcano morphologies. In addition, previous studies used different methods to estimate lithospheric parameters, making it difficult to compare their results. Following recent global studies of the admittance signatures exhibited by the venusian corona population, we performed a similar survey of large volcanoes in an effort to determine the range of lithospheric parameters shown by these features. This survey of the entire large volcano population used the same method throughout so that all estimates could be directly compared. By analysing a large number of edifices and comparing our results to observations of their morphology and models of volcano formation, we can help determine the controlling parameters that govern volcano growth on Venus.
Estimating workload using EEG spectral power and ERPs in the n-back task
NASA Astrophysics Data System (ADS)
Brouwer, Anne-Marie; Hogervorst, Maarten A.; van Erp, Jan B. F.; Heffelaar, Tobias; Zimmerman, Patrick H.; Oostenveld, Robert
2012-08-01
Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular the alpha and theta band) and event-related potentials (ERPs) (in particular the P300) can be used as a measure of mental work or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one (n instances) before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features or a combination (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
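A minimal sketch of the fusion idea: concatenate ERP-like and band-power-like features into one vector and train a single classifier. The simulated features and the choice of a linear discriminant are assumptions for illustration; the study's exact pipeline may differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials = 200
y = rng.integers(0, 2, n_trials)  # 0 = low workload, 1 = high workload
erp = rng.normal(size=(n_trials, 8)) + 0.8 * y[:, None]    # P300-like features
power = rng.normal(size=(n_trials, 6)) - 0.6 * y[:, None]  # alpha/theta power

X_fused = np.hstack([erp, power])  # feature-level fusion
acc = cross_val_score(LinearDiscriminantAnalysis(), X_fused, y, cv=5).mean()
print(f"fusion model cross-validated accuracy: {acc:.2f}")
```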
Household water use and conservation models using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.
2013-04-01
The increased availability of water end-use measurement studies allows for more mechanistic and detailed approaches to estimating household water demand and conservation potential. This study uses probability distributions for parameters affecting water use, estimated from end-use studies and randomly sampled in Monte Carlo iterations, to simulate water use in a single-family residential neighborhood. This model represents existing conditions and is calibrated to metered data. A two-stage mixed-integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing-conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
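A minimal sketch of the Monte Carlo step: per-household end-use parameters are sampled from distributions and combined into daily use. Every distribution below is an illustrative placeholder, not a fitted value from the end-use studies.

```python
import numpy as np

rng = np.random.default_rng(11)
n_households = 1_000

occupants = rng.poisson(2.6, n_households) + 1
shower_lpd = rng.lognormal(np.log(60), 0.4, n_households)   # L/person/day
flush_l = rng.normal(9.0, 2.0, n_households)                # L/flush
flushes_ppd = rng.poisson(5, n_households)                  # flushes/person/day
washer_lpl = rng.lognormal(np.log(130), 0.3, n_households)  # L/load
loads_per_week = rng.poisson(4, n_households)

daily_use = occupants * (shower_lpd + flush_l * flushes_ppd) \
    + washer_lpl * loads_per_week / 7.0
print(f"mean indoor use: {daily_use.mean():.0f} L/household/day")
```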
Development of reliable pavement models.
DOT National Transportation Integrated Search
2011-05-01
The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by : the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit : response surface, in plac...
Reconstructing the hidden states in time course data of stochastic models.
Zimmer, Christoph
2015-11-01
Parameter estimation is central for analyzing models in Systems Biology. The relevance of stochastic modeling in the field is increasing, and so is the need for tailored parameter estimation techniques. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the "multiple shooting for stochastic systems" method, developed for inference in intrinsically stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least-squares parameter estimation methods with respect to computational time.
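For context, the conventional least-squares baseline the article compares against can be sketched as follows: simulate a Lotka-Volterra model, observe only a noisy prey series (partial observability), and fit the four rate parameters by minimizing residuals. This is the baseline, not the multiple-shooting extension itself.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def lv(t, x, a, b, c, d):
    prey, pred = x
    return [a * prey - b * prey * pred, c * prey * pred - d * pred]

t_obs = np.linspace(0, 20, 60)
true_p = (1.0, 0.1, 0.05, 0.8)
sol = solve_ivp(lv, (0, 20), [10.0, 5.0], t_eval=t_obs, args=true_p)
prey_obs = sol.y[0] + np.random.default_rng(5).normal(0, 0.5, t_obs.size)

def residuals(p):
    s = solve_ivp(lv, (0, 20), [10.0, 5.0], t_eval=t_obs, args=tuple(p))
    return s.y[0] - prey_obs  # the predator series is treated as unobserved

fit = least_squares(residuals, x0=[0.5, 0.2, 0.1, 0.5], bounds=(0, 5))
print("estimated parameters:", np.round(fit.x, 3))
```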
Ikegami, Tsuyoshi; Ganesh, Gowrishankar
2017-01-01
The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants' ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert's abilities to both produce his own darts actions and estimate the outcome of his own throws (or self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in the darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert's self-estimation is explained only by considering a change in the individual's forward model, showing that an improvement in an expert's ability to predict outcomes of observed actions affects the individual's forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions.
Manning, Andrew H.; Solomon, D. Kip
2005-01-01
The subsurface transfer of water from a mountain block to an adjacent basin (mountain block recharge (MBR)) is a commonly invoked mechanism of recharge to intermountain basins. However, MBR estimates are highly uncertain. We present an approach to characterize bulk fluid circulation in a mountain block and thus MBR that utilizes environmental tracers from the basin aquifer. Noble gas recharge temperatures, groundwater ages, and temperature data combined with heat and fluid flow modeling are used to identify clearly improbable flow regimes in the southeastern Salt Lake Valley, Utah, and adjacent Wasatch Mountains. The range of possible MBR rates is reduced by 70%. Derived MBR rates (5.5-12.6 × 10^4 m^3 d^-1) are on the same order of magnitude as previous large estimates, indicating that significant MBR to intermountain basins is plausible. However, derived rates are 50-100% of the lowest previous estimate, meaning total recharge is probably less than previously thought.
NASA Astrophysics Data System (ADS)
Zhou, Yu; Walker, Richard T.; Elliott, John R.; Parsons, Barry
2016-04-01
Fault dips are usually measured from outcrops in the field or inferred through geodetic or seismological modeling. Here we apply the classic structural geology approach of calculating dip from a fault's 3-D surface trace using recent, high-resolution topography. A test study applied to the 2010 El Mayor-Cucapah earthquake shows very good agreement between our results and those previously determined from field measurements. To obtain a reliable estimate, a fault segment ≥120 m long with a topographic variation ≥15 m is suggested. We then applied this method to the 2013 Balochistan earthquake, obtaining dips similar to previous estimates. Our dip estimates show a switch from north-dipping to south-dipping at the southern end of the main trace, which appears to be a response to local extension within a stepover. We suggest that this previously unidentified geometrical complexity may act as the endpoint of earthquake ruptures at the southern end of the Hoshab fault.
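The underlying calculation can be sketched as a plane fit: take (x, y, z) points along the mapped trace, fit a plane by SVD, and read the dip as the angle between the plane's normal and vertical. The synthetic 60-degree-dipping trace below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic fault trace sampled over topography: a plane dipping 60 degrees.
x = np.linspace(0, 150, 80)                    # metres, along-strike spread
y = rng.uniform(0, 30, x.size)                 # across-strike spread (relief)
z = -np.tan(np.radians(60)) * y + rng.normal(0, 0.5, x.size)
pts = np.column_stack([x, y, z])

centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered)             # last right-singular vector is
normal = vt[-1]                                # the best-fit plane normal
dip = np.degrees(np.arccos(abs(normal[2]) / np.linalg.norm(normal)))
print(f"estimated dip: {dip:.1f} degrees")     # ~60
```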
Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region
NASA Astrophysics Data System (ADS)
Choi, P. H.; Bang, E.; Lee, J.
2016-12-01
Many applications which utilize radio waves (e.g. navigation, communications, and radio sciences) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIM) showing ionospheric Total Electron Content (TEC) has progressed by processing GNSS data. However, the GIMs have limited spatial resolution (e.g. 2.5° in latitude and 5° in longitude), because they are generated using globally distributed and thus relatively sparse GNSS reference station networks. This study presents a near real-time, high-spatial-resolution TEC model over East Asia using ionospheric observables from both International GNSS Service (IGS) and local GNSS networks and an expanded kriging method. New signals from multi-constellation GNSS (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique, but integrates TEC data from previous epochs with those from the current epoch to improve TEC estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived using a first-order Gauss-Markov process that characterizes temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grid points, the method generates confidence bounds on the estimates using the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories to allow users to recognize the quality levels of TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method by obtaining estimation results for both nominal and disturbed ionospheric conditions, and compares these results to those provided by the GIM of the NASA Jet Propulsion Laboratory. In addition, the estimation results based on the expanded kriging method are compared to those from the universal kriging method for both nominal and disturbed ionospheric conditions.
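A minimal sketch of the temporal piece: a scalar Kalman filter at one grid point whose dynamics follow a first-order Gauss-Markov process carries previous epochs' TEC into the current estimate. The correlation time, nominal mean, and noise levels are assumed for illustration.

```python
import numpy as np

tau, dt = 1800.0, 300.0       # correlation time and epoch spacing (s), assumed
phi = np.exp(-dt / tau)       # Gauss-Markov state transition
q = 4.0 * (1 - phi**2)        # process noise; steady-state variance ~4 TECU^2
r, mu = 1.0, 10.0             # measurement noise (TECU^2) and nominal TEC

x, P = mu, 25.0               # initial estimate and variance
rng = np.random.default_rng(2)
for k in range(12):
    x, P = phi * x + (1 - phi) * mu, phi**2 * P + q   # predict (mean-reverting)
    z = 12.0 + rng.normal(0, 1.0)                     # simulated measurement
    K = P / (P + r)
    x, P = x + K * (z - x), (1 - K) * P               # update

print(f"TEC: {x:.2f} TECU, 95% confidence ~ +/-{1.96 * np.sqrt(P):.2f}")
```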
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability among these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression-based methods using model outputs and shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris Agreement.
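The kernel idea can be sketched as a discrete convolution: if temperature is forcing convolved with a response kernel, forcing is recovered by inverting the lower-triangular convolution matrix. The exponential kernel and ramp forcing below are placeholders, and the inversion is shown noise-free; real data would need smoothing or regularization.

```python
import numpy as np

years = np.arange(1880, 2021)
n = years.size
lam, tau = 0.5, 12.0   # kernel amplitude (K per W/m^2) and decay (yr), assumed
k = (lam / tau) * np.exp(-np.arange(n) / tau)   # temperature response kernel

A = np.zeros((n, n))   # lower-triangular convolution operator: T = A @ F
for i in range(n):
    A[i, : i + 1] = k[i::-1]

F_true = 0.02 * (years - years[0])   # placeholder forcing ramp (W/m^2)
T = A @ F_true                       # modeled temperature anomaly (K)

F_est = np.linalg.solve(A, T)        # deconvolution (noise-free illustration)
print(f"recovered 2020 forcing: {F_est[-1]:.2f} W/m^2 (true {F_true[-1]:.2f})")
```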
PoMo: An Allele Frequency-Based Approach for Species Tree Estimation
De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin
2015-01-01
Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2017-07-01
This paper studies the distributed fusion estimation problem for multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
Polar bears in the Beaufort Sea: A 30-year mark-recapture case history
Amstrup, Steven C.; McDonald, T.L.; Stirling, I.
2001-01-01
Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite capturing over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit and suggested a growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
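On a discrete grid, the integrated GP reduces to standard Gaussian conditioning with an integration operator, as in the sketch below: a GP prior on the rate, observations modeled as the cumulative integral plus noise, posterior rate by linear algebra. The hyperparameters are assumed, and the EIV temporal-uncertainty and GIA machinery are omitted.

```python
import numpy as np

t = np.linspace(0, 2000, 200)                  # years
dt = t[1] - t[0]
ell, sig2, noise2 = 300.0, (2e-3)**2, 0.05**2  # GP length scale/variance, noise

K = sig2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)  # rate prior
A = np.tril(np.ones((t.size, t.size))) * dt    # discrete integration operator

rng = np.random.default_rng(8)
r_true = 0.001 + 0.0008 * np.sin(t / 300.0)    # placeholder rate (m/yr)
y = A @ r_true + rng.normal(0, 0.05, t.size)   # sea-level observations (m)

# Posterior mean of the rate: K A^T (A K A^T + noise2 I)^{-1} y
S = A @ K @ A.T + noise2 * np.eye(t.size)
r_post = K @ A.T @ np.linalg.solve(S, y)
print(f"posterior rate at t=2000: {1000 * r_post[-1]:.2f} mm/yr")
```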
Inner Structure in the TW Hya Circumstellar Disk
NASA Astrophysics Data System (ADS)
Akeson, Rachel L.; Millan-Gabet, R.; Ciardi, D.; Boden, A.; Sargent, A.; Monnier, J.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.
2011-05-01
TW Hya is a nearby (50 pc) young stellar object with an estimated age of 10 Myr and signs of active accretion. Previous modeling of the circumstellar disk has shown that the inner disk contains optically thin material, placing this object in the class of "transition disks". We present new near-infrared interferometric observations of the disk material and use these data, as well as previously published, spatially resolved data at 10 microns and 7 mm, to constrain disk models based on a standard flared disk structure. Our model demonstrates that the constraints imposed by the spatially resolved data can be met with a physically plausible disk but this requires a disk containing not only an inner gap in the optically thick disk as previously suggested, but also some optically thick material within this gap. Our model is consistent with the suggestion by previous authors of a planet with an orbital radius of a few AU. This work was conducted at the NASA Exoplanet Science Institute, California Institute of Technology.
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for quantifying different aspects of time-varying human manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low-contrast and low-count-rate situations. Here they propose using boundary side information (obtainable from high-resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization by formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives perfusion estimation accuracy comparable to that obtained using boundary side information.
NASA Astrophysics Data System (ADS)
Peltier, W. R.; Argus, D.; Drummond, R.; Moore, A. W.
2012-12-01
We compare, on a global basis, estimates of site velocity against predictions of the newly constructed postglacial rebound model ICE-6G (VM5a). This model is fit to observations of North American postglacial rebound, thereby demonstrating that the ice sheet at last glacial maximum must have been, relative to ICE-5G, thinner in southern Manitoba, thinner near Yellowknife (Northwest Territories), thicker in eastern and southern Quebec, and thicker along the British Columbia-Alberta border. The GPS-based estimates of site velocity that we employ are more accurate than were previously available because they are based on GPS estimates of position as a function of time determined by incorporating satellite phase center variations [Desai et al. 2011]. These GPS estimates are constraining postglacial rebound in North America and Europe more tightly than ever before. In particular, given the high density of GPS sites in North America, and the fact that the velocity of the mass center (CM) of Earth is also more tightly constrained, the new model much more strongly constrains both the lateral extent of the proglacial forebulge and the rate at which this peripheral bulge (emplaced peripheral to the late Pleistocene Laurentide ice sheet) is presently collapsing. This fact proves to be important to the more accurate inference of the current rate of ice loss from both Greenland and Alaska based upon the time-dependent gravity observations being provided by the GRACE satellite system. In West Antarctica we have also been able to significantly revise the previously prevalent ICE-5G deglaciation history so as to enable its predictions to be optimally consistent with GPS site velocities determined by connecting campaign WAGN measurements to those provided by observations from the permanent ANET sites. Ellsworth Land (south of the Antarctic Peninsula) is observed to be rising at 6 ± 3 mm/yr according to our latest analyses; the Ellsworth Mountains themselves are observed to be rising at 5 ± 4 mm/yr; Palmer Land is observed to be rising at 3 ± 3 mm/yr. The ICE-5G (VM2) model and the postglacial rebound component of the model of Simons, Ivins, and James [2010] had predicted uplift to be significantly faster than observed in this region, as previously documented in Argus et al. [2011]. From a global perspective the new ICE-6G (VM5a) model is also a further significant improvement on the previous ICE-5G (VM2) model in that the degree-two and order-one components of its predicted time dependence of geoid height are tightly constrained by the recent inferences of Roy and Peltier [2011] of the post-GRACE-launch values of the speed and direction of true polar wander and the non-tidal acceleration of the length of day.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
A revised econometric model of the domestic pallet market
Albert T. Schuler; Walter B. Wallin
1983-01-01
The purpose of this revised model is to project estimates of consumption and price of wooden pallets in the short term. This model differs from previous ones developed by Schuler and Wallin (1979 and 1980) in the following respects: The structure of the supply side of the market is more realistically identified (from an economic theory point of view) by including...
Bhat; Bergstrom; Teasley; Bowker; Cordell
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with the previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors arising from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy attenuation (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors, based on theoretical calculations, is small and may not be of clinical or research significance.
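The two-compartment calculation can be written as a one-line interpolation: a pixel's %fat follows from its R value relative to pure-fat and lean reference R values, and hydration shifts the effective lean R. All constants below are assumed for illustration, not calibration values from the study.

```python
def percent_fat(r_pixel, r_fat=1.21, r_lean=1.40):
    # Two-compartment soft-tissue model: linear interpolation in R.
    # r_fat and r_lean are assumed reference values, not the study's constants.
    return 100.0 * (r_lean - r_pixel) / (r_lean - r_fat)

r_pixel = 1.35
adult = percent_fat(r_pixel)
# Higher pediatric hydration shifts the effective lean R slightly; reusing the
# adult constant then biases %fat by well under 1%, consistent with the
# magnitude reported above.
adjusted = percent_fat(r_pixel, r_lean=1.3985)
print(f"adult-calibrated %fat: {adult:.1f}; hydration-adjusted: {adjusted:.1f}")
```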
Optimized retrievals of precipitable water from the VAS 'split window'
NASA Technical Reports Server (NTRS)
Chesters, Dennis; Robinson, Wayne D.; Uccellini, Louis W.
1987-01-01
Precipitable water fields have been retrieved from the VISSR Atmospheric Sounder (VAS) using a radiation transfer model for the differential water vapor absorption between the 11- and 12-micron 'split window' channels. Previous moisture retrievals using only the split window channels provided very good space-time continuity but poor absolute accuracy. This note describes how retrieval errors can be significantly reduced, from ±0.9 to ±0.6 g/cm^2, by empirically optimizing the effective air temperature and absorption coefficients used in the two-channel model. The differential absorption between the VAS 11- and 12-micron channels, empirically estimated from 135 colocated VAS-RAOB observations, is found to be approximately 50 percent smaller than the theoretical estimates. Similar discrepancies have been noted previously between theoretical and empirical absorption coefficients applied to the retrieval of sea surface temperatures using radiances observed by VAS and polar-orbiting satellites. These discrepancies indicate that radiation transfer models for the 11-micron window appear to be less accurate than the satellite observations.
Attenuation in western Nevada: Preliminary results from earthquake and explosion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hough, S.E.; Anderson, J.G.; Patton, H.J.
1989-02-01
We present preliminary results from a study of the attenuation of regional seismic waves at frequencies between 1 and 15 Hz and distances up to 250 km in western Nevada. Following the methods of Anderson and Hough (1984) and Hough et al. (1988), we parameterize the asymptote of the high-frequency acceleration spectrum by the two-parameter model. We relate the model parameters to a two-layer model for Q_i and Q_d, the frequency-independent and frequency-dependent components of the quality factor. We compare our results to previously published Q studies in the Basin and Range and find that our estimate of total Q, Q_t, in the shallow crust is consistent with shear-wave Q at close distances and with previous estimates of coda Q (Singh and Hermann, 1983) and Lg Q (Chavez and Priestley, 1986), suggesting that both coda Q and Lg Q are insensitive to near-surface contributions to attenuation.
White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J
2002-01-01
A mathematical model for the transmission of two interacting classes of mastitis causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, where half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as the infectious agent and consequently the effects of control are intrinsic in the model structure. A structural identifiability analysis of the model is presented demonstrating that the scope of the novel method used allows application to high order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work has only estimated a subset of these either simultaneously or individually. Therefore not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Peter
An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.
Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab
2014-01-01
Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
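The infiltration piece the model calibrates can be sketched with Philip's two-term equation, where saturated hydraulic conductivity is the adjusted parameter. The sorptivity and conductivity values below are illustrative.

```python
import numpy as np

S = 0.02   # sorptivity (m / sqrt(hr)), assumed
K = 0.005  # saturated hydraulic conductivity (m/hr), the calibration target

t = np.linspace(0.01, 24.0, 100)  # hours since the streamflow front arrived
I = S * np.sqrt(t) + K * t        # Philip cumulative infiltration depth (m)
rate = 0.5 * S / np.sqrt(t) + K   # infiltration rate dI/dt; tends to K late
print(f"24-hr cumulative infiltration: {I[-1]:.3f} m")
```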
Subduction in an Eddy-Resolving State Estimate of the Northeast Atlantic Ocean
NASA Technical Reports Server (NTRS)
Gebbie, Geoffrey
2004-01-01
Are eddies an important contributor to subduction in the eastern subtropical gyre? Here, an adjoint model is used to combine a regional, eddy-resolving numerical model with observations to produce a state estimate of the ocean circulation. The estimate is a synthesis of a variety of in-situ observations from the Subduction Experiment, TOPEX/POSEIDON altimetry, and the MIT General Circulation Model. The adjoint method is successful because the Northeast Atlantic Ocean is only weakly nonlinear. The state estimate provides a physically-interpretable, eddy-resolving information source to diagnose subduction. Estimates of eddy subduction for the eastern subtropical gyre of the North Atlantic are larger than previously calculated from parameterizations in coarse-resolution models. Furthermore, eddy subduction rates have typical magnitudes of 15% of the total subduction rate. Eddies contribute as much as 1 Sverdrup to water-mass transformation, and hence subduction, in the North Equatorial Current and the Azores Current. The findings of this thesis imply that the inability to resolve or accurately parameterize eddy subduction in climate models would lead to an accumulation of error in the structure of the main thermocline, even in the relatively quiescent eastern subtropical gyre.
Latest NASA Instrument Cost Model (NICM): Version VI
NASA Technical Reports Server (NTRS)
Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary
2014-01-01
The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost estimation by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) The NICM-E cost estimating relationship, which is applicable to instruments flying on Explorer-like class missions; 2) The new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) New cost estimating relationships for in-situ instruments.
Revised budget for the oceanic uptake of anthropogenic carbon dioxide
Sarmiento, J.L.; Sundquist, E.T.
1992-01-01
Tracer-calibrated models of the total uptake of anthropogenic CO2 by the world's oceans give estimates of about 2 gigatonnes of carbon per year, significantly larger than a recent estimate of 0.3-0.8 Gt C yr-1 for the synoptic air-to-sea CO2 influx. Although both estimates require that the global CO2 budget be balanced by a large unknown terrestrial sink, the latter estimate implies a much larger terrestrial sink, and challenges the ocean model calculations on which previous CO2 budgets were based. The discrepancy is due in part to the net flux of carbon to the ocean by rivers and rain, which must be added to the synoptic air-to-sea CO2 flux to obtain the total oceanic uptake of anthropogenic CO2. Here we estimate the magnitude of this correction and of several other recently proposed adjustments to the synoptic air-sea CO2 exchange. These combined adjustments minimize the apparent inconsistency, and restore estimates of the terrestrial sink to values implied by the modelled oceanic uptake.
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which underpins the quantification of the mismatch between synthetic (modelled) and measured (real) seismic data. This misfit is used to update the parameter model to yield a better fit between the modelled and observed receiver signals. A common approach to the EFWI model update problem is a conjugate gradient search; in this setting, the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions, and, with few exceptions, previous analyses have been based on radiation pattern arguments. We use the adjoint-state technique, extended to compute the action of the Hessian on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean-bottom seismic recordings. Information about model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters, owing to the strong influence of source-receiver geometry and frequency content. We investigate both a homogeneous model and the Gullfaks model, illustrating the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
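In outline, and in generic notation rather than the authors' own, the quantities involved are the least-squares misfit, its adjoint-state gradient, and the Hessian acting on a model perturbation, which can be formed without ever storing the full matrix:

```latex
% Least-squares misfit between modelled data F(m) and observed data d:
\chi(m) = \tfrac{1}{2}\,\lVert F(m) - d \rVert^{2}

% Gradient via the adjoint state, with J = \partial F / \partial m:
\nabla\chi = J^{\dagger}\,\bigl(F(m) - d\bigr)

% Hessian acting on a perturbation \delta m: Gauss-Newton term plus the
% second-order term retained in the full (exact) Hessian:
H\,\delta m = J^{\dagger} J\,\delta m
            + \bigl(\partial_m J\,\delta m\bigr)^{\dagger}\bigl(F(m) - d\bigr)
```

Diagonal blocks of H then indicate parameter resolution, while off-diagonal blocks between parameter classes quantify the cross-coupling discussed above.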
An economic analysis of pregnancy resolution in Virginia: specific as to race and residence.
Liu, G G
1995-01-01
This study analyses an economic model of pregnancy resolution; that is, a model of the choice by a pregnant woman to abort her fetus or carry it to term. This analysis, using an analytical model derived from the household utility framework, adds to previous research by presenting race- and residence-specific estimates of how individual characteristics, history of abortion, and the community-based factors determine women's choices of giving birth vs. aborting. The main data for estimating the model were drawn from the 1984 vital statistics of all induced abortions and live births in the Commonwealth of Virginia. The major findings indicate that low parental education, high maternal age, previous early abortions, and the availability of abortion providers all significantly reduce the probability of choosing the live birth option. Married status and the availability of family planning clinics significantly increase the probability of the live birth option. The findings also suggest that women's choices between abortion and live birth vary substantially with race (White vs. Black) and residential (urban vs. rural) location.
Beatrix Jones; Gary D. Grossman; Daniel C.I. Walsh; Brady A. Porter; John C. Avise; Anthony C. Flumera
2007-01-01
Understanding how variation in reproductive success is related to demography is a critical component in understanding the life history of an organism. Parentage analysis using molecular markers can be used to estimate the reproductive success of different groups of individuals in natural populations. Previous models have been developed for cases where offspring are...
Progress report on daily flow-routing simulation for the Carson River, California and Nevada
Hess, G.W.
1996-01-01
A physically based flow-routing model using Hydrological Simulation Program-FORTRAN (HSPF) was constructed for modeling streamflow in the Carson River at daily time intervals as part of the Truckee-Carson Program of the U.S. Geological Survey (USGS). Daily streamflow data for water years 1978-92 for the mainstem river, tributaries, and irrigation ditches from the East Fork Carson River near Markleeville and West Fork Carson River at Woodfords down to the mainstem Carson River at Fort Churchill upstream from Lahontan Reservoir were obtained from several agencies and were compiled into a comprehensive data base. No previous physically based flow-routing model of the Carson River has incorporated multi-agency streamflow data into a single data base and simulated flow at a daily time interval. Where streamflow data were unavailable or incomplete, hydrologic techniques were used to estimate some flows. For modeling purposes, the Carson River was divided into six segments, which correspond to those used in the Alpine Decree that governs water rights along the river. Hydraulic characteristics were defined for 48 individual stream reaches based on cross-sectional survey data obtained from field surveys and previous studies. Simulation results from the model were compared with available observed and estimated streamflow data. Model testing demonstrated that hydraulic characteristics of the Carson River are adequately represented in the model for a range of flow regimes. Differences between simulated and observed streamflow result mostly from inadequate data characterizing inflow to and outflow from the river. Because irrigation return flows are largely unknown, irrigation return flow percentages were used as a calibration parameter to minimize differences between observed and simulated streamflows. Observed and simulated streamflow were compared for daily periods for the full modeled length of the Carson River and for two major subreaches modeled with more detailed input data. Hydrographs and statistics presented in this report describe these differences. A sensitivity analysis of four estimated components of the hydrologic system evaluated which of these components were significant in the model. Estimated ungaged tributary streamflow is not a significant component of the model during low runoff, but is significant during high runoff. The sensitivity analysis indicates that changes in the estimated irrigation diversion and estimated return flow create noticeable changes in the statistics. The modeling for this study is preliminary; results are constrained by the current availability and accuracy of observed hydrologic data. Several inflows and outflows of the Carson River are not described by time-series data and therefore are not represented in the model.
NASA Technical Reports Server (NTRS)
Padfield, G. D.; Duval, R. K.
1982-01-01
A set of results on rotorcraft system identification is described. Flight measurements collected on an experimental Puma helicopter are reviewed and some notable characteristics highlighted. Following a brief review of previous work in rotorcraft system identification, the results of state estimation and model structure estimation processes applied to the Puma data are presented. The results, which were obtained using NASA-developed software, are compared with theoretical predictions of roll, yaw and pitching moment derivatives for a 6-degree-of-freedom model structure. Anomalies are reported. The theoretical methods used are described. A framework for reduced-order modelling is outlined.
Comparison of MRI-based estimates of articular cartilage contact area in the tibiofemoral joint.
Henderson, Christopher E; Higginson, Jill S; Barrance, Peter J
2011-01-01
Knee osteoarthritis (OA) detrimentally impacts the lives of millions of older Americans through pain and decreased functional ability. Unfortunately, the pathomechanics and associated deviations from joint homeostasis that OA patients experience are not well understood. Alterations in mechanical stress in the knee joint may play an essential role in OA; however, existing literature in this area is limited. The purpose of this study was to evaluate the ability of an existing magnetic resonance imaging (MRI)-based modeling method to estimate articular cartilage contact area in vivo. Imaging data of both knees were collected on a single subject with no history of knee pathology at three knee flexion angles. Intra-observer reliability and sensitivity studies were also performed to determine the role of operator-influenced elements of the data processing on the results. The method's articular cartilage contact area estimates were compared with existing contact area estimates in the literature. The method demonstrated an intra-observer reliability of 0.95 when assessed using Pearson's correlation coefficient and was found to be most sensitive to changes in the cartilage tracings on the peripheries of the compartment. The articular cartilage contact area estimates at full extension were similar to those reported in the literature. The relationships between tibiofemoral articular cartilage contact area and knee flexion were also qualitatively and quantitatively similar to those previously reported. The MRI-based knee modeling method was found to have high intra-observer reliability, sensitivity to peripheral articular cartilage tracings, and agreement with previous investigations when using data from a single healthy adult. Future studies will implement this modeling method to investigate the role that mechanical stress may play in the progression of knee OA through estimation of articular cartilage contact area.
N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models
Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.
2018-01-01
Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids while simultaneously estimating detection probability. Our formulation has the added benefits of accounting for demographic variation and generating probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, the estimated effect of water depth on resource selection is weaker than that reported by previous studies that did not account for detection probability. N-mixture models show great promise for applications to riverine resource selection.
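To make the model class concrete, here is a minimal sketch (not the authors' implementation) of the basic N-mixture likelihood, which marginalizes the latent site abundance N out of repeated counts while estimating detection probability p jointly with mean abundance lambda; the counts and starting values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_nll(params, counts, K=200):
    """Negative log-likelihood of a basic N-mixture model: counts is a
    (sites x visits) array; latent abundance N is summed over 0..K."""
    lam = np.exp(params[0])                  # mean site abundance
    p = 1.0 / (1.0 + np.exp(-params[1]))     # per-visit detection prob.
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)
    ll = 0.0
    for site in counts:
        like = np.ones(K + 1)
        for y in site:
            like *= binom.pmf(y, Ns, p)      # P(y | N, p) for every N
        ll += np.log(prior @ like + 1e-300)
    return -ll

# Invented repeated counts at 3 sites x 2 visits; joint MLE of (lambda, p).
counts = np.array([[3, 2], [0, 1], [5, 4]])
mle = minimize(nmix_nll, x0=[1.0, 0.0], args=(counts,))
print(np.exp(mle.x[0]), 1.0 / (1.0 + np.exp(-mle.x[1])))
```

The paper's formulation additionally links habitat covariates to abundance and surveyor effects to detection.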
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
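The two-stage structure of this approach is easy to sketch. The toy code below uses synthetic data, and RBF interpolation stands in for the kriging step of the study: a regression tree is fit to the physically based predictors, and its residuals are then interpolated spatially.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data: xy survey coordinates, four physically based
# predictors (radiation, elevation, slope, vegetation code), snow depths.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1000.0, size=(200, 2))
X = rng.normal(size=(200, 4))
depth = 2.0 + X @ np.array([0.5, 0.8, -0.3, 0.2]) + rng.normal(0, 0.3, 200)

# Step 1: a regression tree captures the large-scale variation.
tree = DecisionTreeRegressor(max_depth=4).fit(X, depth)
resid = depth - tree.predict(X)

# Step 2: spatially interpolate the residuals; RBF interpolation is used
# here as a simple stand-in for the kriging used in the study.
resid_surface = RBFInterpolator(xy, resid, smoothing=1.0)

# Combined estimate at locations with known predictors and coordinates.
depth_hat = tree.predict(X[:5]) + resid_surface(xy[:5])
print(depth_hat)
```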
Estimating parameters of hidden Markov models based on marked individuals: use of robust design data
Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun
2012-01-01
Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
On the Measurement of Elemental Abundance Ratios in Inner Galaxy H II Regions
NASA Technical Reports Server (NTRS)
Simpson, Janet P.; Rubin, Robert H.; Colgan, Sean W. J.; Erickson, Edwin F.; Haas, Michael R.
2004-01-01
Although abundance gradients in the Milky Way Galaxy certainly exist, details remain uncertain, particularly in the inner Galaxy, where stars and H II regions in the Galactic plane are obscured optically. In this paper we revisit two previously studied, inner Galaxy H II regions: G333.6-0.2 and W43. We observed three new positions in G333.6-0.2 with the Kuiper Airborne Observatory and reobserved the central position with the Infrared Space Observatory's Long Wavelength Spectrometer in far-infrared lines of S++, N++, N+, and O++. We also added the N+ lines at 122 and 205 microns to the suite of lines measured in W43 by Simpson et al. The measured electron densities range from approx. 40 to over 4000 per cu cm in a single H II region, indicating that abundance analyses must consider density variations, since the critical densities of the observed lines range from 40 to 9000 per cu cm. We propose a method to handle density variations and make new estimates of the S/H and N/H abundance ratios. We find that our sulfur abundance estimates for G333.6-0.2 and W43 agree with the S/H abundance ratios expected for the gradient previously reported by Simpson et al., with the S/H values revised to be smaller owing to changes in collisional excitation cross sections. The estimated N/H, S/H, and N/S ratios are the most reliable because of their small corrections for unseen ionization states (< or approx. 10%). The estimated N/S ratios for the two sources are smaller than what would be calculated from the N/H and S/H ratios in our previous paper. If all low-excitation H II regions had similar changes to their N/S ratios as a result of adding measurements of N+ to previous measurements of N++, there would be no or only a very small gradient in N/S. This is interesting because nitrogen is considered to be a secondary element and sulfur a primary element in galactic chemical evolution calculations. We compute models of the two H II regions to estimate corrections for the other unseen ionization states. We find, with large uncertainties, that oxygen does not have a high abundance, with the result that the N/O ratio is as high (approx. 0.35) as previously reported. The reasons for the uncertainty in the ionization corrections for oxygen are both the non-uniqueness of the H II region models and the sensitivity of these models to different input atomic data and stellar atmosphere models. We discuss these predictions and conclude that only a few of the latest models adequately reproduce H II region observations, including the well-known, relatively large observed Ne++/O++ ratios in low- and moderate-excitation H II regions.
Elimination Rates of Dioxin Congeners in Former Chlorophenol Workers from Midland, Michigan
Collins, James J.; Bodner, Kenneth M.; Wilken, Michael; Bodnar, Catherine M.
2012-01-01
Background: Exposure reconstructions and risk assessments for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and other dioxins rely on estimates of elimination rates. Limited data are available on elimination rates for congeners other than TCDD. Objectives: We estimated apparent elimination rates using a simple first-order one-compartment model for selected dioxin congeners based on repeated blood sampling in a previously studied population. Methods: Blood samples collected from 56 former chlorophenol workers in 2004–2005 and again in 2010 were analyzed for dioxin congeners. We calculated the apparent elimination half-life in each individual for each dioxin congener and examined factors potentially influencing elimination rates and the impact of estimated ongoing background exposures on rate estimates. Results: Mean concentrations of all dioxin congeners in the sampled participants declined between sampling times. Median apparent half-lives of elimination based on changes in estimated mass in the body were generally consistent with previous estimates and ranged from 6.8 years (1,2,3,7,8,9-hexachlorodibenzo-p-dioxin) to 11.6 years (pentachlorodibenzo-p-dioxin), with a composite half-life of 9.3 years for TCDD toxic equivalents. None of the factors examined, including age, smoking status, body mass index or change in body mass index, initial measured concentration, or chloracne diagnosis, was consistently associated with the estimated elimination rates in this population. Inclusion of plausible estimates of ongoing background exposures decreased apparent half-lives by approximately 10%. Available concentration-dependent toxicokinetic models for TCDD underpredicted observed elimination rates for concentrations < 100 ppt. Conclusions: The estimated elimination rates from this relatively large serial sampling study can inform occupational and environmental exposure and serum evaluations for dioxin compounds. PMID:23063871
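Under the stated first-order, one-compartment model, the apparent half-life computation reduces to a log-ratio of the two serum measurements. A minimal sketch with illustrative values (not the study's data):

```python
import numpy as np

def apparent_half_life(c1, c2, dt_years):
    """Apparent elimination half-life under a first-order,
    one-compartment model, from two concentrations dt_years apart."""
    k = np.log(c1 / c2) / dt_years   # first-order elimination rate (1/yr)
    return np.log(2.0) / k

# Illustrative values: 60 ppt falling to 40 ppt over 5.5 years.
print(apparent_half_life(60.0, 40.0, 5.5))   # ~9.4 years
```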
James, Eric P.; Benjamin, Stanley G.; Marquis, Melinda
2016-10-28
A new gridded dataset for wind and solar resource estimation over the contiguous United States has been derived from hourly updated 1-h forecasts from the National Oceanic and Atmospheric Administration High-Resolution Rapid Refresh (HRRR) 3-km model composited over a three-year period (approximately 22,000 forecast model runs). The unique dataset features hourly data assimilation, and provides physically consistent wind and solar estimates for the renewable energy industry. The wind resource dataset shows strong similarity to that previously provided by a Department of Energy-funded study, and it includes estimates in southern Canada and northern Mexico. The solar resource dataset represents an initial step towards application-specific fields such as global horizontal and direct normal irradiance. This combined dataset will continue to be augmented with new forecast data from the advanced HRRR atmospheric/land-surface model.
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
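At its core, a PEST-style calibration loop is weighted nonlinear least squares between model output and observations. The toy stand-in below (a hypothetical two-parameter seasonal model, not NEMURO itself) shows the identical-twin idea of recovering known parameters from model-generated data:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical two-parameter seasonal model standing in for NEMURO.
def model(theta, t):
    growth, background = theta
    return growth * np.sin(np.pi * t / 12.0) ** 2 + background

t = np.arange(12.0)                 # monthly climatology
obs = model([1.2, 0.3], t) + np.random.default_rng(2).normal(0, 0.05, 12)

# Identical-twin calibration: start from deliberately wrong values and
# check that the known parameters are recovered from the synthetic data.
fit = least_squares(lambda th: model(th, t) - obs, x0=[0.5, 0.1])
print(fit.x)                        # approximately [1.2, 0.3]
```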
Angelaki, Dora E
2017-01-01
Brainstem and cerebellar neurons implement an internal model to accurately estimate self-motion during externally generated (‘passive’) movements. However, these neurons show reduced responses during self-generated (‘active’) movements, indicating that predicted sensory consequences of motor commands cancel sensory signals. Remarkably, the computational processes underlying sensory prediction during active motion and their relationship to internal model computations during passive movements remain unknown. We construct a Kalman filter that incorporates motor commands into a previously established model of optimal passive self-motion estimation. The simulated sensory error and feedback signals match experimentally measured neuronal responses during active and passive head and trunk rotations and translations. We conclude that a single sensory internal model can combine motor commands with vestibular and proprioceptive signals optimally. Thus, although neurons carrying sensory prediction error or feedback signals show attenuated modulation, the sensory cues and internal model are both engaged and critically important for accurate self-motion estimation during active head movements. PMID:29043978
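The core computation, folding an efference copy of the motor command into the prediction step of an otherwise sensory Kalman filter, can be sketched in one dimension; this is an illustration of the idea only, not the paper's full multisensory model, and all gains and noise values are invented.

```python
import numpy as np

A, B, C = 1.0, 1.0, 1.0   # state dynamics, control gain, sensory mapping
Q, R = 0.01, 0.1          # process and sensor noise variances

def kalman_step(x_hat, P, u, y):
    # Predict: the motor command u shifts the prior (predicted reafference).
    x_pred = A * x_hat + B * u
    P_pred = A * P * A + Q
    # Update: the sensory prediction error (innovation) drives correction.
    K = P_pred * C / (C * P_pred * C + R)
    innovation = y - C * x_pred   # vestibular/proprioceptive error signal
    return x_pred + K * innovation, (1.0 - K * C) * P_pred

# One active-movement time step: command u = 0.5, noisy sensory reading y.
x_hat, P = kalman_step(0.0, 1.0, u=0.5, y=0.45)
print(x_hat, P)
```

During passive motion u = 0 and the same filter reduces to the previously established optimal sensory estimator, which is the unification the study argues for.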
Lucia, Alejandro; Juan, Laura W; Zerba, Eduardo N; Harrand, Leonel; Marcó, Martín; Masuh, Hector M
2012-05-01
The aim of this work is to validate pre-existing models that relate the larvicidal and adulticidal activities of Eucalyptus essential oils against Aedes aegypti. Previous work at our laboratory showed that the larvicidal activity of Eucalyptus essential oils can be estimated from the relative concentration of two main components (p-cymene and 1,8-cineole) and that the adulticidal effectiveness can be explained, to a great extent, by the presence of large amounts of 1,8-cineole. In general, the results show that the higher the adulticidal effect of an essential oil, the lower its larvicidal activity. Fresh leaves were harvested and distilled. Once the essential oil was obtained, its chemical composition was analysed, and the biological activity of 15 species of the genus Eucalyptus was evaluated (Eucalyptus badjensis Beuzev and Welch, Eucalyptus badjensis × nitens, Eucalyptus benthamii var benthamii Maiden and Cambage, Eucalyptus benthamii var dorrigoensis Maiden and Cambage, Eucalyptus botryoides Smith, Eucalyptus dalrympleana Maiden, Eucalyptus fastigata Deane and Maiden, Eucalyptus nobilis L.A.S. Johnson and K.D. Hill, Eucalyptus polybractea R. Baker, Eucalyptus radiata ssp radiata Sieber ex Spreng, Eucalyptus resinifera Smith, Eucalyptus robertsonii Blakely, Eucalyptus robusta Smith, Eucalyptus rubida Deane and Maiden, Eucalyptus smithii R. Baker). Essential oils of these plant species were used to validate the equations of the pre-existing models, comparing observed and estimated values of the biological activity. The regression analysis showed a strong validation of the models, confirming the trends previously observed. The models were expressed as follows: A, fumigant activity: KT50 (min) = 10.65 - 0.076 × 1,8-cineole (%) (p < 0.01; F = 397; R² = 0.79); B, larval mortality (%) at 40 ppm = 103.85 + 0.482 × p-cymene (%) - 0.363 × α-pinene (%) - 1.07 × 1,8-cineole (%) (p < 0.01; F = 300; R² = 0.90). These results confirm the importance of the major components in the biological activity of Eucalyptus essential oils against A. aegypti. It is worth mentioning, however, that two or three species deviate from the values estimated by the models, and these deviations coincide with the presence of minor differential components in their essential oils. Overall, the models are able to estimate very closely the biological activity of Eucalyptus essential oils against A. aegypti.
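Since both validated equations are linear in the measured component percentages, they can be applied directly; the snippet below transcribes models A and B as given above (the example composition is invented for illustration).

```python
def fumigant_kt50(cineole_pct):
    """Model A above: adult knock-down time KT50 (min) vs % 1,8-cineole."""
    return 10.65 - 0.076 * cineole_pct

def larval_mortality_40ppm(p_cymene_pct, a_pinene_pct, cineole_pct):
    """Model B above: % larval mortality at 40 ppm."""
    return (103.85 + 0.482 * p_cymene_pct
            - 0.363 * a_pinene_pct - 1.07 * cineole_pct)

# Invented composition: 70% 1,8-cineole, 5% p-cymene, 10% alpha-pinene.
print(fumigant_kt50(70.0))                       # ~5.3 min: fast knock-down
print(larval_mortality_40ppm(5.0, 10.0, 70.0))   # ~27.7%: weak larvicide
```

The example reproduces the inverse relationship noted above: a cineole-rich oil knocks down adults quickly but shows low larvicidal activity.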
Tyne, Julian A.; Pollock, Kenneth H.; Johnston, David W.; Bejder, Lars
2014-01-01
Reliable population estimates are critical to implement effective management strategies. The Hawai’i Island spinner dolphin (Stenella longirostris) is a genetically distinct stock that displays a rigid daily behavioural pattern, foraging offshore at night and resting in sheltered bays during the day. Consequently, they are exposed to frequent human interactions and disturbance. We estimated population parameters of this spinner dolphin stock using a systematic sampling design and capture–recapture models. From September 2010 to August 2011, boat-based photo-identification surveys were undertaken monthly over 132 days (>1,150 hours of effort; >100,000 dorsal fin images) in the four main resting bays along the Kona Coast, Hawai’i Island. All images were graded according to photographic quality and distinctiveness. Over 32,000 images were included in the analyses, from which 607 distinctive individuals were catalogued and 214 were highly distinctive. Two independent estimates of the proportion of highly distinctive individuals in the population were not significantly different (p = 0.68). Individual heterogeneity and time variation in capture probabilities were strongly indicated for these data; therefore capture–recapture models allowing for these variations were used. The estimated annual apparent survival rate (the product of true survival and permanent emigration) was 0.97 (SE = 0.05). Open and closed capture–recapture models for the highly distinctive individuals photographed at least once each month produced similar abundance estimates. An estimate of 221 (SE = 4.3) highly distinctive spinner dolphins resulted in a total abundance of 631 (SE = 60.1; 95% CI 524–761) spinner dolphins in the Hawai’i Island stock, which is lower than previous estimates. When this abundance estimate is considered alongside the rigid daily behavioural pattern, genetic distinctiveness, and the ease of human access to spinner dolphins in their preferred resting habitats, this Hawai’i Island stock is likely more vulnerable to negative impacts from human disturbance than previously believed. PMID:24465917
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves, FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature-based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may occur, for example, when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right) or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount) and, if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature-based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has, for example, shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature-based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature-based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically, and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate to represent the information content of the hydrograph.
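A minimal rejection-ABC sketch of the idea follows: draw parameters from the prior, simulate, and keep the draws whose simulated signature (here the FDC) lies closest to the observed one. The one-parameter recession model is hypothetical, standing in for the hydrological model, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_flow(k, n=365):
    # Hypothetical one-parameter recession model; k sets recession speed.
    return rng.lognormal(0.0, 0.3, n) * np.exp(-k * np.linspace(0, 5, n))

def fdc(q):
    # Empirical flow-duration curve: flows sorted largest to smallest.
    return np.sort(q)[::-1]

s_obs = fdc(simulate_flow(0.8))            # "observed" signature

draws = []
for _ in range(2000):
    k = rng.uniform(0.1, 2.0)              # draw from the prior
    dist = np.sqrt(np.mean((fdc(simulate_flow(k)) - s_obs) ** 2))
    draws.append((dist, k))

draws.sort()                               # keep the closest 2% of draws
posterior = [k for _, k in draws[:40]]
print(np.mean(posterior), np.std(posterior))
```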
Estimation of Fat-free Mass at Discharge in Preterm Infants Fed With Optimized Feeding Regimen.
Larcade, Julie; Pradat, Pierre; Buffin, Rachel; Leick-Courtois, Charline; Jourdes, Emilie; Picaud, Jean-Charles
2017-01-01
The purpose of the present study was to validate a previously calculated equation (E1) that estimates infant fat-free mass (FFM) at discharge using data from a population of preterm infants receiving an optimized feeding regimen. Preterm infants born before 33 weeks of gestation between April 2014 and November 2015 in the tertiary care unit of Croix-Rousse Hospital in Lyon, France, were included in the study. At discharge, FFM was assessed by air displacement plethysmography (PEA POD) and was compared with FFM estimated by E1. FFM was estimated using a multiple linear regression model. Data on 155 preterm infants were collected. There was a strong correlation between the FFM estimated by E1 and FFM assessed by the PEA POD (r = 0.939). E1, however, underestimated the FFM (average difference: -197 g), and this underestimation increased as FFM increased. A new, more predictive equation is proposed (r = 0.950, average difference: -12 g). Although previous estimation methods were useful for estimating FFM at discharge, an equation adapted to present populations of preterm infants with "modern" neonatal care and nutritional practices is required for accuracy.
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
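As a pointer to the mechanics, the simplest closed-population capture-recapture estimator (Chapman's bias-corrected Lincoln-Petersen; the study itself fits richer closed models) is a one-liner. The counts below are invented for illustration.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator: n1 animals
    identified in the first sample, n2 in the second, m2 seen in both."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Invented camera-trap counts, not the study's data:
print(chapman_estimate(14, 12, 9))   # ~18.5 tigers in the sampled area
```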
Ganesh, Gowrishankar
2017-01-01
The question of how humans predict outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants’ ability to estimate their own actions. We recruited darts experts because sports experts are known to have an accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert’s abilities to both produce his own darts actions and estimate the outcome of his own throws (or self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in the darts performance and self-estimation through our experiment. The model-based analysis reveals that the change in an expert’s self-estimation is explained only by considering a change in the individual’s forward model, showing that an improvement in an expert’s ability to predict outcomes of observed actions affects the individual’s forward model. These results suggest that parts of the same forward model are utilized in humans to both estimate outcomes of self-generated actions and predict outcomes of observed actions. PMID:29340300
Modeling, implementation, and validation of arterial travel time reliability.
DOT National Transportation Integrated Search
2013-11-01
Previous research funded by Florida Department of Transportation (FDOT) developed a method for estimating : travel time reliability for arterials. This method was not initially implemented or validated using field data. This : project evaluated and r...
Evaluation of Fast-Time Wake Models Using Denver 2006 Field Experiment Data
NASA Technical Reports Server (NTRS)
Ahmad, Nash’at N.; Pruis, Matthew J.
2015-01-01
The National Aeronautics and Space Administration conducted a series of wake vortex field experiments at Denver in 2003, 2005, and 2006. This paper describes the lidar wake vortex measurements and associated meteorological data collected during the 2006 deployment, and includes results of recent reprocessing of the lidar data using a new wake vortex algorithm and estimates of the atmospheric turbulence using a new algorithm to estimate eddy dissipation rate from the lidar data. The configuration and set-up of the 2006 field experiment allowed out-of-ground effect vortices to be tracked in lateral transport further than any previous campaign and thereby provides an opportunity to study long-lived wake vortices in moderate to low crosswinds. An evaluation of NASA's fast-time wake vortex transport and decay models using the dataset shows similar performance as previous studies using other field data.
Working times of elastomeric impression materials determined by dimensional accuracy.
Tan, E; Chai, J; Wozniak, W T
1996-01-01
The working times of five poly(vinyl siloxane) impression materials were estimated by evaluating the dimensional accuracy of stone dies of impressions of a standard model made at successive time intervals. The stainless steel standard model was represented by two abutments having known distances between landmarks in three dimensions. Three dimensions in the x-, y-, and z-axes of the stone dies were measured with a traveling microscope. A time interval was rejected as being within the working time if the percentage change of the resultant dies, in any dimension, was statistically different from those measured from stone dies from previous time intervals. The absolute dimensions of those dies from the rejected time interval also must have exceeded all those from previous time intervals. Results showed that the working times estimated with this method generally were about 30 seconds longer than those recommended by the manufacturers.
Worklife After Traumatic Spinal Cord Injury
Pflaum, Christopher; McCollister, George; Strauss, David J; Shavelle, Robert M; DeVivo, Michael J
2006-01-01
Objective: To develop predictive models to estimate worklife expectancy after spinal cord injury (SCI). Design: Inception cohort study. Setting: Model SCI Care Systems throughout the United States. Participants: 20,143 persons enrolled in the National Spinal Cord Injury Statistical Center database since 1973. Intervention: Not applicable. Main Outcome Measure: Postinjury employment rates and worklife expectancy. Results: Using logistic regression, we found a greater likelihood of being employed in any given year to be significantly associated with younger age, white race, higher education level, being married, having a nonviolent cause of injury, paraplegia, ASIA D injury, longer time postinjury, being employed at injury and during the previous postinjury year, higher general population employment rate, lower level of Social Security Disability Insurance benefits, and calendar years after the passage of the Americans with Disabilities Act. Conclusions: The likelihood of postinjury employment varies substantially among persons with SCI. Given favorable patient characteristics, worklife should be considerably higher than previous estimates. PMID:17044388
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids loading into the Great Salt Lake (GSL) for water year 2013 with estimates of previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which varies greatly from previous regression estimates for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
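In spatial capture-recapture models of this kind, the encounter rate of individual i at trap j is typically a decreasing function of the distance between the trap and the individual's activity center, with the behavioral (trap) response entering as a shift in the baseline. In generic notation (not necessarily the authors' exact parameterization):

```latex
% Half-normal encounter model with a trap-specific behavioral response:
\lambda_{ij} = \lambda_0 \, e^{\alpha\, C_{ij}}
  \exp\!\left(-\frac{\lVert x_j - s_i \rVert^2}{2\sigma^2}\right)
% C_{ij} = 1 if individual i was previously captured at trap j (else 0);
% \alpha > 0 encodes the positive trap response reported above, and s_i
% is the latent activity center whose distribution defines density.
```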
Fournier, Auriel M. V.; Sullivan, Alexis R.; Bump, Joseph K.; Perkins, Marie; Shieldcastle, Mark C.; King, Sammy L.
2017-01-01
Stable hydrogen isotope (δD) methods for tracking animal movement are widely used yet often produce low-resolution assignments. Incorporating prior knowledge of abundance, distribution or movement patterns can ameliorate this limitation, but data are lacking for most species. We demonstrate how observations reported by citizen scientists can be used to develop robust estimates of species distributions and to constrain δD assignments. We developed a Bayesian framework to refine isotopic estimates of migrant animal origins conditional on species distribution models constructed from citizen scientist observations. To illustrate this approach, we analysed the migratory connectivity of the Virginia rail Rallus limicola, a secretive and declining migratory game bird in North America. Citizen science observations enabled both estimation of sampling bias and construction of bias-corrected species distribution models. Conditioning δD assignments on these species distribution models yielded comparably high-resolution assignments. Most Virginia rails wintering across five Gulf Coast sites spent the previous summer near the Great Lakes, although a considerable minority originated from the Chesapeake Bay watershed or Prairie Pothole region of North Dakota. Conversely, the majority of migrating Virginia rails from a site in the Great Lakes most likely spent the previous winter on the Gulf Coast between Texas and Louisiana. Synthesis and applications. In this analysis, Virginia rail migratory connectivity does not fully correspond to the administrative flyways used to manage migratory birds. This example demonstrates that with the increasing availability of citizen science data to create species distribution models, our framework can produce high-resolution estimates of migratory connectivity for many animals, including cryptic species. Empirical evidence of links between seasonal habitats will help enable effective habitat management, hunting quotas and population monitoring and also highlight critical knowledge gaps.
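The conditioning step has a simple Bayesian form: the posterior probability that a sampled bird originated from location b combines the isotopic likelihood with a prior proportional to the species distribution model surface. In generic notation (not the authors'), with f commonly a normal density for feather δD given the location's expected value μ_b, and π_SDM(b) the relative abundance predicted from the citizen-science SDM:

```latex
p(b \mid \delta D) \;=\;
\frac{f\!\left(\delta D \mid \mu_b, \sigma_b\right)\, \pi_{\mathrm{SDM}}(b)}
     {\sum_{b'} f\!\left(\delta D \mid \mu_{b'}, \sigma_{b'}\right)\,
      \pi_{\mathrm{SDM}}(b')}
```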
In-vivo detectability index: development and validation of an automated methodology
NASA Astrophysics Data System (ADS)
Smith, Taylor Brunton; Solomon, Justin; Samei, Ehsan
2017-03-01
The purpose of this study was to develop and validate a method to estimate patient-specific detectability indices directly from patients' CT images (i.e., "in vivo"). The method works by automatically extracting noise (NPS) and resolution (MTF) properties from each patient's CT series based on previously validated techniques. Patient images are thresholded at skin-air interfaces to form edge-spread functions, which are further binned, differentiated, and Fourier transformed to form the MTF. The NPS is likewise estimated from uniform areas of the image. These are combined with assumed task functions (reference function: 10 mm disk lesion with contrast of -15 HU) to compute detectability indices for a non-prewhitening matched-filter model observer, which predicts observer performance. The results were compared to those from a previous human detection study on 105 subtle, hypo-attenuating liver lesions, using a two-alternative forced-choice (2AFC) method, over 6 dose levels using 16 readers. The in vivo detectability indices estimated for all patient images were compared to binary 2AFC outcomes with a generalized linear mixed-effects statistical model (probit link function, linear terms only, no interactions, random term for readers). The model showed that the in vivo detectability indices were strongly predictive of 2AFC outcomes (P < 0.05). A linear comparison between the human detection accuracy and model-predicted detection accuracy (for like conditions) resulted in Pearson and Spearman correlation coefficients of 0.86 and 0.87, respectively. These data provide evidence that the in vivo detectability index could potentially be used to automatically estimate and track image quality in a clinical operation.
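For reference, the non-prewhitening matched-filter detectability index combines the extracted MTF and NPS with the assumed task function W (here the 10 mm, -15 HU disk) in the standard way:

```latex
d'^{\,2}_{\mathrm{NPW}} \;=\;
\frac{\left[\iint \lvert W(u,v)\rvert^{2}\,\mathrm{MTF}^{2}(u,v)\,du\,dv\right]^{2}}
     {\iint \lvert W(u,v)\rvert^{2}\,\mathrm{MTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv}
```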
Estimation of maximal oxygen uptake by bioelectrical impedance analysis.
Stahn, Alexander; Terblanche, Elmarie; Grunert, Sven; Strobel, Günther
2006-02-01
Previous non-exercise models for the prediction of maximal oxygen uptake (VO2max) have failed to accurately discriminate cardiorespiratory fitness within large cohorts. The aim of the present study was to evaluate the feasibility of a completely indirect method for predicting VO2max, based on bioelectrical impedance analysis (BIA), in 66 young, healthy, fit men and women. Multiple stepwise regression analysis was used to determine the usefulness of BIA and additional covariates to estimate VO2max (ml min-1). BIA was highly correlated with VO2max (r = 0.914; P < 0.001) and entered the regression equation first. The inclusion of gender and a physical activity rating further improved the model, which accounted for 88% of the variance in VO2max and resulted in a relative standard error of the estimate (SEE) of 7.2%. Substantial agreement between the methods was confirmed by the fact that nearly all the differences were within +/-2 SD. Furthermore, in contrast to previously published non-exercise models, no trend of a reduction in prediction accuracy with increasing VO2max values was apparent. It was concluded that a non-exercise model based on BIA might be a rapid and useful technique to estimate VO2max when a direct test does not seem feasible. However, though the present results are useful to determine the viability of the method, further refinement of the BIA approach and its validation in a large, diverse population is needed before it can be applied to clinical and epidemiological settings.
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
NASA Astrophysics Data System (ADS)
Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.
2014-12-01
Understanding physical parameters such as frictional coefficients, velocity change, and dynamic history is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, which is one of the best-recorded large slope failures. Based on the previous results of waveform inversions and on precise topographic surveys done before and after the event, we performed numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin-layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., the friction is independent of the sliding velocity. We varied the friction coefficient in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. Comparing the force histories of the east-west components after band-pass filtering between 10 and 100 seconds, the simulation with friction coefficient 0.27 agrees best with the result of the seismic waveform inversion. Although the amplitudes differ slightly, the phases are coherent for the three main pulses. This is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during sliding was estimated to be 0.38 based on the seismic waveform inversion performed in the previous study and on the sliding-block model (Yamada et al., 2013), whereas the friction coefficient estimated from the numerical simulation was about 0.27. This discrepancy may be due to limitations of the digital elevation model, or to other forces included in the model, such as pressure gradients and centrifugal acceleration. However, quantitative interpretation of this difference requires further investigation.
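For context, in depth-averaged thin-layer models of this type the constant-coefficient Coulomb friction law takes the standard form

\[ \boldsymbol{\tau}_b = -\mu\,\rho g h \cos\theta\,\frac{\mathbf{u}}{\lVert\mathbf{u}\rVert}, \qquad \mu = \mathrm{const}, \]

so the basal resistance scales with the normal load (flow depth h on local slope θ) but not with the sliding speed; the simulations above vary only μ (0.27 here, versus the 0.38 inferred from the sliding-block model).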
Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.
Giedt, Joel; Thomas, Anthony W; Young, Ross D
2009-11-13
Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.
Inverse Analysis of Irradiated Nuclear Material Gamma Spectra via Nonlinear Optimization
NASA Astrophysics Data System (ADS)
Dean, Garrett James
Nuclear forensics is the collection of technical methods used to identify the provenance of nuclear material interdicted outside of regulatory control. Techniques employed in nuclear forensics include optical microscopy, gas chromatography, mass spectrometry, and alpha, beta, and gamma spectrometry. This dissertation focuses on the application of inverse analysis to gamma spectroscopy to estimate the history of pulse-irradiated nuclear material. Previous work in this area has (1) utilized destructive analysis techniques to supplement the nondestructive gamma measurements, and (2) been applied to samples composed of spent nuclear fuel with long irradiation and cooling times. Previous analyses have employed local nonlinear solvers and simple empirical and detector models of gamma spectral features. The algorithm described in this dissertation uses a forward model of the irradiation and measurement process within a global nonlinear optimizer to estimate the unknown irradiation history of pulse-irradiated nuclear material. The forward model includes a detector response function for photopeaks only. The algorithm uses a novel hybrid global and local search algorithm to quickly estimate the irradiation parameters, including neutron fluence, cooling time, and original composition. Sequential, time-correlated series of measurements are used to reduce the uncertainty in the estimated irradiation parameters. This algorithm allows for in situ measurements of interdicted irradiated material. The increase in analysis speed comes with a decrease in the information that can be determined, but the sample fluence, cooling time, and composition can be determined within minutes of a measurement. Furthermore, pulse-irradiated nuclear material has a characteristic feature: irradiation time and flux cannot be estimated independently. The algorithm has been tested against pulse-irradiated samples of pure special nuclear material with cooling times of four minutes to seven hours. The algorithm described is capable of determining the cooling time and the fluence the sample was exposed to within 10%, as well as roughly estimating the relative concentrations of nuclides present in the original composition.
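The dissertation's forward model and detector response are not reproduced here, but the hybrid global-then-local search strategy can be sketched with a toy two-nuclide decay model; the half-lives, bounds, and noise level below are illustrative assumptions, not values from the work:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy forward model: summed photopeak count rates from two decaying
# nuclides. Parameters: scale factors a1, a2 and a cooling time tc (h).
HALF_LIVES = np.array([0.5, 6.0])          # hours (illustrative only)

def forward(params, t_meas):
    a = np.asarray(params[:2])
    tc = params[2]
    lam = np.log(2.0) / HALF_LIVES
    return np.array([(a * np.exp(-lam * (tc + t))).sum() for t in t_meas])

# Simulated time-correlated series of measurements; as in the dissertation,
# several sequential spectra reduce the parameter uncertainty.
t_meas = np.array([0.0, 1.0, 2.0, 4.0])
true = (3000.0, 800.0, 1.5)
noise = 1 + 0.01 * np.random.default_rng(1).normal(size=t_meas.size)
data = forward(true, t_meas) * noise

def misfit(p):
    return np.sum((forward(p, t_meas) - data) ** 2)

# Stage 1: global search over physical bounds.
bounds = [(10, 1e5), (10, 1e5), (0.0, 24.0)]
coarse = differential_evolution(misfit, bounds, seed=2, tol=1e-8)

# Stage 2: local polish starting from the global optimum.
fine = minimize(misfit, coarse.x, method="Nelder-Mead")
print("estimated (a1, a2, cooling time):", np.round(fine.x, 2))
```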
NASA Technical Reports Server (NTRS)
Smrekar, Suzanne E.; Kiefer, Walter S.; Stofan, Ellen R.
1997-01-01
Large volcanic rises on Venus have been interpreted as hotspots, or the surface manifestation of mantle upwelling, on the basis of their broad topographic rises, abundant volcanism, and large positive gravity anomalies. Hotspots offer an important opportunity to study the behavior of the lithosphere in response to mantle forces. In addition to the four previously known hotspots, Atla, Bell, Beta, and western Eistla Regiones, five new probable hotspots, Dione, central Eistla, eastern Eistla, Imdr, and Themis, have been identified in the Magellan radar, gravity, and topography data. These nine regions exhibit a wider range of volcano-tectonic characteristics than previously recognized for venusian hotspots, and have been classified as rift-dominated (Atla, Beta), coronae-dominated (central and eastern Eistla, Themis), or volcano-dominated (Bell, Dione, western Eistla, Imdr). The apparent depths of compensation for these regions range from 65 to 260 km. New estimates of the elastic thickness, using the degree and order 90 spherical harmonic field, are 15-40 km at Bell Regio and 25 km at western Eistla Regio. Phillips et al. find a value of 30 km at Atla Regio. Numerous models of lithospheric and mantle behavior have been proposed to interpret the gravity and topography signatures of the hotspots, with most studies focusing on Atla or Beta Regiones. Convective models with Earth-like parameters result in estimates of the thickness of the thermal lithosphere of approximately 100 km. Models of stagnant-lid convection or thermal thinning infer the thickness of the thermal lithosphere to be 300 km or more. Without additional constraints, any of the model fits are equally valid. The thinner thermal lithosphere estimates are most consistent with the volcanic and tectonic characteristics of the hotspots. Estimates of the thermal gradient based on estimates of the elastic thickness also support a relatively thin lithosphere (Phillips et al.). The advantage of larger estimates of the thermal lithospheric thickness is that they provide an explanation for the apparently modest levels of geologic activity on Venus over the last half billion years.
NASA Astrophysics Data System (ADS)
Potter, Christopher; Brooks Genovese, Vanessa; Klooster, Steven; Bobo, Matthew; Torregrosa, Alicia
To produce a new daily record of gross carbon emissions from biomass burning events and post-burning decomposition fluxes in the states of the Brazilian Legal Amazon (Instituto Brasileiro de Geografia e Estatistica (IBGE), 1991. Anuario Estatistico do Brasil, Vol. 51. Rio de Janeiro, Brazil, pp. 1-1024), we have used vegetation greenness estimates from satellite images as inputs to a terrestrial ecosystem production model. This carbon allocation model generates new estimates of regional aboveground vegetation biomass at 8-km resolution. The modeled biomass product is then combined for the first time with fire pixel counts from the advanced very high-resolution radiometer (AVHRR) to overlay regional burning activities in the Amazon. Results from our analysis indicate that carbon emission estimates from annual region-wide sources of deforestation and biomass burning in the early 1990s are apparently three to five times higher than reported in previous studies for the Brazilian Legal Amazon (Houghton et al., 2000. Nature 403, 301-304; Fearnside, 1997. Climatic Change 35, 321-360), i.e., studies which implied that the Legal Amazon region tends toward a net-zero annual source of terrestrial carbon. In contrast, our analysis implies that the total source fluxes over the entire Legal Amazon region range from 0.2 to 1.2 Pg C yr-1, depending strongly on annual rainfall patterns. The reasons for our higher burning emission estimates are (1) use of combustion fractions typically measured during Amazon forest burning events for computing carbon losses, (2) more detailed geographic distribution of vegetation biomass and daily fire activity for the region, and (3) inclusion of fire effects in extensive areas of the Legal Amazon covered by open woodland, secondary forests, savanna, and pasture vegetation. The total area of rainforest estimated annually to be deforested did not differ substantially among the previous analyses cited and our own.
NASA Astrophysics Data System (ADS)
von Paris, P.; Gratier, P.; Bordé, P.; Selsis, F.
2016-03-01
Context. Basic atmospheric properties, such as albedo and heat redistribution between day- and nightsides, have been inferred for a number of planets using observations of secondary eclipses and thermal phase curves. Optical phase curves have not yet been used to constrain these atmospheric properties consistently. Aims: We model previously published phase curves of CoRoT-1b, TrES-2b, and HAT-P-7b, and infer albedos and recirculation efficiencies. These are then compared to previous estimates based on secondary eclipse data. Methods: We use a physically consistent model to construct optical phase curves. This model takes Lambertian reflection, thermal emission, ellipsoidal variations, and Doppler boosting into account. Results: CoRoT-1b shows a non-negligible scattering albedo (0.11 < AS < 0.3 at 95% confidence) as well as small day-night temperature contrasts, which are indicative of moderate to high redistribution of energy between dayside and nightside. These values are contrary to previous secondary eclipse and phase curve analyses. In the case of HAT-P-7b, model results suggest a relatively high scattering albedo (AS ≈ 0.3). This confirms previous phase curve analysis; however, it is in slight contradiction with values inferred from secondary eclipse data. For TrES-2b, both approaches yield very similar estimates of albedo and heat recirculation. Discrepancies between recirculation and albedo values as inferred from secondary eclipse and optical phase curve analyses might be interpreted as a hint that optical and IR observations probe different atmospheric layers, and hence temperatures.
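A minimal sketch of such a physically consistent optical phase-curve model follows, assuming an edge-on orbit and purely illustrative component amplitudes (not the fitted values for any of the three planets):

```python
import numpy as np

# Illustrative amplitudes (not fitted values from the paper).
AG, RP_A = 0.3, 0.08                    # scattering albedo, Rp/a
A_TH, A_ELL, A_BEAM = 40e-6, 15e-6, 5e-6  # thermal, ellipsoidal, beaming

def phase_curve(phi):
    """Relative flux vs orbital phase (phi = 0 at transit, 0.5 at eclipse)."""
    alpha = np.arccos(-np.cos(2 * np.pi * phi))         # phase angle
    lambert = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi
    reflected = AG * RP_A**2 * lambert                  # Lambertian reflection
    thermal = A_TH * (1 - np.cos(2 * np.pi * phi)) / 2  # dayside-peaked emission
    ellipsoidal = -A_ELL * np.cos(4 * np.pi * phi)      # tidal distortion of star
    beaming = A_BEAM * np.sin(2 * np.pi * phi)          # Doppler boosting
    return reflected + thermal + ellipsoidal + beaming

phi = np.linspace(0, 1, 9)
print(np.round(phase_curve(phi) * 1e6, 1))              # amplitudes in ppm
```

The two once-per-orbit components (reflection plus thermal versus beaming) and the twice-per-orbit ellipsoidal term have distinct phase signatures, which is what lets a fit separate albedo from the other contributions.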
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Factors influencing reporting and harvest probabilities in North American geese
Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F.; White, Gary C.; Caswell, D.F.
2009-01-01
We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI 0.69-0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.
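In the standard band-recovery formulation that this correction implies (our notation and assumptions, ignoring band loss), the harvest probability is the direct recovery probability divided by the reporting probability:

\[ \hat{h} = \hat{f} / \hat{\rho}, \]

so, for example, a direct recovery probability of 0.05 combined with the mean reporting probability of 0.73 would imply a harvest probability of about 0.05 / 0.73 ≈ 0.068 rather than 0.05.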
Jeton, Anne E.; Maurer, Douglas K.
2007-01-01
Recent estimates of ground-water inflow to the basin-fill aquifers of Carson Valley, Nevada, and California, from the adjacent Carson Range and Pine Nut Mountains ranged from 22,000 to 40,000 acre-feet per year using water-yield and chloride-balance methods. In this study, watershed models were developed for watersheds with perennial streams and for watersheds with ephemeral streams in the Carson Range and Pine Nut Mountains to provide an independent estimate of ground-water inflow. This report documents the development and calibration of the watershed models, presents model results, compares the results with recent estimates of ground-water inflow to the basin-fill aquifers of Carson Valley, and presents updated estimates of the ground-water budget for basin-fill aquifers of Carson Valley. The model used for the study was the Precipitation-Runoff Modeling System, a physically based, distributed-parameter model designed to simulate precipitation and snowmelt runoff as well as snowpack accumulation and snowmelt processes. Geographic Information System software was used to manage spatial data, characterize model drainages, and develop Hydrologic Response Units. Models were developed for (1) two watersheds with gaged perennial streams in the Carson Range and two watersheds with gaged perennial streams in the Pine Nut Mountains, using measured daily mean runoff; (2) ten watersheds with ungaged perennial streams, using estimated daily mean runoff; (3) ten watersheds with ungaged ephemeral streams in the Carson Range; and (4) a large area of ephemeral runoff near the Pine Nut Mountains. Models developed for the gaged watersheds were used as index models to guide the calibration of models for ungaged watersheds. Model calibration was constrained by daily mean runoff for 4 gaged watersheds and for 10 ungaged watersheds in the Carson Range estimated in a previous study. The models were further constrained by annual precipitation volumes estimated in a previous study to provide estimates of ground-water inflow using similar water input. The calibration periods were water years 1990-2002 for watersheds in the Carson Range, and water years 1981-97 for watersheds in the Pine Nut Mountains. Daily mean values for water years 1990-2002 were then simulated using the calibrated watershed models in the Pine Nut Mountains. The daily mean values of precipitation, runoff, evapotranspiration, and ground-water inflow simulated by the watershed models were summed to provide annual mean rates and volumes for each year of the simulations, and mean annual rates and volumes were computed for water years 1990-2002. Mean annual bias for the period of record for models of the Daggett Creek and Fredericksburg Canyon watersheds, two gaged perennial watersheds in the Carson Range, was within 4 percent, and relative errors were about 6 and 12 percent, respectively. Model fit was not as satisfactory for the two gaged perennial watersheds in the Pine Nut Mountains, Pine Nut and Buckeye Creeks. The Pine Nut Creek watershed model had a large negative mean annual bias and a relative error of -11 percent, and it underestimated runoff for all years but the wet years in the latter part of the record, but it adequately simulated the bulk of the spring runoff in most years. The Buckeye Creek watershed model overestimated mean annual runoff, with a relative error of about -5 percent when water year 1994 was removed from the analysis because of its poor record.
The bias and error of the calibrated models were within generally accepted limits for watershed models, indicating the simulated rates and volumes of runoff and ground-water inflow were reasonable. The total mean annual ground-water inflow to Carson Valley computed using estimates simulated by the watershed models was 38,000 acre-feet, including ground-water inflow from Eagle Valley, recharge from precipitation on eolian sand and gravel deposits, and ground-water recharge from precipitation on the western alluvial fans. The estimate was in close agreement with that obtained from the chloride-balance method, 40,000 acre-feet, but was considerably greater than the estimate obtained from the water-yield method, 22,000 acre-feet. The similar estimates obtained from the watershed models and chloride-balance method, two relatively independent methods, provide more confidence that they represent a reasonably accurate volume of ground-water inflow to Carson Valley. However, the two estimates are not completely independent because they use similar distributions of mean annual precipitation. Annual ground-water recharge of the basin-fill aquifers in Carson Valley ranged from 51,000 to 54,000 acre-feet computed using estimates of ground-water inflow to Carson Valley simulated from the watershed models combined with previous estimates of other ground-water budget components. Estimates of mean annual ground-water discharge range from 44,000 to 47,000 acre-feet. The low range estimate for ground-water recharge, 51,000 acre-feet per year, is most similar to the high range estimate for ground-water discharge, 47,000 acre-feet per year. Thus, an average annual volume of about 50,000 acre-feet is a reasonable estimate for mean annual ground-water recharge to and discharge from the basin-fill aquifers in Carson Valley. The results of watershed models indicate that significant interannual variability in the volumes of ground-water inflow is caused by climate variations. During multi-year drought conditions, the watershed simulations indicate that ground-water recharge could be as much as 80 percent less than the mean annual volume of 50,000 acre-feet.
Actis, Jason A; Honegger, Jasmin D; Gates, Deanna H; Petrella, Anthony J; Nolasco, Luis A; Silverman, Anne K
2018-02-08
Low back mechanics are important to quantify to study injury, pain and disability. As in vivo forces are difficult to measure directly, modeling approaches are commonly used to estimate these forces. Validation of model estimates is critical to gain confidence in modeling results across populations of interest, such as people with lower-limb amputation. Motion capture, ground reaction force and electromyographic data were collected from ten participants without an amputation (five male/five female) and five participants with a unilateral transtibial amputation (four male/one female) during trunk-pelvis range of motion trials in flexion/extension, lateral bending and axial rotation. A musculoskeletal model with a detailed lumbar spine and the legs including 294 muscles was used to predict L4-L5 loading and muscle activations using static optimization. Model estimates of L4-L5 intervertebral joint loading were compared to measured intradiscal pressures from the literature and muscle activations were compared to electromyographic signals. Model loading estimates were only significantly different from experimental measurements during trunk extension for males without an amputation and for people with an amputation, which may suggest a greater portion of L4-L5 axial load transfer through the facet joints, as facet loads are not captured by intradiscal pressure transducers. Pressure estimates between the model and previous work were not significantly different for flexion, lateral bending or axial rotation. Timing of model-estimated muscle activations compared well with electromyographic activity of the lumbar paraspinals and upper erector spinae. Validated estimates of low back loading can increase the applicability of musculoskeletal models to clinical diagnosis and treatment.
M-estimator for the 3D symmetric Helmert coordinate transformation
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2018-01-01
The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. The 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. Results with simulated data deviating slightly from the true model are used to show the developed method's statistical efficiency at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
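The full quaternion measurement model is beyond a short sketch, but the iteratively reweighted least-squares core is generic. Below is a minimal illustration on a linear model with a Huber-type weight function; the threshold and the MAD scale estimator are common textbook choices, not necessarily the paper's:

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weight function: 1 inside the threshold, k/|r| outside."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def irls(A, y, n_iter=20):
    """M-estimation of x in y = A x + e via iteratively reweighted LS."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]     # ordinary LS start
    for _ in range(n_iter):
        r = y - A @ x
        # Robust scale estimate from the median absolute deviation.
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))
        w = huber_weights(r / max(scale, 1e-12))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

# Small demo with one gross outlier in the measurements.
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(30), rng.normal(size=30)])
y = A @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=30)
y[5] += 5.0                                      # outlier
print("robust estimate:", np.round(irls(A, y), 3))
```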
Estimating the volume of glaciers in the Himalayan-Karakoram region using different methods
NASA Astrophysics Data System (ADS)
Frey, H.; Machguth, H.; Huss, M.; Huggel, C.; Bajracharya, S.; Bolch, T.; Kulkarni, A.; Linsbauer, A.; Salzmann, N.; Stoffel, M.
2014-12-01
Ice volume estimates are crucial for assessing water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40 775 km2. An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated with local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km3, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with measured local ice thicknesses. However, total volume estimates from area-related relations are larger than those from the other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of a careful and critical evaluation.
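For orientation, the area-volume relations referred to are power laws of the form

\[ V = c\,A^{\gamma}, \]

with commonly used mountain-glacier calibrations around \(\gamma \approx 1.375\) and \(c \approx 0.034\ \mathrm{km}^{3-2\gamma}\) (published relations differ in these constants, which is one source of the spread in the totals above); the relation is applied glacier by glacier and summed over the inventory.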
Tan, Sarah; Makela, Susanna; Heller, Daliah; Konty, Kevin; Balter, Sharon; Zheng, Tian; Stark, James H
2018-06-01
Existing methods to estimate the prevalence of chronic hepatitis C (HCV) in New York City (NYC) are limited in scope and fail to assess hard-to-reach subpopulations with highest risk such as injecting drug users (IDUs). To address these limitations, we employ a Bayesian multi-parameter evidence synthesis model to systematically combine multiple sources of data, account for bias in certain data sources, and provide unbiased HCV prevalence estimates with associated uncertainty. Our approach improves on previous estimates by explicitly accounting for injecting drug use and including data from high-risk subpopulations such as the incarcerated, and is more inclusive, utilizing ten NYC data sources. In addition, we derive two new equations to allow age at first injecting drug use data for former and current IDUs to be incorporated into the Bayesian evidence synthesis, a first for this type of model. Our estimated overall HCV prevalence as of 2012 among NYC adults aged 20-59 years is 2.78% (95% CI 2.61-2.94%), which represents between 124,900 and 140,000 chronic HCV cases. These estimates suggest that HCV prevalence in NYC is higher than previously indicated from household surveys (2.2%) and the surveillance system (2.37%), and that HCV transmission is increasing among young injecting adults in NYC. An ancillary benefit from our results is an estimate of current IDUs aged 20-59 in NYC: 0.58% or 27,600 individuals.
Tree stability under wind: simulating uprooting with root breakage using a finite element method.
Yang, Ming; Défossez, Pauline; Danjon, Frédéric; Fourcaud, Thierry
2014-09-01
Windstorms are the major natural hazard affecting European forests, causing tree damage and timber losses. Modelling tree anchorage mechanisms has progressed with advances in plant architectural modelling, but it is still limited in terms of estimation of anchorage strength. This paper aims to provide a new model for root anchorage, including the successive breakage of roots during uprooting. The model was based on the finite element method. The breakage of individual roots was taken into account using a failure law derived from previous work carried out on fibre metal laminates. Soil mechanical plasticity was considered using the Mohr-Coulomb failure criterion. The mechanical model for roots was implemented in the numerical code ABAQUS using beam elements embedded in a soil block meshed with 3-D solid elements. The model was tested by simulating tree-pulling experiments previously carried out on a tree of Pinus pinaster (maritime pine). Soil mechanical parameters were obtained from laboratory tests. Root system architecture was digitized and imported into ABAQUS while root material properties were estimated from the literature. Numerical simulations of tree-pulling tests exhibited realistic successive root breakages during uprooting, which could be seen in the resulting response curves. Broken roots could be visually located within the root system at any stage of the simulations. The model allowed estimation of anchorage strength in terms of the critical turning moment and accumulated energy, which were in good agreement with in situ measurements. This study provides the first model of tree anchorage strength for P. pinaster derived from the mechanical strength of individual roots. The generic nature of the model permits its further application to other tree species and soil conditions.
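The soil failure criterion referred to is, in its usual form,

\[ \tau_f = c + \sigma_n \tan\varphi, \]

where \(\tau_f\) is the shear strength, \(c\) the cohesion, \(\sigma_n\) the normal stress, and \(\varphi\) the internal friction angle, with c and φ taken here from the laboratory soil tests.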
Huang, Lei; Liao, Li; Wu, Cathy H.
2016-01-01
Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and in particular determining which model better describes the evolutionary process underlying an observed network, remains a challenge. The traditional approach is to use a model with presumed parameters to generate a network and then evaluate the fit by summary statistics, which, however, cannot capture complete network-structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested our method for its power in differentiating models and estimating parameters on simulated data and found significant improvement in performance benchmarks, as compared with a previous method. We further applied our method to real data of protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for human PPI networks and the Scale-Free model as the predominant mechanism for yeast PPI networks. PMID:26357273
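The ABC-DEP sampler itself is not reproduced here, but the underlying ABC idea can be sketched with plain rejection sampling on a toy duplication-style growth model; the simulator, summary statistic, prior, and tolerance below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(p, n=60):
    """Toy duplication-attachment growth: each new node links to a random
    anchor and copies each of the anchor's links with probability p.
    Returns the mean degree as the summary statistic."""
    adj = {0: {1}, 1: {0}}
    for new in range(2, n):
        anchor = int(rng.integers(0, new))
        links = {anchor} | {v for v in adj[anchor] if rng.random() < p}
        adj[new] = links
        for v in links:
            adj[v].add(new)
    return np.mean([len(nbrs) for nbrs in adj.values()])

# "Observed" summary from a network grown with the true parameter 0.3.
obs = simulate_summary(0.3)

# Rejection ABC: draw parameters from a uniform prior and keep draws
# whose simulated summary falls within a tolerance of the observed one.
prior_draws = rng.uniform(0.0, 1.0, 2000)
accepted = [p for p in prior_draws if abs(simulate_summary(p) - obs) < 0.5]
print(f"approximate posterior mean: {np.mean(accepted):.2f} "
      f"({len(accepted)} of {len(prior_draws)} draws accepted)")
```

The accepted draws approximate the posterior over the duplication parameter; ABC-DEP replaces this brute-force rejection step with a differential-evolution sampler to explore the parameter space far more efficiently.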
Hirata, K.; Geist, E.; Satake, K.; Tanioka, Y.; Yamaki, S.
2003-01-01
We inverted 13 tsunami waveforms recorded in Japan to estimate the slip distribution of the 1952 Tokachi-Oki earthquake (M 8.1), which occurred southeast off Hokkaido along the southern Kuril subduction zone. The previously estimated source area determined from tsunami travel times [Hatori, 1973] did not coincide with the observed aftershock distribution. Our results show that a large amount of slip occurred in the aftershock area east of Hatori's tsunami source area, suggesting that a portion of the interplate thrust near the trench was ruptured by the main shock. We also found more than 5 m of slip along the deeper part of the seismogenic interface, just below the central part of Hatori's tsunami source area. This region, which also has the largest stress drop during the main shock, had few aftershocks. Large tsunami heights on the eastern Hokkaido coast are better explained by the heterogeneous slip model than by previous uniform-slip fault models. The total seismic moment is estimated to be 1.87 × 10^21 N m, giving a moment magnitude of Mw = 8.1. The revised tsunami source area is estimated to be 25.2 × 10^3 km^2, about 3 times larger than the previous tsunami source area. Of the four large earthquakes with M ≥ 7 that subsequently occurred in and around the rupture area of the 1952 event, three were at the edges of regions with relatively small amounts of slip. We also found that a subducted seamount near the edge of the rupture area possibly impeded slip along the plate interface.
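As a consistency check, the reported moment and magnitude follow the standard moment-magnitude relation:

\[ M_w = \frac{\log_{10} M_0 - 9.1}{1.5} = \frac{\log_{10}\!\left(1.87 \times 10^{21}\right) - 9.1}{1.5} \approx 8.1. \]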
NASA Astrophysics Data System (ADS)
Loppini, A.; Pedersen, M. G.; Braun, M.; Filippi, S.
2017-09-01
The importance of gap-junction coupling between β cells in pancreatic islets is well established in mouse. Such ultrastructural connections synchronize cellular activity, confine biological heterogeneity, and enhance insulin pulsatility. Dysfunction of coupling has been associated with diabetes and altered β-cell function. However, the role of gap junctions between human β cells is still largely unexplored. By using patch-clamp recordings of β cells from human donors, we previously estimated electrical properties of these channels by mathematical modeling of pairs of human β cells. In this work we revise our estimate by modeling triplet configurations and larger heterogeneous clusters. We find that a coupling conductance in the range 0.005-0.020 nS/pF can reproduce experiments in almost all the simulated arrangements. We finally explore the consequence of gap-junction coupling of this magnitude between β cells with mutant variants of the ATP-sensitive potassium channels involved in some metabolic disorders and diabetic conditions, translating studies performed on rodents to the human case. Our results are finally discussed from the perspective of therapeutic strategies. In summary, modeling of more realistic clusters with more than two β cells slightly lowers our previous estimate of gap-junction conductance and gives rise to patterns that more closely resemble experimental traces.
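In the usual formulation that such an estimate presupposes (with conductance normalized by cell capacitance, hence the nS/pF units), the gap-junction current enters each cell's voltage equation as

\[ \frac{dV_i}{dt} = -\frac{I_{\mathrm{ion},i}}{C_i} + \sum_{j \in \mathcal{N}(i)} g_c \,(V_j - V_i), \qquad g_c \in [0.005, 0.020]\ \mathrm{nS/pF}, \]

where the sum runs over the β cells neighbouring cell i; larger clusters add more coupling terms per cell, which is why the pairwise and cluster-based estimates of g_c differ.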
Estimation of body temperature rhythm based on heart activity parameters in daily life.
Sooyoung Sim; Heenam Yoon; Hosuk Ryou; Kwangsuk Park
2014-01-01
Body temperature contains valuable health-related information, such as circadian rhythm and the menstrual cycle. Previous studies have also found that the body temperature rhythm in daily life is related to sleep disorders and cognitive performance. However, monitoring body temperature with existing devices during daily life is not easy because they are invasive, intrusive, or expensive. Therefore, a technology that can accurately and nonintrusively monitor body temperature is required. In this study, we developed a body temperature estimation model based on heart rate and heart rate variability parameters. Although this work was inspired by previous research, we newly identified that the model can be applied to body temperature monitoring in daily life. We also found that normalized mean heart rate (nMHR) and frequency-domain parameters of heart rate variability performed better than other parameters. Although the model should be validated with a larger number of subjects, and additional algorithms are needed to decrease the accumulated estimation error, we were able to verify the usefulness of this approach. We expect that core body temperature and circadian rhythm could be monitored with a simple heart rate monitor, yielding various health-related information derived from the daily body temperature rhythm.
Manyema, Mercy; Veerman, Lennert J; Tugendhaft, Aviva; Labadarios, Demetre; Hofman, Karen J
2016-05-31
Stroke poses a growing human and economic burden in South Africa. Excess sugar consumption, especially from sugar-sweetened beverages (SSBs), has been associated with increased obesity and stroke risk. Research shows that price increases for SSBs can influence consumption, and modelling evidence suggests that taxing SSBs has the potential to reduce obesity and related diseases. This study estimates the potential impact of an SSB tax on stroke-related mortality, costs, and health-adjusted life years in South Africa. A proportional multi-state life table-based model was constructed in Microsoft Excel (2010). We used consumption data from the 2012 South African National Health and Nutrition Examination Survey, previously published own- and cross-price elasticities of SSBs, and energy balance equations to estimate changes in daily energy intake and BMI arising from increased SSB prices. Stroke relative risk and prevalent years lived with disability estimates from the Global Burden of Disease Study, and modelled disease epidemiology estimates from a previous study, were used to estimate the effect of the BMI changes on the burden of stroke. Our model predicts that an SSB tax may avert approximately 72 000 deaths, 550 000 stroke-related health-adjusted life years, and over ZAR5 billion (USD400 million; range USD296-576 million) in health care costs over 20 years. Over 20 years, the number of incident stroke cases may be reduced by approximately 85 000 and prevalent cases by about 13 000. Fiscal policy has the potential, as part of a multi-faceted approach, to mitigate the growing burden of stroke in South Africa and contribute to the achievement of the target set by the Department of Health to reduce relative premature mortality (less than 60 years) from non-communicable diseases by the year 2020.
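The first links of the modelled causal chain (tax → price → consumption → energy intake) can be made concrete with a toy calculation; every number below is a placeholder, not an input from the study:

```python
# Illustrative first steps of the tax model's causal chain; all numbers
# are placeholders, not the study's South African inputs.
baseline_ml_per_day = 184.0     # hypothetical SSB consumption
own_price_elasticity = -1.3     # hypothetical own-price elasticity
price_increase = 0.20           # 20% tax fully passed through to price

# Percentage change in consumption implied by the elasticity.
d_consumption = own_price_elasticity * price_increase        # -26%
new_ml = baseline_ml_per_day * (1 + d_consumption)

# Energy change, assuming ~1.8 kJ per ml of SSB (placeholder).
kj_per_ml = 1.8
d_energy_kj = (new_ml - baseline_ml_per_day) * kj_per_ml
print(f"consumption change: {d_consumption:.0%}, "
      f"daily energy change: {d_energy_kj:.0f} kJ")
```

The life-table model then converts sustained energy-intake changes into BMI shifts via energy balance equations, and BMI shifts into stroke burden via the relative risks.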
NASA Technical Reports Server (NTRS)
Hu, Xuefei; Waller, Lance A.; Lyapustin, Alexei; Wang, Yujie; Al-Hamdan, Mohammad Z.; Crosson, William L.; Estes, Maurice G., Jr.; Estes, Sue M.; Quattrochi, Dale A.; Puttaswamy, Sweta Jinnagara;
2013-01-01
Previous studies showed that fine particulate matter (PM2.5, particles smaller than 2.5 micrometers in aerodynamic diameter) is associated with various health outcomes. Ground in situ measurements of PM2.5 concentrations are considered to be the gold standard, but are time-consuming and costly. Satellite-retrieved aerosol optical depth (AOD) products have the potential to supplement the ground monitoring networks to provide spatiotemporally resolved PM2.5 exposure estimates. However, the coarse resolutions (e.g., 10 km) of the satellite AOD products used in previous studies make it very difficult to estimate urban-scale PM2.5 characteristics that are crucial to population-based PM2.5 health effects research. In this paper, a new aerosol product with 1 km spatial resolution derived by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm was examined using a two-stage spatial statistical model with meteorological fields (e.g., wind speed) and land use parameters (e.g., forest cover, road length, elevation, and point emissions) as ancillary variables to estimate daily mean PM2.5 concentrations. The study area is the southeastern U.S., and data for 2003 were collected from various sources. A cross-validation approach was implemented for model validation. We obtained an R^2 of 0.83, mean prediction error (MPE) of 1.89 µg/m^3, and square root of the mean squared prediction errors (RMSPE) of 2.73 µg/m^3 in model fitting, and an R^2 of 0.67, MPE of 2.54 µg/m^3, and RMSPE of 3.88 µg/m^3 in cross validation. Both model fitting and cross validation indicate a good fit between the dependent variable and the predictor variables. The results showed that 1 km spatial resolution MAIAC AOD can be used to estimate PM2.5 concentrations.
Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark
2015-01-01
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart. PMID:25909047
LATTE Linking Acoustic Tests and Tagging Using Statistical Estimation
2015-09-30
the complexity of the model: (from simplest to most complex) Kalman filter, Markov chain Monte-Carlo (MCMC) and ABC. Many of these methods have been...using SMMs fitted using Kalman filters. Therefore, using the DTAG data, we can estimate the distributions associated with 2D horizontal displacement...speed (a key problem in the previous Kalman filter implementation). This new approach also allows the animal's horizontal movement direction to differ
Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution
2009-10-01
scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and...estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts...for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target
Prion Amplification and Hierarchical Bayesian Modeling Refine Detection of Prion Infection
NASA Astrophysics Data System (ADS)
Wyckoff, A. Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J.; Pulford, Bruce; Wild, Margaret; Antolin, Michael; Vercauteren, Kurt; Zabel, Mark
2015-02-01
Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.
Modeling Perceptual Decision Processes
2014-09-17
Ratcliff, & Wagenmakers, in press). Previous research suggests that playing action video games improves performance on sensory, perceptual, and...estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster...third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the
USDA-ARS?s Scientific Manuscript database
Irrigation is a widely used water management practice that is often poorly parameterized in land surface and climate models. Previous studies have addressed this issue via use of irrigation area, applied water inventory data, or soil moisture content. These approaches have a variety of drawbacks i...
Wildfire risk and housing prices: a case study from Colorado Springs.
G.H. Donovan; P.A. Champ; D.T. Butry
2007-01-01
Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...
Imputation and Model-Based Updating Technique for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
Nonlinear Penalized Estimation of True Q-Matrix in Cognitive Diagnostic Models
ERIC Educational Resources Information Center
Xiang, Rui
2013-01-01
A key issue of cognitive diagnostic models (CDMs) is the correct identification of Q-matrix which indicates the relationship between attributes and test items. Previous CDMs typically assumed a known Q-matrix provided by domain experts such as those who developed the questions. However, misspecifications of Q-matrix had been discovered in the past…
Novel mathematical model to estimate ball impact force in soccer.
Iga, Takahito; Nunome, Hiroyuki; Sano, Shinya; Sato, Nahoko; Ikegami, Yasuo
2017-11-22
Ball impact force during soccer kicking is important to quantify from both performance and chronic injury prevention perspectives. We aimed to verify the appropriateness of previous models used to estimate ball impact force and to propose an improved model that better captures the time history of ball impact force. A soccer ball was fired directly onto a force platform (10 kHz) at five realistic kicking ball velocities, and ball behaviour was captured by a high-speed camera (5,000 Hz). The time history of ball impact force was estimated using three existing models and two new models. A new mathematical model that took into account the rapid change in ball surface area and heterogeneous ball deformation showed a distinct advantage in estimating the peak forces and their occurrence times and in reproducing the time history of ball impact force more precisely, thereby reinforcing the possible mechanics of 'footballer's ankle'. Ball impact time was also systematically shortened as ball velocity increased, in contrast to the practical understanding of how faster ball velocities are produced; the aspect of ball contact time must therefore be considered carefully from a practical point of view.
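As an order-of-magnitude check on the forces involved (a nominal ball mass and illustrative impact values, not the study's measurements), the impulse-momentum theorem gives the mean impact force:

\[ \bar{F} = \frac{m\,\Delta v}{\Delta t} \approx \frac{0.43\ \mathrm{kg} \times 25\ \mathrm{m\,s^{-1}}}{0.009\ \mathrm{s}} \approx 1.2\ \mathrm{kN}; \]

the contribution of the new model is to resolve how this force is distributed over the contact time, not just its average.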
Crop weather models of barley and spring wheat yield for agrophysical units in North Dakota
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
Models based on multiple regression were developed to estimate barley yield and spring wheat yield from weather data for Agrophysical Units (APU) in North Dakota. The predictor variables are derived from monthly average temperature and monthly total precipitation data at meteorological stations in the cooperative network. The models are similar in form to the previous models developed for Crop Reporting Districts (CRD). The trends and derived variables were the same, and the approach to selecting the significant predictors was similar to that used in developing the CRD models. The APU models show slight improvements in some of the statistics of the models, e.g., explained variation. These models are to be independently evaluated and compared to the previously evaluated CRD models. The comparison will indicate the preferred model area for this application, i.e., APU or CRD.
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma
2014-01-01
A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. It contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We applied the Cox PH cure model to breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for the imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.
Salomon, Joshua A
2003-01-01
Background In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. Conclusions Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
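A compact way to see the estimation step described here is the rank-ordered ("exploded") logit form of the random utility model, in which a full ranking decomposes into successive best-of-the-remaining conditional-logit choices. The sketch below is illustrative only: the design matrix, coefficients, and rankings are hypothetical stand-ins, not the study's EQ-5D data.

```python
import numpy as np
from scipy.special import logsumexp

def rank_ordered_logit_loglik(beta, X, rankings):
    """Log-likelihood of a rank-ordered ("exploded") logit model.

    beta     : (k,) coefficients of the latent utility function
    X        : (n_states, k) design matrix (e.g., domain scores)
    rankings : iterable of index sequences, each ordering states from
               most- to least-preferred for one respondent
    """
    u = X @ beta                      # latent utility of each state
    ll = 0.0
    for ranking in rankings:
        rest = np.asarray(ranking)
        while rest.size > 1:
            # probability the top-ranked state beats all remaining ones
            ll += u[rest[0]] - logsumexp(u[rest])
            rest = rest[1:]
    return ll

# Hypothetical example: 4 states described by 2 attributes, 2 respondents
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rankings = [[0, 1, 2, 3], [0, 2, 1, 3]]
print(rank_ordered_logit_loglik(np.array([-0.5, -0.8]), X, rankings))
```

Maximizing this function over beta (e.g., with scipy.optimize.minimize on its negative) yields latent-utility coefficients from which interval-scaled valuations can be predicted.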
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
Comparison of prospective risk estimates for postoperative complications: human vs computer model.
Glasgow, Robert E; Hawn, Mary T; Hosokawa, Patrick W; Henderson, William G; Min, Sung-Joon; Richman, Joshua S; Tomeh, Majed G; Campbell, Darrell; Neumayer, Leigh A
2014-02-01
Surgical quality improvement tools such as NSQIP are limited in their ability to prospectively affect individual patient care by the retrospective audit and feedback nature of their design. We hypothesized that statistical models using patient preoperative characteristics could prospectively provide risk estimates of postoperative adverse events comparable to risk estimates provided by experienced surgeons, and could be useful for stratifying preoperative assessment of patient risk. This was a prospective observational cohort. Using previously developed models for 30-day postoperative mortality, overall morbidity, cardiac, thromboembolic, pulmonary, renal, and surgical site infection (SSI) complications, model and surgeon estimates of risk were compared with each other and with actual 30-day outcomes. The study cohort included 1,791 general surgery patients operated on between June 2010 and January 2012. Observed outcomes were mortality (0.2%), overall morbidity (8.2%), and pulmonary (1.3%), cardiac (0.3%), thromboembolism (0.2%), renal (0.4%), and SSI (3.8%) complications. Model and surgeon risk estimates showed significant correlation (p < 0.0001) for each outcome category. When surgeons perceived patient risk for overall morbidity to be low, the model-predicted risk and observed morbidity rates were 2.8% and 4.1%, respectively, compared with 10% and 18% in perceived high risk patients. Patients in the highest quartile of model-predicted risk accounted for 75% of observed mortality and 52% of morbidity. Across a broad range of general surgical operations, we confirmed that the model risk estimates are in fairly good agreement with risk estimates of experienced surgeons. Using these models prospectively can identify patients at high risk for morbidity and mortality, who could then be targeted for intervention to reduce postoperative complications. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Vanlunteren, A.
1977-01-01
A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail. It consisted of a two-dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines. Each loop had its own manipulator. The two forcing functions were mutually independent and each consisted of 9 sine waves. A human operator model was chosen consisting of 4 describing functions, thus taking into account possible linear cross couplings. From the Fourier coefficients of the relevant signals, the model parameters were estimated after alignment, averaging over a number of runs, and decoupling. The results show that for the elements in the main loops the crossover model applies. A weak linear cross coupling existed with the same dynamics as the elements in the main loops but with a negative sign.
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consisted of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques which allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
Multiple Component Event-Related Potential (mcERP) Estimation
NASA Technical Reports Server (NTRS)
Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)
2002-01-01
We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. The mcERP algorithm also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.
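As we read the abstract, the single-trial signal model can be stated schematically as follows (the notation is ours, not the authors'): each component contributes a stereotypic waveshape with a trial-specific amplitude and onset latency, plus ongoing background activity.

```latex
x_r(t) = \sum_{c=1}^{C} \alpha_{rc}\, s_c(t - \tau_{rc}) + \eta_r(t)
```

Here x_r(t) is the recording on trial r, s_c the stereotypic waveshape of component c, α_rc and τ_rc its single-trial amplitude and onset latency, and η_r(t) ongoing background activity; factor analysis and ICA fit related mixtures but without the per-trial latency term.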
Gong, Ping; Nan, Xiaofei; Barker, Natalie D; Boyd, Robert E; Chen, Yixin; Wilkins, Dawn E; Johnson, David R; Suedel, Burton C; Perkins, Edward J
2016-03-08
Chemical bioavailability is an important dose metric in environmental risk assessment. Although many approaches have been used to evaluate bioavailability, no single approach is free from limitations. Previously, we developed a new genomics-based approach that integrated microarray technology and regression modeling for predicting bioavailability (tissue residue) of explosives compounds in exposed earthworms. In the present study, we further compared 18 different regression models and performed variable selection simultaneously with parameter estimation. This refined approach was applied to both previously collected and newly acquired earthworm microarray gene expression datasets for three explosive compounds. Our results demonstrate that a prediction accuracy of R² = 0.71-0.82 was achievable at a relatively low model complexity, with as few as 3-10 predictor genes per model. These results are much more encouraging than our previous ones. This study has demonstrated that our approach is promising for bioavailability measurement, which warrants further studies of mixed contamination scenarios in field settings.
Latitudinal distributions of particulate carbon export across the North Western Atlantic Ocean
NASA Astrophysics Data System (ADS)
Puigcorbé, Viena; Roca-Martí, Montserrat; Masqué, Pere; Benitez-Nelson, Claudia; Rutgers van der Loeff, Michiel; Bracher, Astrid; Moreau, Sebastien
2017-11-01
234Th-derived carbon export fluxes were measured in the Atlantic Ocean under the GEOTRACES framework to evaluate basin-scale export variability. Here, we present the results from the northern half of the GA02 transect, spanning from the equator to 64°N. As a result of limited site-specific C/234Th ratio measurements, we further combined our data with previous work to develop a basin-wide C/234Th ratio depth curve. While the magnitude of organic carbon fluxes varied depending on the C/234Th ratio used, latitudinal trends were similar, with sizeable and variable organic carbon export fluxes occurring at high latitudes and low to negligible fluxes occurring in oligotrophic waters. Our results agree with previous studies, except at the boundaries between domains, where fluxes were relatively enhanced. Three different models were used to obtain satellite-derived net primary production (NPP). In general, NPP estimates had similar trends along the transect, but there were significant differences in the absolute magnitude depending on the model used. Nevertheless, organic carbon export efficiencies were generally < 25%, with the exception of a few stations located in the transition area between the riverine and the oligotrophic domains and between the oligotrophic and the temperate domains. Satellite-derived organic carbon export models from Dunne et al. (2005) (D05), Laws et al. (2011) (L11) and Henson et al. (2011) (H11) were also compared to our 234Th-derived carbon export fluxes. D05 and L11 provided estimates closest to values obtained with the 234Th approach (within a 3-fold difference), but with no clear trends. The H11 model, on the other hand, consistently provided lower export estimates. The large increase in export data in the Atlantic Ocean derived from the GEOTRACES Program, combined with satellite observations and modeling efforts, continues to improve the estimates of carbon export in this ocean basin and therefore to reduce uncertainty in the global carbon budget. However, our results also suggest that tuning export models and including biological parameters at a regional scale is necessary for improving satellite-modeling efforts and providing export estimates that are more representative of in situ observations.
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
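The weighting-and-summing step has a simple linear-algebra core: combine the per-state information matrices and score a candidate design by the log-determinant of the total FIM (D-optimality). A minimal sketch with made-up 3×3 matrices, not the paper's sleep model:

```python
import numpy as np

def d_criterion(fim_awake, fim_asleep, w_awake=0.5, w_asleep=0.5):
    """D-optimality score for a design whose total information is a
    weighted sum of the two Markov-component FIMs."""
    fim_total = w_awake * fim_awake + w_asleep * fim_asleep
    # log-determinant is the usual numerically stable D-criterion
    sign, logdet = np.linalg.slogdet(fim_total)
    return logdet if sign > 0 else -np.inf

# Hypothetical positive-definite component FIMs for a 3-parameter model
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); fim_awake = A @ A.T
B = rng.normal(size=(3, 3)); fim_asleep = B @ B.T
print(d_criterion(fim_awake, fim_asleep, w_awake=0.7, w_asleep=0.3))
```

A design optimizer would evaluate this criterion across candidate dose allocations and keep the design with the largest value.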
Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.
2016-01-01
Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
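The simulation logic, stripped to its essentials: generate a true exposure surface, degrade it into a predicted exposure, regress health on the prediction, and compare the fitted slope with the truth. The surface, error level (classical-type additive error, one simple case), and effect size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, true_beta = 5_000, 0.10

true_exposure = rng.gamma(shape=4.0, scale=2.5, size=n)  # stand-in "surface"
# Predicted exposure from an imperfect exposure model
predicted = true_exposure + rng.normal(0.0, 3.0, size=n)
health = 1.0 + true_beta * true_exposure + rng.normal(0.0, 1.0, size=n)

# OLS slope of health on the *predicted* exposure
X = np.column_stack([np.ones(n), predicted])
beta_hat = np.linalg.lstsq(X, health, rcond=None)[0][1]
print(f"true effect {true_beta:.3f}, estimated {beta_hat:.3f} "
      f"({100 * (beta_hat - true_beta) / true_beta:+.0f}% bias)")
```

This toy case shows the classical-error attenuation direction; the study's more complex kriging and land use regression prediction errors can bias estimates either way, as the abstract reports.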
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Evaluation of the Williams-type spring wheat model in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
The Williams-type model, developed similarly to previous models of C.V.D. Williams, uses monthly temperature and precipitation data as well as soil and topological variables to predict the yield of the spring wheat crop. The models are statistically developed using the regression technique. Eight model characteristics are examined in the evaluation of the model. Evaluation is performed at the crop reporting district level, at the state level, and for the entire region. A ten-year bootstrap test was the basis of the statistical evaluation. The accuracy and current indication of modeled yield reliability could show improvement. There is great variability in the bias measured over the districts, but there is a slight overall positive bias. The model estimates for the east central crop reporting district in Minnesota are not accurate. The estimates of yield for 1974 were inaccurate for all of the models.
Carnegie, Nicole Bohme
2011-04-15
The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
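The proposed correction is a parametric, model-based bootstrap: simulate new datasets from the fitted surveillance model, re-estimate incidence on each replicate, and use the replicates for bias correction and interval construction. A generic sketch with a toy Poisson "model" standing in for the actual infection-and-testing process:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_incidence(counts):
    # Toy estimator: mean diagnoses per period (stand-in for the real one)
    return counts.mean()

observed = rng.poisson(lam=12.0, size=24)       # toy surveillance series
theta_hat = estimate_incidence(observed)

# Model-based bootstrap: resample from the *fitted* model, re-estimate
reps = np.array([estimate_incidence(rng.poisson(theta_hat, observed.size))
                 for _ in range(5_000)])
bias = reps.mean() - theta_hat
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"estimate {theta_hat:.2f}, bootstrap bias {bias:+.3f}, "
      f"95% CI ({lo:.2f}, {hi:.2f})")
```

In the paper's setting, the simulation step is the full specified infection-and-surveillance model rather than a single Poisson draw, which is what lets the bootstrap both correct the point estimate and tighten the interval.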
Building an ACT-R Reader for Eye-Tracking Corpus Data.
Dotlačil, Jakub
2018-01-01
Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data, eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works (most prominently, Engelmann, Vasishth, Engbert & Kliegl), the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov-Chain Monte Carlo (MCMC) techniques, rather than by relying on the mix of manual selection + default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models. Copyright © 2017 Cognitive Science Society, Inc.
Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittman, Richard S.
2013-09-20
This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.
Household water use and conservation models using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.
2013-10-01
The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
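The existing-conditions model can be pictured as Monte Carlo draws of per-end-use parameters from probability distributions, summed to household demand. The distributions and magnitudes below are invented placeholders, not the calibrated San Ramon values:

```python
import numpy as np

rng = np.random.default_rng(7)
n_households = 1_000

# Hypothetical end-use parameter distributions (gallons/household/day)
showers = rng.normal(38.0, 9.0, n_households).clip(min=0)
toilets = rng.normal(33.0, 7.0, n_households).clip(min=0)
washers = rng.normal(22.0, 8.0, n_households).clip(min=0)
faucets = rng.normal(27.0, 6.0, n_households).clip(min=0)
outdoor = rng.lognormal(mean=4.0, sigma=0.6, size=n_households)

total = showers + toilets + washers + faucets + outdoor
print(f"simulated mean use: {total.mean():.0f} gal/day "
      f"(5th-95th pct: {np.percentile(total, 5):.0f}-"
      f"{np.percentile(total, 95):.0f})")
```

A calibrated version would adjust the distributions until the simulated seasonal totals match the metered record, which is the role of the 2006-2011 data in the study.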
O'Brien, Jake William; Banks, Andrew Phillip William; Novic, Andrew Joseph; Mueller, Jochen F; Jiang, Guangming; Ort, Christoph; Eaglesham, Geoff; Yuan, Zhiguo; Thai, Phong K
2017-04-04
A key uncertainty of wastewater-based epidemiology is the size of the population which contributed to a given wastewater sample. We previously developed and validated a Bayesian inference model to estimate population size based on 14 population markers which: (1) are easily measured and (2) have mass loads which correlate with population size. However, the potential uncertainty of the model prediction due to in-sewer degradation of these markers was not evaluated. In this study, we addressed this gap by testing their stability under sewer conditions and assessed whether degradation impacts the model estimates. Five markers, which formed the core of our model, were stable in the sewers while the others were not. Our evaluation showed that the presence of unstable population markers in the model did not decrease the precision of the population estimates provided that stable markers such as acesulfame remained in the model. However, to achieve the minimum uncertainty in population estimates, we propose that the core markers to be included in population models for other sites should meet two additional criteria: (3) negligible degradation in wastewater to ensure the stability of chemicals during collection; and (4) less than 10% in-sewer degradation during the mean residence time of the sewer network.
Estimating site occupancy, colonization, and local extinction when a species is detected imperfectly
MacKenzie, D.I.; Nichols, J.D.; Hines, J.E.; Knutson, M.G.; Franklin, A.B.
2003-01-01
Few species are likely to be so evident that they will always be detected when present. Failing to allow for the possibility that a target species was present, but undetected, at a site will lead to biased estimates of site occupancy, colonization, and local extinction probabilities. These population vital rates are often of interest in long-term monitoring programs and metapopulation studies. We present a model that enables direct estimation of these parameters when the probability of detecting the species is less than 1. The model does not require any assumptions of process stationarity, as do some previous methods, but does require detection/nondetection data to be collected in a manner similar to Pollock's robust design as used in mark-recapture studies. Via simulation, we show that the model provides good estimates of parameters for most scenarios considered. We illustrate the method with data from monitoring programs of Northern Spotted Owls (Strix occidentalis caurina) in northern California and tiger salamanders (Ambystoma tigrinum) in Minnesota, USA.
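The single-season core of this likelihood can be written down directly: a site's detection history contributes ψ·p^y(1−p)^(n−y) if the site is occupied, plus (1−ψ) if the species was never detected there. A minimal sketch of that single-season case (the full model adds colonization and extinction parameters between seasons):

```python
import numpy as np

def occupancy_loglik(psi, p, histories):
    """Single-season occupancy log-likelihood.

    psi       : occupancy probability
    p         : per-visit detection probability
    histories : (n_sites, n_visits) array, 1 = detected, 0 = not detected
    """
    y = np.asarray(histories)
    det = y.sum(axis=1)
    n_visits = y.shape[1]
    # P(history | occupied): independent Bernoulli detections
    p_occ = psi * p ** det * (1 - p) ** (n_visits - det)
    # Sites never detected may also simply be unoccupied
    p_site = p_occ + (det == 0) * (1 - psi)
    return np.log(p_site).sum()

histories = np.array([[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0]])
print(occupancy_loglik(psi=0.6, p=0.4, histories=histories))
```

Maximizing this over ψ and p recovers occupancy corrected for imperfect detection, which is why all-zero histories do not force the occupancy estimate to zero.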
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
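Each of the report's equations takes the familiar logistic form: the probability that summer streamflow falls below a drought threshold, as a function of the preceding winter's streamflow. A minimal sketch follows; the intercept and slope are invented placeholders, not values from the report's tables.

```python
import numpy as np

def drought_flow_probability(winter_flow, b0=2.0, b1=-0.015):
    """P(summer flow < threshold | winter flow), logistic form.
    b0 and b1 are hypothetical stand-ins for a basin's fitted values."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * winter_flow)))

for q in (50, 150, 400):   # example winter streamflows (cfs)
    print(f"winter flow {q:3d} cfs -> drought probability "
          f"{drought_flow_probability(q):.2f}")
```

With a negative slope, higher winter flows map to lower summer drought probability, which is the pattern the lead-time forecasts exploit.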
Estimates of present and future flood risk in the conterminous United States
NASA Astrophysics Data System (ADS)
Wing, Oliver E. J.; Bates, Paul D.; Smith, Andrew M.; Sampson, Christopher C.; Johnson, Kris A.; Fargione, Joseph; Morefield, Philip
2018-03-01
Past attempts to estimate rainfall-driven flood risk across the US either have incomplete coverage, coarse resolution or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US with a 2D representation of flood physics to produce estimates of flood hazard, which match to within 90% accuracy the skill of local models built with detailed data. These flood depths are combined with exposure datasets of commensurate resolution to calculate current and future flood risk. Our data show that the total US population exposed to serious flooding is 2.6-3.1 times higher than previous estimates, and that nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps). We find that population and GDP growth alone are expected to lead to significant future increases in exposure, and this change may be exacerbated in the future by climate change.
The rate of bubble growth in a superheated liquid in pool boiling
NASA Astrophysics Data System (ADS)
Abdollahi, Mohammad Reza; Jafarian, Mehdi; Jamialahmadi, Mohammad
2017-12-01
A semi-empirical model for estimating the rate of bubble growth in nucleate pool boiling is presented, incorporating a new equation for the temperature history of the bubble in the bulk liquid. The conservation equations of energy, mass, and momentum were first derived and solved analytically. The resulting analytical model predicts that the bubble radius grows as √t·erf(N√t), whereas previous studies have mainly correlated the growth rate with √t alone. The analytical solutions were then used to develop a new semi-empirical equation: the solutions were first non-dimensionalised, and experimental data available in the literature were then applied to tune the dimensionless coefficients appearing in the equation. Finally, the reliability of the proposed semi-empirical model was assessed by comparing its predictions with experimental data from the literature that were not used in tuning the dimensionless parameters. The model predictions were also compared with other models proposed in the literature. These comparisons show that the model yields more accurate predictions than previously proposed models, with a deviation of less than 10% over a wide range of operating conditions.
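Because erf saturates at 1, the quoted growth law √t·erf(N√t) recovers the classical √t scaling at late times while suppressing early growth; a few lines make this visible. The constant N is set arbitrarily here.

```python
import numpy as np
from scipy.special import erf

N = 2.0                               # model constant (arbitrary here)
t = np.array([0.01, 0.1, 1.0, 10.0])  # time (s)

classical = np.sqrt(t)                       # R ~ sqrt(t) (earlier models)
proposed = np.sqrt(t) * erf(N * np.sqrt(t))  # R ~ sqrt(t)*erf(N*sqrt(t))

for ti, rc, rp in zip(t, classical, proposed):
    print(f"t = {ti:5.2f}: sqrt(t) = {rc:.3f}, "
          f"sqrt(t)*erf(N*sqrt(t)) = {rp:.3f}")
```

The two curves converge as N√t grows, so the proposed law mainly changes the predicted early-time growth history.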
NASA Astrophysics Data System (ADS)
Schurer, Andrew; Hegerl, Gabriele
2016-04-01
The evaluation of the transient climate response (TCR) is of critical importance to policy makers as it can be used to calculate a simple estimate of the expected warming given predicted greenhouse gas emissions. Previous studies using optimal detection techniques have been able to estimate a TCR value from the historic record using simulations from some of the models which took part in the Coupled Model Intercomparison Project Phase 5 (CMIP5) but have found that others give unconstrained results. At least partly this is due to degeneracy between the greenhouse gas and aerosol signals which makes separation of the temperature response to these forcings problematic. Here we re-visit this important topic by using an adapted optimal detection analysis within a Bayesian framework. We account for observational uncertainty by the use of an ensemble of instrumental observations, and model uncertainty by combining the results from several different models. This framework allows the use of prior information which is found to help separate the response to the different forcings leading to a more constrained estimate of TCR.
Ice Age Sea Level Change on a Dynamic Earth
NASA Astrophysics Data System (ADS)
Austermann, J.; Mitrovica, J. X.; Latychev, K.; Rovere, A.; Moucha, R.
2014-12-01
Changes in global mean sea level (GMSL) are a sensitive indicator of climate variability during the current ice age. Reconstructions are largely based on local sea level records, and the mapping to GMSL is computed from simulations of glacial isostatic adjustment (GIA) on 1-D Earth models. We argue, using two case studies, that resolving important, outstanding issues in ice age paleoclimate requires a more sophisticated consideration of mantle structure and dynamics. First, we consider the coral record from Barbados, which is widely used to constrain global ice volume changes since the Last Glacial Maximum (LGM, ~21 ka). Analyses of the record using 1-D viscoelastic Earth models have estimated a GMSL change since LGM of ~120 m, a value at odds with analyses of other far-field records, which range from 130-135 m. We revisit the Barbados case using a GIA model that includes laterally varying Earth structure (Austermann et al., Nature Geo., 2013) and demonstrate that neglecting this structure, in particular the high-viscosity slab in the mantle linked to the subduction of the South American plate, has biased (low) previous estimates of GMSL change since LGM by ~10 m. Our analysis brings the Barbados estimate into accord with studies from other far-field sites. Second, we revisit estimates of GMSL during the mid-Pliocene warm period (MPWP, ~3 Ma), which was characterized by temperatures 2-3°C higher than present. The ice volume deficit during this period is a source of contention, with estimates ranging from 0-40 m GMSL equivalent. We argue that refining estimates of ice volume during the MPWP requires a correction for mantle flow induced dynamic topography (DT; Rowley et al., Science, 2013), a signal neglected in previous studies of ice age sea level change. We present estimates of GIA- and DT-corrected elevations of MPWP shorelines from the U.S. east coast, Australia and South Africa in an attempt to reconcile these records with a single GMSL value.
Kowalski, Amanda
2015-01-01
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117
Assessing Forest NPP: BIOME-BGC Predictions versus BEF Derived Estimates
NASA Astrophysics Data System (ADS)
Hasenauer, H.; Pietsch, S. A.; Petritsch, R.
2007-05-01
Forest productivity has always been a major issue within sustainable forest management. While in the past terrestrial forest inventory data have been the major source for assessing forest productivity, recent developments in ecosystem modeling offer an alternative approach using ecosystem models such as Biome-BGC to estimate Net Primary Production (NPP). In this study we compare two terrestrially driven approaches for assessing NPP: (i) estimates from a species-specific adaptation of the biogeochemical ecosystem model BIOME-BGC calibrated for Alpine conditions; and (ii) NPP estimates derived from inventory data using biomass expansion factors (BEF). The forest inventory data come from 624 sample plots across Austria and consist of repeated individual tree observations and include growth as well as soil and humus information. These locations are covered with spruce, beech, oak, pine and larch stands, thus addressing the main Austrian forest types. 144 locations were previously used in a validation effort to produce species-specific parameter estimates of the ecosystem model. The remaining 480 sites are from the Austrian National Forest Soil Survey carried out at the Federal Research and Training Centre for Forests, Natural Hazards and Landscape (BFW). Using diameter at breast height (dbh) and height (h), volume and subsequently biomass of individual trees were calculated, aggregated for the whole forest stand, and compared with the model output. Regression analyses were performed for both volume and biomass estimates.
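The inventory-based side of the comparison follows a standard chain: stem volume from dbh and height, whole-tree biomass via wood density and a biomass expansion factor, and productivity from the increment between successive inventories. The sketch below uses invented coefficients (form factor, density, BEF) purely for illustration; it is not the BFW procedure.

```python
import math

def tree_biomass(dbh_cm, height_m, form_factor=0.5, density=450.0, bef=1.3):
    """Rough single-tree biomass (kg) from inventory measurements.
    form_factor, density (kg/m^3), and bef are hypothetical placeholders."""
    basal_area = math.pi * (dbh_cm / 200.0) ** 2        # m^2
    stem_volume = form_factor * basal_area * height_m   # m^3 stem volume
    return bef * density * stem_volume                  # expand to whole tree

# Productivity proxy: biomass increment between two inventory dates
b1 = tree_biomass(dbh_cm=28.0, height_m=22.0)
b2 = tree_biomass(dbh_cm=30.5, height_m=23.0)
print(f"biomass {b1:.0f} -> {b2:.0f} kg, increment {b2 - b1:.0f} kg")
```

Summing such increments over all trees on a plot gives the stand-level quantity that can be set against the ecosystem model's NPP output.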
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
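The MCMC step in this kind of analysis is typically a random-walk Metropolis sampler over transit parameters. The toy below fits only a box-transit depth to synthetic data, which is far simpler than a limb-darkened transit model, but it shows where the realistic error estimates come from: the spread of the posterior samples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic light curve: unit flux with a box transit of depth 0.01
flux = np.ones(500)
flux[220:280] -= 0.01
flux += rng.normal(0.0, 0.002, flux.size)
in_transit = np.zeros(flux.size, bool); in_transit[220:280] = True
sigma = 0.002

def loglike(depth):
    model = np.where(in_transit, 1.0 - depth, 1.0)
    return -0.5 * np.sum((flux - model) ** 2) / sigma ** 2

# Random-walk Metropolis over the single depth parameter
depth, ll, samples = 0.005, None, []
ll = loglike(depth)
for _ in range(20_000):
    prop = depth + rng.normal(0.0, 5e-4)
    ll_prop = loglike(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # symmetric proposal
        depth, ll = prop, ll_prop
    samples.append(depth)

post = np.array(samples[5_000:])               # drop burn-in
print(f"depth = {post.mean():.4f} +/- {post.std():.4f}")
```

A full analysis samples period, epoch, radius ratio, inclination, and limb-darkening coefficients jointly, but the accept/reject mechanics are the same.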
Application of latent variable model in Rosenberg self-esteem scale.
Leung, Shing-On; Wu, Hui-Ping
2013-01-01
Latent Variable Models (LVM) are applied to the Rosenberg Self-Esteem Scale (RSES). Parameter estimates automatically take negative signs, hence no recoding is necessary for negatively scored items. Bad items can be located through parameter estimates, item characteristic curves, and other measures. Two factors are extracted: one on self-esteem and the other on the tendency to take moderate views, with the latter not often being covered in previous studies. A goodness-of-fit measure based on two-way margins is used, but more work is needed. Results show that the scaling provided by models with a more formal statistical grounding correlates highly with the conventional method, which may provide justification for the usual practice.
New segregation analysis of panic disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieland, V.J.; Fyer, A.J.; Chapman, T.
1996-04-09
We performed simple segregation analyses of panic disorder using 126 families of probands with DSM-III-R panic disorder who were ascertained for a family study of anxiety disorders at an anxiety disorders research clinic. We present parameter estimates for dominant, recessive, and arbitrary single major locus models without sex effects, as well as for a nongenetic transmission model, and compare these models to each other and to models obtained by other investigators. We rejected the nongenetic transmission model when comparing it to the recessive model. Consistent with some previous reports, we find comparable support for dominant and recessive models, and in both cases estimate nonzero phenocopy rates. The effect of restricting the analysis to families of probands without any lifetime history of comorbid major depression (MDD) was also examined. No notable differences in parameter estimates were found in that subsample, although the power of that analysis was low. Consistency between the findings in our sample and in another independently collected sample suggests the possibility of pooling such samples in the future in order to achieve the necessary power for more complex analyses. 32 refs., 4 tabs.
Thompson, Christin Ann; Hay, Joel W
2015-07-01
More research is needed on the health effects of marijuana use. Results of previous studies indicate that marijuana could alleviate certain factors of metabolic syndrome, such as obesity. Data on 6281 persons from National Health and Nutrition Examination Survey from 2005 to 2012 were used to estimate the effect of marijuana use on cardiometabolic risk factors. The reliability of ordinary least squares (OLS) regression models was tested by replacing marijuana use as the risk factor of interest with alcohol and carbohydrate consumption. Instrumental variable methods were used to account for the potential endogeneity of marijuana use. OLS models show lower fasting insulin, insulin resistance, body mass index, and waist circumference in users compared with nonusers. However, when alcohol and carbohydrate intake substitute for marijuana use in OLS models, similar metabolic benefits are estimated. The Durbin-Wu-Hausman tests provide evidence of endogeneity of marijuana use in OLS models, but instrumental variables models do not yield significant estimates for marijuana use. These findings challenge the robustness of OLS estimates of a positive relationship between marijuana use and fasting insulin, insulin resistance, body mass index, and waist circumference. Copyright © 2015 Elsevier Inc. All rights reserved.
Probabilistic estimates of drought impacts on agricultural production
NASA Astrophysics Data System (ADS)
Madadgar, Shahrbanou; AghaKouchak, Amir; Farahmand, Alireza; Davis, Steven J.
2017-08-01
Increases in the severity and frequency of drought in a warming climate may negatively impact agricultural production and food security. Unlike previous studies that have estimated agricultural impacts of climate condition using single-crop yield distributions, we develop a multivariate probabilistic model that uses projected climatic conditions (e.g., precipitation amount or soil moisture) throughout a growing season to estimate the probability distribution of crop yields. We demonstrate the model by an analysis of the historical period 1980-2012, including the Millennium Drought in Australia (2001-2009). We find that precipitation and soil moisture deficit in dry growing seasons reduced the average annual yield of the five largest crops in Australia (wheat, broad beans, canola, lupine, and barley) by 25-45% relative to the wet growing seasons. Our model can thus produce region- and crop-specific agricultural sensitivities to climate conditions and variability. Probabilistic estimates of yield may help decision-makers in government and business to quantitatively assess the vulnerability of agriculture to climate variations. We develop a multivariate probabilistic model that uses precipitation to estimate the probability distribution of crop yields. The proposed model shows how the probability distribution of crop yield changes in response to droughts. During Australia's Millennium Drought precipitation and soil moisture deficit reduced the average annual yield of the five largest crops.
Microstructure development in Kolmogorov, Johnson-Mehl, and Avrami nucleation and growth kinetics
NASA Astrophysics Data System (ADS)
Pineda, Eloi; Crespo, Daniel
1999-08-01
A statistical model with the ability to evaluate the microstructure developed in nucleation and growth kinetics is built in the framework of the Kolmogorov, Johnson-Mehl, and Avrami theory. A populational approach is used to compute the observed grain-size distribution. The impingement process which delays grain growth is analyzed, and the effective growth rate of each population is estimated considering the previous grain history. The proposed model is integrated for a wide range of nucleation and growth protocols, including constant nucleation, pre-existing nuclei, and intermittent nucleation with interface or diffusion-controlled grain growth. The results are compared with Monte Carlo simulations, giving quantitative agreement even in cases where previous models fail.
A hybrid double-observer sightability model for aerial surveys
Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine
2013-01-01
Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
Petrie, Joshua G; Eisenberg, Marisa C; Ng, Sophia; Malosh, Ryan E; Lee, Kyu Han; Ohmit, Suzanne E; Monto, Arnold S
2017-12-15
Household cohort studies are an important design for the study of respiratory virus transmission. Inferences from these studies can be improved through the use of mechanistic models to account for household structure and risk as an alternative to traditional regression models. We adapted a previously described individual-based transmission hazard (TH) model and assessed its utility for analyzing data from a household cohort maintained in part for study of influenza vaccine effectiveness (VE). Households with ≥4 individuals, including ≥2 children <18 years of age, were enrolled and followed during the 2010-2011 influenza season. VE was estimated in both TH and Cox proportional hazards (PH) models. For each individual, TH models estimated hazards of infection from the community and each infected household contact. Influenza A(H3N2) infection was laboratory-confirmed in 58 (4%) subjects. VE estimates from both models were similarly low overall (Cox PH: 20%, 95% confidence interval: -57, 59; TH: 27%, 95% credible interval: -23, 58) and highest for children <9 years of age (Cox PH: 40%, 95% confidence interval: -49, 76; TH: 52%, 95% credible interval: 7, 75). VE estimates were robust to model choice, although the ability of the TH model to accurately describe transmission of influenza presents continued opportunity for analyses. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.
Robinson, John D; Hall, David W; Wares, John P
2013-05-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
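The mechanics of ABC rejection described here fit in a few lines: draw parameters from the prior, simulate data, reduce to summary statistics, and keep the draws closest to the observed statistics in Euclidean space. The sketch below uses a toy normal-mean model, not the metapopulation coalescent.

```python
import numpy as np

rng = np.random.default_rng(11)

observed = rng.normal(2.0, 1.0, size=100)      # pretend field data
s_obs = np.array([observed.mean(), observed.std()])

n_sims, keep = 100_000, 1_000
theta = rng.uniform(-5.0, 5.0, n_sims)         # prior draws for the mean
# Simulate a dataset per draw and reduce it to summary statistics
sims = rng.normal(theta[:, None], 1.0, (n_sims, 100))
s_sim = np.column_stack([sims.mean(axis=1), sims.std(axis=1)])

dist = np.linalg.norm(s_sim - s_obs, axis=1)   # Euclidean, as in the study
posterior = theta[np.argsort(dist)[:keep]]     # closest draws ~ posterior
print(f"ABC posterior mean {posterior.mean():.2f} "
      f"(95% credible interval {np.percentile(posterior, 2.5):.2f}, "
      f"{np.percentile(posterior, 97.5):.2f})")
```

Model selection works the same way: simulate under each competing model and read the posterior model probability off the share of each model among the retained closest draws.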
Distance measures and optimization spaces in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.
2015-01-01
Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.
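The two distance measures compared in this study are easy to state concretely. Below is a small sketch of the Aitchison distance (differences of centred log-ratios) and one common symmetrised form of the Kullback-Leibler distance used in QFASA-style analyses; the four-component signatures are invented for illustration (real signatures span dozens of fatty acids):

```python
import numpy as np

def aitchison(x, y):
    """Aitchison distance between two compositional signatures."""
    lx, ly = np.log(x), np.log(y)
    cx, cy = lx - lx.mean(), ly - ly.mean()   # centred log-ratio transform
    return np.sqrt(np.sum((cx - cy) ** 2))

def kl_symmetric(x, y):
    """Symmetrised Kullback-Leibler distance used in QFASA-style analyses."""
    return np.sum((x - y) * np.log(x / y))

# Toy predator and candidate-prey signatures: strictly positive, summing to 1.
pred = np.array([0.40, 0.30, 0.20, 0.10])
prey = np.array([0.35, 0.25, 0.25, 0.15])

print(aitchison(pred, prey), kl_symmetric(pred, prey))
```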
A Simultaneous Equation Demand Model for Block Rates
NASA Astrophysics Data System (ADS)
Agthe, Donald E.; Billings, R. Bruce; Dobra, John L.; Raffiee, Kambiz
1986-01-01
This paper examines the problem of simultaneous-equations bias in estimation of the water demand function under an increasing block rate structure. The Hausman specification test is used to detect the presence of simultaneous-equations bias arising from correlation of the price measures with the regression error term in the results of a previously published study of water demand in Tucson, Arizona. An alternative simultaneous equation model is proposed for estimating the elasticity of demand in the presence of block rate pricing structures and availability of service charges. This model is used to reestimate the price and rate premium elasticities of demand in Tucson, Arizona for both the usual long-run static model and for a simple short-run demand model. The results from these simultaneous equation models are consistent with a priori expectations and are unbiased.
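The simultaneity problem and its standard fix can be illustrated with a two-stage least-squares (2SLS) sketch. Everything below is synthetic: under block rates the marginal price a household faces depends on its own consumption, so price is correlated with the demand error, OLS is inconsistent, and instrumenting price with exogenous rate-schedule variables restores a consistent estimate; a Hausman-type check then contrasts the two coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: the marginal price under block rates depends on consumption,
# so price co-moves with the demand error u and OLS is inconsistent.
income = rng.normal(10.0, 2.0, n)                  # exogenous demand shifter
rate_block = rng.integers(1, 4, n).astype(float)   # instrument: rate-schedule position
u = rng.normal(0.0, 1.0, n)
price = 1.0 + 0.5 * rate_block + 0.3 * u           # endogenous regressor
quantity = 5.0 - 0.8 * price + 0.2 * income + u    # structural demand equation

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), price, income])
Z = np.column_stack([np.ones(n), rate_block, income])   # instruments + exogenous vars

# Stage 1: project the endogenous price onto the instrument set.
price_hat = Z @ ols(Z, price)
X2 = np.column_stack([np.ones(n), price_hat, income])

beta_ols = ols(X, quantity)     # price coefficient biased upward (sign can even flip)
beta_2sls = ols(X2, quantity)   # consistent; a Hausman-type test contrasts the two
print("OLS:", round(beta_ols[1], 2), " 2SLS:", round(beta_2sls[1], 2))
```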
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
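For an ideal photon-counting detector the observations form an inhomogeneous Poisson process, and the ML arrival-time estimate maximises the log-likelihood over the delay. A toy Python sketch with an assumed Gaussian pulse shape and invented rates; when the pulse sits well inside the observation window, the integral term of the Poisson log-likelihood is nearly constant in the delay, so only the sum of log-intensities matters:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 10.0          # observation window (ns)
tau_true = 4.2    # true pulse arrival time
s, b, w = 50.0, 2.0, 0.5   # pulse amplitude, background rate, pulse width

def intensity(t, tau):
    """Photon arrival rate: background plus a Gaussian pulse at delay tau."""
    return b + s * np.exp(-0.5 * ((t - tau) / w) ** 2)

# Simulate an inhomogeneous Poisson process by thinning a homogeneous one.
lam_max = b + s
n_cand = rng.poisson(lam_max * T)
t_cand = rng.uniform(0, T, n_cand)
arrivals = t_cand[rng.uniform(0, lam_max, n_cand) < intensity(t_cand, tau_true)]

# ML estimate: maximise the sum of log-intensities over a grid of delays.
taus = np.linspace(1.0, 9.0, 2000)
loglik = [np.log(intensity(arrivals, tau)).sum() for tau in taus]
print("tau_hat =", taus[int(np.argmax(loglik))])
```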
Lee, Peter N; Fry, John S; Thornton, Alison J
2014-02-01
We attempted to quantify the decline in stroke risk following smoking cessation using the negative exponential model, with methodology previously employed for ischaemic heart disease (IHD). We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, the time since quitting at which the excess risk falls to half that of a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where model-fitting was problematic, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, with a combined estimate of H of 4.78 (95%CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with a random-effects estimate of 3.08 (1.32-7.16) years. Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, owing to its weaker association with smoking.
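The negative exponential model itself is compact: the excess relative risk halves every H years after quitting, RR(t) = 1 + (RR0 - 1)·2^(-t/H). A sketch of fitting H to one hypothetical block of relative risks (the numbers are invented, not one of the study's 22 blocks):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical block: relative risks by time since quitting (years), with a
# current-smoker RR of 1.8 at t = 0 and a never-smoker RR of 1.0.
t = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 20.0])
rr = np.array([1.80, 1.65, 1.45, 1.30, 1.12, 1.03])

def neg_exp(t, rr0, half_life):
    """Excess risk halves every `half_life` years after quitting."""
    return 1.0 + (rr0 - 1.0) * 0.5 ** (t / half_life)

(rr0_hat, H_hat), cov = curve_fit(neg_exp, t, rr, p0=(1.8, 5.0))
print(f"half-life H = {H_hat:.2f} years")
```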
A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data
NASA Technical Reports Server (NTRS)
Barnes, J. R.
1993-01-01
Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent Earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with Earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix.
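The core of the approach, fitting several candidate probability density functions by maximum likelihood and choosing among them with an information criterion, can be sketched briefly. The paper supplies R code; the version below is a Python stand-in with simulated depth data and an assumed candidate set (gamma, lognormal, Weibull):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical depths (m) at locations used by juvenile salmon.
depths = rng.gamma(shape=3.0, scale=0.25, size=200)

candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}

# Fit each candidate density by maximum likelihood and rank by AIC.
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)   # fix the location at zero for depths
    ll = dist.logpdf(depths, *params).sum()
    k = len(params) - 1                 # loc was fixed, not estimated
    print(name, "AIC =", round(2 * k - 2 * ll, 1))
```

A selected density is then typically rescaled to a maximum of one so that it can serve directly as the HSC curve.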
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
NASA Astrophysics Data System (ADS)
Seto, S.; Takahashi, T.
2017-12-01
In the 2011 Tohoku earthquake and tsunami disaster, delays in understanding the damage situation increased human losses. To address this problem, it is important to identify severely damaged areas quickly. Tsunami numerical modeling is useful for estimating damage, and the accuracy of a simulation depends on the tsunami source. Seto and Takahashi (2017) proposed a method to estimate a characterized tsunami source model using the limited observed data of GPS buoys. The model consists of a large slip zone (LSZ), a super-large slip zone (SLSZ) and a background rupture zone (BZ), as the Cabinet Office, Government of Japan (hereafter COGJ) reported after the Tohoku tsunami. The method begins by assuming a rectangular fault model based on the seismic magnitude and hypocenter reported immediately after an earthquake. Tsunami propagation is simulated numerically from this fault model, which is then improved by repeatedly comparing the computed data with the observed data. The comparison uses the correlation coefficient and the regression coefficient, calculated between the observed and computed tsunami wave profiles, as indexes; the repetition drives both coefficients toward 1.0, which improves the precision of the fault model. However, a recognized limitation was that the model could not represent a complicated tsunami source shape. In this study, we propose an improved model that can. COGJ (2012) assumed that the possible tsunami source region in the Nankai Trough consists of several thousand small faults, and we use these small faults to estimate the targeted tsunami source, allowing a complicated source geometry. The BZ is estimated first, and the LSZ and SLSZ are estimated next, as in the previous model. The proposed model using GPS buoys was applied to a tsunami scenario in the Nankai Trough; as a result, the final locations of the LSZ and SLSZ within the BZ are estimated well.
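The two fit indexes used in the iteration are straightforward to compute from paired wave records. A toy sketch with synthetic observed and computed waveforms, using the Pearson correlation and a through-the-origin regression slope (the abstract does not specify the regression form, so the slope convention here is an assumption):

```python
import numpy as np

def fit_indices(observed, computed):
    """Correlation and regression coefficients between observed and computed
    tsunami waveforms; both approach 1.0 as the fault model improves."""
    r = np.corrcoef(observed, computed)[0, 1]
    # Regression coefficient of observed on computed, through the origin.
    b = np.dot(observed, computed) / np.dot(computed, computed)
    return r, b

t = np.linspace(0, 3600, 721)                      # 1 h of 5 s samples
obs = 0.8 * np.sin(2 * np.pi * t / 1200)           # toy GPS-buoy record (m)
comp = 0.7 * np.sin(2 * np.pi * (t - 60) / 1200)   # toy simulated record (m)
print(fit_indices(obs, comp))
```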
Tank waste remediation system baseline tank waste inventory estimates for fiscal year 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shelton, L.W., Westinghouse Hanford
1996-12-06
A set of tank-by-tank waste inventories is derived from historical waste models, flowsheet records, and analytical data to support the Tank Waste Remediation System flowsheet and retrieval sequence studies. Enabling assumptions and methodologies used to develop the inventories are discussed. These provisional inventories conform to previously established baseline inventories and are meant to serve as an interim basis until standardized inventory estimates are made available.
2011-09-01
…tectonically active regions such as the Middle East. For example, we previously applied the code to determine the crust and upper mantle structure… Multi-Objective Optimization (MOO) for Multiple Datasets: the primary goal of our current project is to develop a tool for estimating crustal structure that… can be used to obtain crustal velocity structures by modeling broadband waveform, receiver function, and surface wave dispersion data. The code has been…
Fahlman, A; Hooker, S K; Olszowka, A; Bostrom, B L; Jones, D R
2009-01-01
We developed a mathematical model to investigate the effect of lung compression and collapse (pulmonary shunt) on the uptake and removal of O2, CO2 and N2 in blood and tissue of breath-hold diving mammals. We investigated the consequences of pressure (diving depth) and respiratory volume on pulmonary shunt and gas exchange as pressure compressed the alveoli. The model showed good agreement with previous studies of measured arterial O2 tensions (PaO2) from freely diving Weddell seals and measured arterial and venous N2 tensions from captive elephant seals compressed in a hyperbaric chamber. Pulmonary compression resulted in a rapid spike in PaO2 and arterial CO2 tension, followed by cyclical variation with a periodicity determined by Qtot. The model showed that changes in diving lung volume are an efficient behavioural means to adjust the extent of gas exchange with depth. Differing models of lung compression and collapse depth caused major differences in blood and tissue N2 estimates. Our integrated modelling approach contradicted predictions from simple models, and emphasised the complex nature of physiological interactions between circulation, lung compression and gas exchange. Overall, our work suggests the need for caution in interpretation of previous model results based on assumed collapse depths and all-or-nothing lung collapse models.
Bellan, Steve E.; Dushoff, Jonathan; Galvani, Alison P.; Meyers, Lauren Ancel
2015-01-01
Background The infectivity of the HIV-1 acute phase has been directly measured only once, from a retrospectively identified cohort of serodiscordant heterosexual couples in Rakai, Uganda. Analyses of this cohort underlie the widespread view that the acute phase is highly infectious, even more so than would be predicted from its elevated viral load, and that transmission occurring shortly after infection may therefore compromise interventions that rely on diagnosis and treatment, such as antiretroviral treatment as prevention (TasP). Here, we re-estimate the duration and relative infectivity of the acute phase, while accounting for several possible sources of bias in published estimates, including the retrospective cohort exclusion criteria and unmeasured heterogeneity in risk. Methods and Findings We estimated acute phase infectivity using two approaches. First, we combined viral load trajectories and viral load-infectivity relationships to estimate infectivity trajectories over the course of infection, under the assumption that elevated acute phase infectivity is caused by elevated viral load alone. Second, we estimated the relative hazard of transmission during the acute phase versus the chronic phase (RHacute) and the acute phase duration (dacute) by fitting a couples transmission model to the Rakai retrospective cohort using approximate Bayesian computation. Our model fit the data well and accounted for characteristics overlooked by previous analyses, including individual heterogeneity in infectiousness and susceptibility and the retrospective cohort's exclusion of couples that were recorded as serodiscordant only once before being censored by loss to follow-up, couple dissolution, or study termination. Finally, we replicated two highly cited analyses of the Rakai data on simulated data to identify biases underlying the discrepancies between previous estimates and our own. From the Rakai data, we estimated RHacute = 5.3 (95% credibility interval [95% CrI]: 0.79–57) and dacute = 1.7 mo (95% CrI: 0.55–6.8). The wide credibility intervals reflect an inability to distinguish a long, mildly infectious acute phase from a short, highly infectious acute phase, given the 10-mo Rakai observation intervals. The total additional risk, measured as excess hazard-months attributable to the acute phase (EHMacute), can be estimated more precisely: EHMacute = (RHacute - 1) × dacute, and should be interpreted with respect to the 120 hazard-months generated by a constant untreated chronic phase infectivity over 10 y of infection. From the Rakai data, we estimated that EHMacute = 8.4 (95% CrI: -0.27 to 64). This estimate is considerably lower than previously published estimates, and consistent with our independent estimate from viral load trajectories, 5.6 (95% confidence interval: 3.3–9.1). We found that previous overestimates likely stemmed from failure to account for risk heterogeneity and bias resulting from the retrospective cohort study design. Our results reflect the interaction between the retrospective cohort exclusion criteria and high (47%) rates of censorship amongst incident serodiscordant couples in the Rakai study due to loss to follow-up, couple dissolution, or study termination. We estimated excess physiological infectivity during the acute phase from couples data, but not the proportion of transmission attributable to the acute phase, which would require data on the broader population's sexual network structure.
Conclusions Previous EHMacute estimates relying on the Rakai retrospective cohort data range from 31 to 141. Our results indicate that these are substantial overestimates of HIV-1 acute phase infectivity, biased by unmodeled heterogeneity in transmission rates between couples and by inconsistent censoring. Elevated acute phase infectivity is therefore less likely to undermine TasP interventions than previously thought. Heterogeneity in infectiousness and susceptibility may still play an important role in intervention success and deserves attention in future analyses. PMID:25781323
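The headline quantity EHMacute = (RHacute - 1) × dacute is a simple transform of the joint posterior, so its credible interval comes from propagating posterior samples. A sketch with lognormal stand-in samples centred on the reported medians (the real posteriors come from the ABC fit, not these invented distributions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior samples for RH_acute and d_acute (months); the study's
# actual posteriors come from ABC fitting of the couples transmission model.
rh = np.exp(rng.normal(np.log(5.3), 0.9, 100_000))
d = np.exp(rng.normal(np.log(1.7), 0.6, 100_000))

# Excess hazard-months attributable to the acute phase.
ehm = (rh - 1.0) * d
print("median:", np.median(ehm),
      "95% CrI:", np.percentile(ehm, [2.5, 97.5]))
```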
Estimation of flood-frequency characteristics of small urban streams in North Carolina
Robbins, J.C.; Pope, B.F.
1996-01-01
A statewide study was conducted to develop methods for estimating the magnitude and frequency of floods of small urban streams in North Carolina. This type of information is critical in the design of bridges, culverts and water-control structures, in the establishment of flood-insurance rates and flood-plain regulation, and for other uses by urban planners and engineers. Concurrent records of rainfall and runoff data collected in small urban basins were used to calibrate rainfall-runoff models. Historic rainfall records were used with the calibrated models to synthesize a long-term record of annual peak discharges. The synthesized record of annual peak discharges was used in a statistical analysis to determine flood-frequency distributions. These frequency distributions were used with distributions from previous investigations to develop a database for 32 small urban basins in the Blue Ridge-Piedmont, Sand Hills, and Coastal Plain hydrologic areas. The study basins ranged in size from 0.04 to 41.0 square miles. Data describing the size and shape of each basin, the level of urban development, and climate and rural flood characteristics also were included in the database. Estimation equations were developed by relating flood-frequency characteristics to basin characteristics in a generalized least-squares regression analysis. The most significant basin characteristics are drainage area, impervious area, and rural flood discharge. The model error and prediction errors for the estimating equations were less than those for the national flood-frequency equations previously reported. The resulting equations, which have prediction errors generally less than 40 percent, can be used to estimate flood-peak discharges for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals for small urban basins across the State, assuming negligible, sustainable, in-channel detention or basin storage.
Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer
NASA Astrophysics Data System (ADS)
Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.
2015-04-01
The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMSs) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4%, and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
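The upshot is a two-term uncertainty model for a fitted peak of n ions: a Poisson counting term and a constant-relative fitting term. The sketch below combines them in quadrature with the ~4% relative term estimated above; the quadrature form is the natural reading of the abstract, so treat it as illustrative rather than the paper's exact formula:

```python
import numpy as np

def peak_height_sigma(n, alpha=0.04):
    """Combined imprecision of a fitted peak height of n ions: Poisson counting
    (scales as n**0.5) plus a constant-relative fitting term (scales as n**1)."""
    return np.sqrt(n + (alpha * n) ** 2)

for n in [1e2, 1e4, 1e6]:
    sigma = peak_height_sigma(n)
    print(f"n={n:8.0f}  relative imprecision = {sigma / n:.4f}")
# At high signal the relative imprecision flattens near alpha (~4%), instead
# of falling as n**-0.5 as a Poisson-only model would predict.
```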
New Methodology for Estimating Fuel Economy by Vehicle Class
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling
2011-01-01
This work was carried out for the Office of Highway Policy Information to develop a new methodology to generate annual estimates of average fuel efficiency and the number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology takes a two-step approach. First, preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumption for the different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year's Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
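The reconciliation step is a small constrained optimization: stay close to the stock-model MPG values while forcing the implied fuel use to hit the published control total. A sketch with invented numbers for three vehicle classes (the real model runs over all Table VM-1 classes and further constraints):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: preliminary MPG from vehicle stock models, VMT by class
# (billion miles), and the published total fuel consumption (billion gallons).
mpg0 = np.array([23.0, 17.5, 7.0])      # cars, light trucks, combination trucks
vmt = np.array([1500.0, 1100.0, 180.0])
total_fuel = 155.0

def objective(mpg):
    # Stay close to the stock-model estimates (deviation-minimising step).
    return np.sum((mpg - mpg0) ** 2)

def fuel_gap(mpg):
    # Class fuel use must sum to the published control total.
    return np.sum(vmt / mpg) - total_fuel

res = minimize(objective, mpg0, constraints={"type": "eq", "fun": fuel_gap},
               bounds=[(1.0, None)] * 3)
print("reconciled MPG:", np.round(res.x, 2))
```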
NASA Astrophysics Data System (ADS)
Lunt, Mark; Rigby, Matt; Manning, Alistair; O'Doherty, Simon; Stavert, Ann; Stanley, Kieran; Young, Dickon; Pitt, Joseph; Bauguitte, Stephane; Allen, Grant; Helfter, Carole; Palmer, Paul
2017-04-01
The Greenhouse gAs Uk and Global Emissions (GAUGE) project aims to quantify the magnitude and uncertainty of key UK greenhouse gas emissions more robustly than previously achieved. Measurements of methane have been taken from a number of tall-tower and surface sites as well as mobile measurement platforms such as a research aircraft and a ferry providing regular transects off the east coast of the UK. Using the UK Met Office's atmospheric transport model, NAME, and a novel Bayesian inversion technique we present estimates of methane emissions from the UK from a number of different combinations of sites to show the robustness of the UK total emissions to network configuration. The impact on uncertainties will be discussed, focusing on the usefulness of the various measurement platforms for constraining UK emissions. We will examine the effects of observation selection and how a priori assumptions about model uncertainty can affect the emission estimates, even within a data-driven hierarchical inversion framework. Finally, we will show the impact of the resolution of the meteorology used to drive the NAME model on emissions estimates, and how to rationalise our understanding of the ability of transport models to represent reality.
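Stripped of its hierarchical layers, the core of such an inversion is Bayes' rule for a linear transport model y = Hx + ε with a Gaussian prior and Gaussian errors, which has a closed-form posterior. A toy sketch, with a random sensitivity matrix standing in for NAME footprints (the GAUGE inversion additionally places hyperpriors on the uncertainty terms, which this omits):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: x holds regional methane emissions, H maps emissions to mole
# fractions at the measurement sites (in practice from transport modelling).
n_regions, n_obs = 10, 200
H = rng.uniform(0, 1, (n_obs, n_regions))
x_true = rng.gamma(2.0, 1.0, n_regions)
y = H @ x_true + rng.normal(0, 0.5, n_obs)

# Gaussian prior and likelihood give a closed-form posterior.
x_prior = np.full(n_regions, 2.0)
P = np.eye(n_regions) * 1.0**2          # prior covariance
R = np.eye(n_obs) * 0.5**2              # model-measurement covariance

A = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)  # posterior cov
x_post = A @ (np.linalg.inv(P) @ x_prior + H.T @ np.linalg.inv(R) @ y)
print("posterior minus truth:", np.round(x_post - x_true, 2))
```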
A scenario and forecast model for Gulf of Mexico hypoxic area and volume
Scavia, Donald; Evans, Mary Anne; Obenour, Daniel R.
2013-01-01
For almost three decades, the relative size of the hypoxic region on the Louisiana-Texas continental shelf has drawn scientific and policy attention. During that time, both simple and complex models have been used to explore hypoxia dynamics and to provide management guidance relating the size of the hypoxic zone to key drivers. Throughout much of that development, analyses had to accommodate an apparent change in hypoxic sensitivity to loads and often cull observations due to anomalous meteorological conditions. Here, we describe an adaptation of our earlier, simple biophysical model, calibrated to revised hypoxic area estimates and new hypoxic volume estimates through Bayesian estimation. This application eliminates the need to cull observations and provides revised hypoxic extent estimates with uncertainties, corresponding to different nutrient loading reduction scenarios. We compare guidance from this model application, suggesting an approximately 62% nutrient loading reduction is required to reduce Gulf hypoxia to the Action Plan goal of 5,000 km2, to that of previous applications. In addition, we describe for the first time, the corresponding response of hypoxic volume. We also analyze model results to test for increasing system sensitivity to hypoxia formation, but find no strong evidence of such change.
Hwang, Jin-Ha; Lee, Deuck Hang; Ju, Hyunjin; Kim, Kang Su; Seo, Soo-Yeon; Kang, Joo-Won
2013-10-23
Recognizing that steel fibers can supplement the brittle tensile characteristics of concrete, many studies have been conducted on the shear performance of steel fiber reinforced concrete (SFRC) members. However, previous studies mostly focused on shear strength and proposed empirical shear strength equations based on their experimental results. This study therefore attempts to estimate the strains and stresses in steel fibers by considering their detailed characteristics in SFRC members, from which a more accurate estimation of the shear behavior and strength of SFRC members is possible, and the failure mode of the steel fibers can also be identified. Four shear behavior models for SFRC members are proposed, modified from the softened truss models for reinforced concrete members; they can estimate the contribution of the steel fibers to the total shear strength of an SFRC member. The performance of all the proposed models was evaluated against a large number of test results. The contribution of steel fibers to the shear strength varied from 5% to 50% according to their amount, and the optimal steel fiber volume fraction, in terms of shear performance, was estimated at 1%-1.5%.
The Impact of Water Loading on Estimates of Postglacial Decay Times in Hudson Bay
NASA Astrophysics Data System (ADS)
Han, H. K.; Gomez, N. A.
2016-12-01
Ongoing glacial isostatic adjustment (GIA) due to surface loading (ice and water) variations since the Last Glacial Maximum (LGM) has been contributing to sea level changes globally throughout the Holocene, especially in regions like Canada that were heavily glaciated during the LGM. The spatial and temporal distribution of GIA and relative sea level change are attributed to the ice history and the rheological structure of the solid Earth, both of which are uncertain. It has been shown that relative sea level curves in previously glaciated regions follow an exponential-like form, and that the postglacial decay times associated with that form are only weakly sensitive to the details of the ice loading history (Andrews 1970, Walcott 1980, Mitrovica & Peltier 1995). Postglacial decay time estimates may therefore be used to constrain the Earth's structure and improve GIA predictions. However, estimates of decay times in Hudson Bay in the literature differ significantly due to a number of sources of uncertainty and bias (Mitrovica et al. 2000). Previous decay time analyses have not considered the potential bias that surface loading associated with Holocene sea level changes can introduce in decay time estimates derived from nearby relative sea level observations. We explore the spatial patterns of postglacial decay time predictions in previously glaciated regions, and their sensitivity to ice and water loading history. We compute postglacial sea level changes over the last deglaciation, from 21 ka to the present, associated with the ICE5G (Peltier 2004) and ICE6G (Argus et al. 2014, Peltier et al. 2015) ice history models. We fit exponential curves to the modeled relative sea level changes, and compute maps of postglacial decay time predictions across North America and the Arctic. In addition, we decompose the modeled relative sea level changes into contributions from water and ice loading effects, and compute the impact of water loading redistribution since the LGM on present-day decay times. We show that Holocene water loading in Hudson Bay may introduce significant bias in decay time estimates, and we highlight locations where biases are minimized.
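Decay times are extracted by fitting an exponential form to each relative sea level curve. A sketch with an invented emergence history for a site near Hudson Bay (times in ka BP, so larger t is older; the fitted form and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical relative sea level history: times in ka BP, RSL in metres
# above present.
t_ka = np.array([8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0])
rsl = np.array([180.0, 140.0, 108.0, 84.0, 64.0, 50.0, 38.0, 30.0, 23.0])

def rsl_model(t, a, tau, c):
    """Exponential-like postglacial emergence with decay time tau (kyr)."""
    return a * np.exp(t / tau) + c

p, _ = curve_fit(rsl_model, t_ka, rsl, p0=(20.0, 4.0, 0.0))
print(f"decay time tau = {p[1]:.2f} kyr")
```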
The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process
ERIC Educational Resources Information Center
Murphy, Daniel L.; Pituch, Keenan A.
2009-01-01
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism (gradual passage opening, wall friction and leakage) for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which had previously been validated against experimental data in our earlier study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database
NASA Technical Reports Server (NTRS)
Hanke, Jeremy L.
2011-01-01
The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel
2010-01-30
In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating the post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with linear regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humour, using two different regression models: additive models (AM) and support vector machines (SVM), which offer more flexibility than the previously used linear regression. The results from both models are better than those published to date and can give a numerical expression of PMI, with confidence intervals and graphic support, within 20 min. The program also takes into account the cause of death.
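Of the two regression engines PMICALC offers, the SVM variant is the easier to sketch. Below is a Python stand-in using scikit-learn's SVR on synthetic vitreous-humour data (the trends and noise levels are invented; PMICALC itself is R code fitted to real case series):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(13)

# Hypothetical training data: [K+] (mmol/L), [Hx] (umol/L), [U] (mg/dL)
# versus known PMI in hours.
n = 120
pmi = rng.uniform(2, 40, n)
X = np.column_stack([
    5.5 + 0.17 * pmi + rng.normal(0, 0.4, n),     # potassium rises ~linearly
    20 + 8.0 * np.sqrt(pmi) + rng.normal(0, 6, n),
    30 + 0.1 * pmi + rng.normal(0, 3, n),
])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, pmi)
print("predicted PMI (h):", np.round(model.predict(X[:3]), 1),
      "true:", np.round(pmi[:3], 1))
```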
Estimating Single-Event Logic Cross Sections in Advanced Technologies
NASA Astrophysics Data System (ADS)
Harrington, R. C.; Kauppila, J. S.; Warren, K. M.; Chen, Y. P.; Maharrey, J. A.; Haeffner, T. D.; Loveless, T. D.; Bhuva, B. L.; Bounasser, M.; Lilja, K.; Massengill, L. W.
2017-08-01
Reliable estimation of the logic single-event upset (SEU) cross section is becoming increasingly important for predicting the overall soft error rate. As technology scales and single-event transient (SET) pulse widths shrink to the order of the setup-and-hold time of flip-flops, the probability of latching an SET as an SEU must be reevaluated. In this paper, previous assumptions about the relationship of SET pulse width to the probability of latching an SET are reconsidered, and a model for the transient latching probability is developed for advanced technologies. A method using the improved transient latching probability and SET data is used to predict the logic SEU cross section. The presented model has been used to estimate combinational logic SEU cross sections in 32-nm partially depleted silicon-on-insulator (SOI) technology given experimental heavy-ion SET data. Experimental SEU data show good agreement with the model presented in this paper.
NASA Astrophysics Data System (ADS)
Camacho Suarez, V. V.; Shucksmith, J.; Schellart, A.
2016-12-01
Analytical and numerical models can be used to represent the advection-dispersion processes governing the transport of pollutants in rivers (Fan et al., 2015; Van Genuchten et al., 2013). Simplifications, assumptions and parameter estimations in these models result in various uncertainties within the modelling process and estimations of pollutant concentrations. In this study, we explore both: 1) the structural uncertainty due to the one dimensional simplification of the Advection Dispersion Equation (ADE) and 2) the parameter uncertainty due to the semi empirical estimation of the longitudinal dispersion coefficient. The relative significance of these uncertainties has not previously been examined. By analysing both the relative structural uncertainty of analytical solutions of the ADE, and the parameter uncertainty due to the longitudinal dispersion coefficient via a Monte Carlo analysis, an evaluation of the dominant uncertainties for a case study in the river Chillan, Chile is presented over a range of spatial scales.
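Both uncertainty strands can be made concrete with the 1-D analytical ADE solution for an instantaneous point release, C(x,t) = M/(A sqrt(4*pi*D*t)) * exp(-(x-ut)^2/(4Dt)), and a Monte Carlo draw over the dispersion coefficient. All numbers below are invented; the lognormal spread stands in for the scatter among semi-empirical predictors of D:

```python
import numpy as np

def ade_1d(x, t, mass, area, u, D):
    """Analytical 1-D ADE solution for an instantaneous point release."""
    return (mass / (area * np.sqrt(4 * np.pi * D * t))
            * np.exp(-(x - u * t) ** 2 / (4 * D * t)))

rng = np.random.default_rng(21)
x, t = 500.0, 1800.0                 # observation point (m) and time (s)
mass, area, u = 1000.0, 12.0, 0.3    # g, m^2, m/s

# Parameter uncertainty: sample the longitudinal dispersion coefficient from
# a lognormal spread typical of semi-empirical predictors (order of magnitude).
D_samples = np.exp(rng.normal(np.log(5.0), 0.5, 10_000))
conc = ade_1d(x, t, mass, area, u, D_samples)
print("median:", np.median(conc), "90% band:", np.percentile(conc, [5, 95]))
```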
Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M
2012-10-01
With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM2.5), speciated PM2.5, and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM2.5 components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.
Maghrabi, Ali H A; McGuffin, Liam J
2017-07-03
Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
NASA Astrophysics Data System (ADS)
Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka
2017-07-01
This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models—presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, which is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (XY, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results, we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods.
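For orientation, the estimation problem the article studies can be stated schematically: low-rank factors drawn from priors, observed entrywise through a general output channel. A compact LaTeX rendering of the UV^T case (the notation is approximate, following the style of the AMP literature rather than quoting the paper):

```latex
% Constrained low-rank matrix estimation through a general output channel.
\begin{align}
  w_{ij} &= \frac{1}{\sqrt{N}} \sum_{k=1}^{r} u_{ik} v_{jk},
  \qquad u_i \sim P_u, \; v_j \sim P_v, \\
  Y_{ij} &\sim P_{\mathrm{out}}(\,\cdot \mid w_{ij}),
  \qquad \text{goal: estimate } U V^{\top} \text{ from } Y.
\end{align}
```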
NASA Astrophysics Data System (ADS)
Uieda, Leonardo; Barbosa, Valéria C. F.
2017-01-01
Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worst in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
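Bott's method is the engine that keeps the inversion cheap: perturb the Moho relief in proportion to the gravity residual and iterate. A 1-D toy sketch using a Bouguer-slab forward model in place of tesseroids, and omitting the smoothness regularization the paper adds (the 400 kg/m^3 density contrast is a typical assumed value, not the paper's estimate):

```python
import numpy as np

G = 6.674e-11    # gravitational constant (SI)
drho = 400.0     # Moho density contrast (kg/m^3), a typical assumption

def slab_effect(h):
    """Bouguer-slab forward model: gravity effect (mGal) of Moho relief h (m).
    (The published method uses tesseroids; a flat slab keeps the sketch short.)"""
    return 2 * np.pi * G * drho * h * 1e5    # 1 mGal = 1e-5 m/s^2

# Toy observed anomaly from a "true" Moho relief, plus noise.
rng = np.random.default_rng(2)
h_true = 3000.0 * np.exp(-np.linspace(-2, 2, 50) ** 2)
g_obs = slab_effect(h_true) + rng.normal(0, 0.5, 50)

# Bott's iteration: update the relief in proportion to the gravity residual.
h = np.zeros(50)
for _ in range(20):
    residual = g_obs - slab_effect(h)
    h += residual / (2 * np.pi * G * drho * 1e5)
print("max relief error (m):", np.abs(h - h_true).max())
```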
NASA Astrophysics Data System (ADS)
Ni, C.; Huang, Y.; Lu, C.
2012-12-01
Pumping-induced land subsidence events are typically found in coastal aquifers in Taiwan, especially in the areas of the lower alluvial fans. Previous investigations have recognized that aquifer deformation can be irreversible even if pumping is significantly reduced or stopped. Long-term monitoring projects on land subsidence in the Choshui alluvial fan in central Taiwan have improved the understanding of deformation in the aquifer system. To characterize the detailed land subsidence mechanism, this study develops an inverse numerical model to estimate deformation parameters such as the specific storage (Ss) and vertical hydraulic conductivity (Kv) of interbeds. Similar in concept to hydraulic tomography surveys (HTS), the developed model employs an iterative cokriging estimator to improve the accuracy of the estimated deformation parameters. A one-dimensional numerical example is employed to assess the accuracy of the developed inverse model, which is then applied to field-scale data from compaction monitoring wells (CMW) installed in the lower Choshui River fan. Results of the synthetic example show that the developed inverse model reproduces the predefined geologic features of the synthetic aquifer well. The model provides better estimates of Kv patterns and magnitudes; somewhat less detail is recovered for Ss, owing to the insensitivity of transient stresses at the specified sampling times. Without prior information from field measurements, the developed model, together with deformation measurements from CMW, can estimate the Kv and Ss fields with high spatial resolution.
A regional high-resolution carbon flux inversion of North America for 2004
NASA Astrophysics Data System (ADS)
Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.
2010-05-01
Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere (SiB3) model, both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker as well as many previous global inversions including the Transcom suite of inversions have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be where corrections to NEE imposed on forested regions on the east coast of the United States might be the same as those imposed on forests on the west coast of the United States while, in reality, there likely exist subtle differences in the two areas, both natural and anthropogenic. Our current inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) models is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimate of a 0.57 Pg/yr sink in North America. We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with regional anomalously high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
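The weekly correction step is a standard Kalman filter update. A toy sketch for one assimilation window, with a random matrix standing in for the RAMS/PCTM transport sensitivities and invented covariances:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy weekly update: the state x is a vector of multiplicative corrections to
# prior biosphere fluxes; H maps corrections to CO2 mixing ratios at towers.
n_flux, n_obs = 6, 40
x = np.ones(n_flux)                      # prior correction factors
P = np.eye(n_flux) * 0.25                # prior covariance
H = rng.uniform(0, 2, (n_obs, n_flux))   # transport sensitivities (ppm per unit)
R = np.eye(n_obs) * 1.0                  # observation error covariance (ppm^2)

x_true = 1.0 + rng.normal(0, 0.3, n_flux)
y = H @ x_true + rng.normal(0, 1.0, n_obs)

# Standard Kalman update for one assimilation window.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (y - H @ x)
P = (np.eye(n_flux) - K @ H) @ P
print("estimated corrections:", np.round(x, 2))
```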
Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.
2013-01-01
This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151
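The core fusion-and-conflict idea can be illustrated numerically. The sketch below does precision-weighted fusion of redundant Gaussian estimates and crudely down-weights a source that disagrees with the consensus; this illustrates the principle only, not the nMMF's population-code mechanism, and the threshold and demotion factor are arbitrary choices.

    import numpy as np

    def fuse_estimates(means, variances, conflict_threshold=3.0):
        # Precision-weighted fusion of redundant body-state estimates.
        # Sources far from the consensus (a crude conflict test) are
        # demoted before fusing.
        means = np.asarray(means, float)
        prec = 1.0 / np.asarray(variances, float)
        consensus = np.sum(prec * means) / np.sum(prec)
        z = np.abs(means - consensus) / np.sqrt(1.0 / prec)
        prec = np.where(z > conflict_threshold, prec * 0.1, prec)
        fused_var = 1.0 / np.sum(prec)
        fused_mean = fused_var * np.sum(prec * means)
        return fused_mean, fused_var

    # Three modalities reporting hand position; the third conflicts.
    print(fuse_estimates([0.10, 0.12, 0.60], [0.01, 0.02, 0.01]))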
A new model for yaw attitude of Global Positioning System satellites
NASA Technical Reports Server (NTRS)
Bar-Sever, Y. E.
1995-01-01
Proper modeling of the Global Positioning System (GPS) satellite yaw attitude is important in high-precision applications. A new model for the GPS satellite yaw attitude is introduced that constitutes a significant improvement over the previously available model in terms of efficiency, flexibility, and portability. The model is described in detail, and implementation issues, including the proper estimation strategy, are addressed. The performance of the new model is analyzed, and an error budget is presented. This is the first self-contained description of the GPS yaw attitude model.
Updated Global Burden of Cholera in Endemic Countries
Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.
2015-01-01
Background The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation of our previous estimate was that endemic and non-endemic countries were defined based on the countries' reported cholera cases. We overcame this limitation by using a spatial modelling technique to define endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percent of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using an inverse variance-weighted average of case-fatality rates (CFRs) from literature-based CFR estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3-4.0 million) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
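The burden arithmetic has two steps: cases are the at-risk population times an incidence rate, and deaths are cases times a pooled case-fatality rate, where pooling uses inverse-variance weighting. A minimal sketch with purely illustrative numbers (the study works country by country with literature-specific rates):

    import numpy as np

    at_risk_pop = 1.3e9            # people at risk in endemic countries
    incidence_rate = 2.2e-3        # cases per person-year (illustrative)
    cases = at_risk_pop * incidence_rate

    # Inverse-variance weighted average CFR from literature estimates.
    cfr = np.array([0.021, 0.038, 0.045])    # illustrative CFRs
    var = np.array([1e-5, 4e-5, 2.5e-5])     # their variances
    w = 1.0 / var
    cfr_pooled = np.sum(w * cfr) / np.sum(w)
    deaths = cases * cfr_pooled
    print(f"{cases:.2e} cases, {deaths:.2e} deaths")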
NASA Technical Reports Server (NTRS)
Smith, D. E.; Lemoine, F. G.; Fricke, S. K.
1994-01-01
We have estimated the mass of Phobos, Deimos, and Mars using the Viking Orbiter and Mariner 9 tracking data. We divided the data into 282 arcs and sorted the data by periapse height, by inclination, and by satellite. The data were processed with the GEODYN/SOLVE orbit determination programs, which have previously been used to analyze planetary tracking data. The a priori Mars gravity field applied in this study was the 50th degree and order GMM-1 (Goddard Mars Model-1) model. The subsets of data were carefully edited to remove any arcs with close encounters of less than 500 km with either Phobos or Deimos. Whereas previous investigators have used close flybys (less than 500 km) to estimate the satellite masses, we have attempted to estimate the masses of Phobos and Deimos from multiday arcs which only included more distant encounters. The subsets of data were further edited to eliminate spurious data near solar conjunction (Nov.-Dec. 1976 and January 1979). In addition, the Viking-1 data from Oct. through Dec. 1978 were also excluded because of the low periapse altitude (as low as 232 km) and thus high sensitivity to atmospheric drag.
Updated energy budgets for neural computation in the neocortex and cerebellum
Howarth, Clare; Gleeson, Padraig; Attwell, David
2012-01-01
The brain's energy supply determines its information processing power, and generates functional imaging signals. The energy used by the different subcellular processes underlying neural information processing has been estimated previously for the grey matter of the cerebral and cerebellar cortex. However, these estimates need reevaluating following recent work demonstrating that action potentials in mammalian neurons are much more energy efficient than was previously thought. Using this new knowledge, this paper provides revised estimates for the energy expenditure on neural computation in a simple model of the cerebral cortex and a detailed model of the cerebellar cortex. In the cerebral cortex, most signaling energy (50%) is used on postsynaptic glutamate receptors, 21% on action potentials, 20% on resting potentials, 5% on presynaptic transmitter release, and 4% on transmitter recycling. In the cerebellar cortex, excitatory neurons use 75% and inhibitory neurons 25% of the signaling energy, and most energy is used on information processing by non-principal neurons: Purkinje cells use only 15% of the signaling energy. The majority of cerebellar signaling energy is used on the maintenance of resting potentials (54%) and postsynaptic receptors (22%), while action potentials account for only 17% of the signaling energy use. PMID:22434069
NASA Technical Reports Server (NTRS)
Behrenfeld, Michael J.; Maranon, Emilio; Siegel, David A.; Hooker, Stanford B.
2000-01-01
In this report, we describe a new model (the 'PhotoAcc' model) for estimating changes in the light-saturated rate of chlorophyll-normalized phytoplankton carbon fixation (Pbmax). The model is based on measurements conducted during the Atlantic Meridional Transect studies and the Bermuda Time Series program. The PhotoAcc model explained 64% to 82% of the observed variability in Pbmax for our data set, whereas none of the Pbmax models published over the past 44 years explained any of the variance. The significance of this result is that a primary limiting factor in extracting ocean carbon fixation rates from satellite measurements of near-surface chlorophyll has been error in the estimation of Pbmax. Our new model should thus yield much improved calculations of oceanic photosynthesis and thus of the role of the oceans in the global carbon cycle.
Wu, Liviawati; Mould, Diane R; Perez Ruixo, Juan Jose; Doshi, Sameer
2015-10-01
A population pharmacokinetic pharmacodynamic (PK/PD) model describing the effect of epoetin alfa on hemoglobin (Hb) response in hemodialysis patients was developed. Epoetin alfa pharmacokinetics was described using a linear 2-compartment model. PK parameter estimates were similar to previously reported values. A maturation-structured cytokinetic model consisting of 5 compartments linked in a catenary fashion by first-order cell transfer rates following a zero-order input process described the Hb time course. The PD model described 2 subpopulations, one whose Hb response reflected epoetin alfa dosing and a second whose response was unrelated to epoetin alfa dosing. Parameter estimates from the PK/PD model were physiologically reasonable and consistent with published reports. Numerical and visual predictive checks using data from 2 studies were performed. The PK and PD of epoetin alfa were well described by the model. © 2015, The American College of Clinical Pharmacology.
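The catenary structure described, a zero-order input feeding five compartments in series with first-order transfer between them, can be written directly as a small ODE system. The sketch below uses invented rate constants purely to show the model's shape; it is not the fitted population model.

    import numpy as np
    from scipy.integrate import solve_ivp

    def catenary(t, y, k_in, k_tr):
        # Zero-order input into the first compartment; first-order
        # transfer (rate k_tr) between successive compartments and
        # out of the last one.
        dydt = np.empty_like(y)
        dydt[0] = k_in - k_tr * y[0]
        for i in range(1, len(y)):
            dydt[i] = k_tr * (y[i - 1] - y[i])
        return dydt

    sol = solve_ivp(catenary, (0, 100), np.zeros(5),
                    args=(10.0, 0.2),          # illustrative rates
                    t_eval=np.linspace(0, 100, 201))
    hb_proxy = sol.y[-1]   # last compartment ~ circulating response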
Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth
2017-10-01
The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of wet AMD by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate visual deterioration in wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant (p < 0.05) when regressed separately. When included without an interaction term, only the coefficient for VA in the BSE was significant (p = 0.04), but when an interaction term between VA in the BSE and WSE was included, only the constant term (mean TTO utility value) was significant, potentially a result of the collinearity between the VA of the two eyes. The lack of both formal model fit statistics from the GEE approach and theoretical knowledge to support the superiority of one model over another makes it difficult to select the best model. Limitations of this analysis arise from the potential influence of collinearity between the VA of both eyes, and from the use of contact lenses to reflect VA states when obtaining the original dataset. Whilst further research is required to elicit more accurate utility values for wet AMD, this novel regression analysis provides a possible source of utility values to allow future economic models to capture the quality-of-life impact of changes in VA in both eyes. Novartis Pharmaceuticals UK Limited.
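For readers who want to reproduce the style of analysis, a GEE with an exchangeable working correlation over repeated valuations per subject can be set up in a few lines. The sketch below uses simulated stand-in data and assumed coefficient values; only the model structure mirrors the description above.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated stand-in for the TTO dataset: repeated utility
    # valuations per subject under different simulated VA states.
    rng = np.random.default_rng(1)
    n, reps = 30, 5
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n), reps),
        "va_bse": rng.uniform(0.0, 1.0, n * reps),  # logMAR, better eye
        "va_wse": rng.uniform(0.3, 1.5, n * reps),  # logMAR, worse eye
    })
    df["utility"] = (0.9 - 0.25 * df.va_bse - 0.05 * df.va_wse
                     + rng.normal(0, 0.05, len(df)))

    model = smf.gee("utility ~ va_bse + va_wse", groups="subject",
                    data=df, cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())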
NASA Astrophysics Data System (ADS)
Gleason, C. J.; Wada, Y.; Wang, J.
2017-12-01
Declining gauging infrastructure and fractious water politics have decreased available information about river flows globally, especially in international river basins. Remote sensing and water balance modelling are frequently cited as potential solutions, but these techniques largely rely on the same declining gauge data to constrain or parameterize discharge estimates, creating a circular approach to estimating discharge that is inapplicable to ungauged basins. To address this, we here combine a discontinued gauge, remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) and Landsat data, and the PCR-GLOBWB hydrological model to estimate discharge for an ungauged time period for the Lower Nile (1978-present). Specifically, we first estimate initial discharges from 86 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the hydrologic model. Our tuning methodology is purposefully simple and can be easily applied to any model without the need for calibration/parameterization. The resulting tuned modelled hydrograph shows a large improvement in flow magnitude over previous modelled hydrographs, and validation of tuned monthly model output flows against the historical gauge yields an RMSE of 343 m3/s (33.7%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: modelled flows have a one- to two-month wet-season lag and a negative bias. More sophisticated model calibration and training (e.g. data assimilation) is needed to improve upon our results; however, the improvement achieved by coupling physical models and remote sensing is a promising first step and proof of concept toward future modelling of ungauged flows. This is especially true as massive cloud computing via Google Earth Engine makes our method easily applicable to any basin without current gauges. Finally, we purposefully do not offer prescriptive solutions for Nile management, and rather hope that the methods demonstrated herein can prove useful to river stakeholders in managing their own water.
[The effect of tobacco prices on consumption: a time series data analysis for Mexico].
Olivera-Chávez, Rosa Itandehui; Cermeño-Bazán, Rodolfo; de Miera-Juárez, Belén Sáenz; Jiménez-Ruiz, Jorge Alberto; Reynales-Shigematsu, Luz Myriam
2010-01-01
To estimate the price elasticity of the demand for cigarettes in Mexico based on data sources and a methodology different from the ones used in previous studies on the topic. Quarterly time series of consumption, income and price for the period 1994 to 2005 were used. A long-run demand model was estimated using Ordinary Least Squares (OLS) and the existence of a cointegration relationship was investigated. Also, a model using Dynamic Ordinary Least Squares (DOLS) was estimated to correct for potential endogeneity of the independent variables and autocorrelation of the residuals. DOLS estimates showed that a 10% increase in cigarette prices could reduce consumption by 2.5% (p<0.05) and increase government revenue by 16.11%. The results confirmed the effectiveness of taxes as an instrument for tobacco control in Mexico. An increase in taxes can be used to increase cigarette prices and therefore to reduce consumption and increase government revenue.
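In a log-log demand specification, the price coefficient is the elasticity directly, so the reported 2.5% fall for a 10% price rise corresponds to an elasticity near -0.25. A minimal sketch with simulated quarterly series and plain OLS; the paper's DOLS estimator additionally includes leads and lags of the differenced regressors to handle endogeneity.

    import numpy as np
    import statsmodels.api as sm

    # Illustrative quarterly series; the study's data are not shown here.
    rng = np.random.default_rng(2)
    T = 48
    log_price = np.cumsum(rng.normal(0.01, 0.02, T))
    log_income = np.cumsum(rng.normal(0.005, 0.01, T))
    log_consumption = (5.0 - 0.25 * log_price + 0.4 * log_income
                       + rng.normal(0, 0.03, T))

    X = sm.add_constant(np.column_stack([log_price, log_income]))
    ols = sm.OLS(log_consumption, X).fit()
    price_elasticity = ols.params[1]  # ~ -0.25: a 10% price rise cuts
    print(price_elasticity)           # consumption by ~2.5%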
Estimation of Carcinogenicity using Hierarchical Clustering and Nearest Neighbor Methodologies
Previously a hierarchical clustering (HC) approach and a nearest neighbor (NN) approach were developed to model acute aquatic toxicity end points. These approaches were developed to correlate the toxicity for large, noncongeneric data sets. In this study these approaches applie...
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
Global assessment of the effect of climate change on ammonia emissions from seabirds
NASA Astrophysics Data System (ADS)
Riddick, Stuart N.; Dragosits, Ulrike; Blackall, Trevor D.; Tomlinson, Sam J.; Daunt, Francis; Wanless, Sarah; Hallsworth, Stephen; Braban, Christine F.; Tang, Y. Sim; Sutton, Mark A.
2018-07-01
Seabird colonies alter the biogeochemistry of nearby ecosystems, while the associated emissions of ammonia (NH3) may cause acidification and eutrophication of finely balanced biomes. To examine the possible effects of future climate change on the magnitude and distribution of seabird NH3 emissions globally, a global seabird database was used as input to the GUANO model, a dynamic mass-flow process-based model that simulates NH3 losses from seabird colonies at an hourly resolution in relation to environmental conditions. Ammonia emissions calculated by the GUANO model were in close agreement with measured NH3 emissions across a wide range of climates. For the year 2010, the total global seabird NH3 emission is estimated at 82 [37-127] Gg yr-1. This is less than previously estimated using a simple temperature-dependent empirical model, mainly due to the inclusion of nitrogen wash-off from colonies during precipitation events in the GUANO model. High precipitation, especially between 40° and 60° S, results in total emissions for the penguin species that are 82% smaller than previously estimated, while for species found in dry tropical areas, emissions are 83-133% larger. Application of temperature anomalies for several IPCC scenarios for 2099 in the GUANO model indicated a predicted net increase in global seabird NH3 emissions of 27% (B1 scenario) and 39% (A2 scenario), compared with the 2010 estimates. At individual colonies, the net change was the result of the combined influences of temperature, precipitation and relative humidity change, with smaller effects of wind-speed changes. The largest increases in NH3 emissions (mean: 60% [486 to -50] increase; A2 scenario for 2099 compared with 2010) were found for colonies between 40°S and 65°N, and may lead to increased plant growth and decreased biodiversity by eliminating nitrogen-sensitive plant species. Only 7% of the seabird colonies assessed globally (mainly limited to the sub-polar Southern Ocean) were estimated to experience a reduction in NH3 emission (average: -18% [-50 to 0] reduction between 2010 and 2099, A2 scenario), where an increase in precipitation was found to more than offset the effect of rising temperatures.
Improving lidar turbulence estimates for wind energy
NASA Astrophysics Data System (ADS)
Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.
2016-09-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
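The two-stage structure, a physics-based correction followed by a machine-learning stage trained on the remaining error against tower TI, can be sketched compactly. Everything below is illustrative: the volume-averaging factor, the feature set, and the random-forest choice are assumptions, not the actual corrections in the TI error model.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def physics_correction(ti_lidar, volume_factor=0.85):
        # Crude stand-in for a physics-based step: rescale lidar TI
        # for volume averaging (the real corrections are more involved).
        return ti_lidar / volume_factor

    # Train an ML stage on the residual between corrected-lidar and
    # (here simulated) tower TI.
    rng = np.random.default_rng(3)
    n = 500
    features = np.column_stack([
        rng.uniform(2, 20, n),      # mean wind speed (m/s)
        rng.uniform(-0.5, 0.5, n),  # stability proxy
        rng.uniform(0.02, 0.3, n),  # raw lidar TI
    ])
    ti_tower = (features[:, 2] * 0.9 + 0.01 * features[:, 1]
                + rng.normal(0, 0.01, n))
    ti_phys = physics_correction(features[:, 2])
    ml = RandomForestRegressor(n_estimators=100, random_state=0)
    ml.fit(features, ti_tower - ti_phys)       # learn the remaining error
    ti_final = ti_phys + ml.predict(features)  # corrected TI estimate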
Improving Lidar Turbulence Estimates for Wind Energy: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew
2016-10-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
NASA Astrophysics Data System (ADS)
Derieppe, M.; Bos, C.; de Greef, M.; Moonen, C.; de Senneville, B. Denis
2016-01-01
We have previously demonstrated the feasibility of monitoring ultrasound-mediated uptake of a hydrophilic model drug in real time with dynamic confocal fluorescence microscopy. In this study, we evaluate and correct the impact of photobleaching to improve the accuracy of pharmacokinetic parameter estimates. To model photobleaching of the fluorescent model drug SYTOX Green, a photobleaching process was added to the current two-compartment model describing cell uptake. After collection of the uptake profile, a second acquisition was performed once SYTOX Green had equilibrated, to evaluate the photobleaching rate experimentally. Photobleaching rates up to 5.0 × 10-3 s-1 were measured when applying power densities up to 0.2 W cm-2. By applying the three-compartment model, a model drug uptake rate of 6.0 × 10-3 s-1 was measured, independent of the applied laser power. The impact of photobleaching on uptake rate estimates measured by dynamic fluorescence microscopy was evaluated, and subsequent compensation improved the accuracy of pharmacokinetic parameter estimates in the cell population subjected to sonopermeabilization.
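One way to read the three-compartment extension: uptake moves dye from the extracellular pool into the cell at rate k_up, while photobleaching drains the intracellular fluorescent pool into a dark state at rate k_bleach. A minimal sketch using the two rates quoted above; the compartment structure here is a plausible reading, not the paper's exact formulation.

    import numpy as np
    from scipy.integrate import solve_ivp

    def uptake_with_bleaching(t, y, k_up, k_bleach):
        # y = [extracellular dye, intracellular fluorescent, bleached]
        ext, intra, bleached = y
        return [-k_up * ext,
                k_up * ext - k_bleach * intra,
                k_bleach * intra]

    k_up, k_bleach = 6.0e-3, 5.0e-3   # s^-1, values quoted above
    sol = solve_ivp(uptake_with_bleaching, (0, 1200), [1.0, 0.0, 0.0],
                    args=(k_up, k_bleach),
                    t_eval=np.linspace(0, 1200, 241))
    fluorescence = sol.y[1]
    # Fitting a two-compartment model to this signal would conflate
    # k_up and k_bleach; the third compartment separates the two rates.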
Improving Lidar Turbulence Estimates for Wind Energy
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; ...
2016-10-03
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P
2016-03-01
We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical-device-mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data were processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low-velocity regions in the sudden expansion section of the nozzle was reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from the velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
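Monte Carlo propagation from velocity uncertainty to wall shear stress can be illustrated with a one-point wall gradient, tau = mu * du/dy. The sketch below assumes a blood-analog viscosity and invented near-wall values; the study's actual scheme operates on full velocity fields with radial-basis-function derivatives.

    import numpy as np

    def wss_monte_carlo(u_near_wall, dy, sigma_u, mu=3.5e-3,
                        n_draws=10000, seed=4):
        # Propagate near-wall velocity uncertainty (sigma_u, e.g. from
        # the PIV correlation plane) into wall shear stress via a
        # simple one-point gradient; a crude stand-in for the paper's
        # scheme. mu is an assumed blood-analog viscosity (Pa s).
        rng = np.random.default_rng(seed)
        u_samples = rng.normal(u_near_wall, sigma_u, n_draws)
        tau_samples = mu * u_samples / dy
        return tau_samples.mean(), tau_samples.std()

    tau_mean, tau_std = wss_monte_carlo(u_near_wall=0.08,  # m/s
                                        dy=1.0e-4,         # m
                                        sigma_u=0.005)     # m/s
    print(f"wall shear stress = {tau_mean:.2f} +/- {tau_std:.2f} Pa")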
NASA Astrophysics Data System (ADS)
Nakayama, Tadanobu
2017-04-01
Recent research has shown that inland water, including rivers, lakes, and groundwater, may play some role in carbon cycling, although its contribution has remained uncertain due to the limited amount of reliable data available. In this study, the author developed an advanced model coupling eco-hydrology and the biogeochemical cycle (National Integrated Catchment-based Eco-hydrology (NICE)-BGC). This new model incorporates the complex coupling of the hydrologic and carbon cycles in terrestrial-aquatic linkages and the interplay between inorganic and organic carbon during the whole process of carbon cycling. The model could simulate both horizontal transports (export from land to inland water 2.01 ± 1.98 Pg C/yr and transport to the ocean 1.13 ± 0.50 Pg C/yr) and vertical fluxes (degassing 0.79 ± 0.38 Pg C/yr, and sediment storage 0.20 ± 0.09 Pg C/yr) in major rivers in good agreement with previous research, improving on previous estimates of carbon flux. The model results also showed that the global net land flux simulated by NICE-BGC (-1.05 ± 0.62 Pg C/yr) is a somewhat smaller carbon sink than that of the revised Lund-Potsdam-Jena Wetland Hydrology and Methane model (-1.79 ± 0.64 Pg C/yr) and previous estimates (-2.8 to -1.4 Pg C/yr). This is attributable to CO2 evasion and lateral carbon transport being explicitly included in the model, and the result suggests that most previous research has generally overestimated the accumulation of terrestrial carbon and underestimated the potential for lateral transport. The results further imply that the difference between inverse-technique and budget estimates can be explained to some extent by a net source from inland water. NICE-BGC should play an important role in the reevaluation of the greenhouse gas budget of the biosphere, the quantification of hot spots, and bridging the gap between top-down and bottom-up approaches to the global carbon budget.
NASA Astrophysics Data System (ADS)
Pratama, C.; Ito, T.; Sasajima, R.; Tabei, T.; Kimata, F.; Gunawan, E.; Ohta, Y.; Yamashina, T.; Ismail, N.; Muksin, U.; Maulida, P.; Meilano, I.; Nurdin, I.; Sugiyanto, D.; Efendi, J.
2017-12-01
Postseismic deformation following the 2012 Indian Ocean earthquake has been modeled by several studies (Han et al. 2015, Hu et al. 2016, Masuti et al. 2016). Although each study used a different method and dataset, the previous studies assumed significantly different Earth structures. Han et al. (2015) ignored the subducting slab beneath Sumatra, while Masuti et al. (2016) neglected the sphericity of the Earth. Hu et al. (2016) incorporated an elastic slab and a spherical Earth but used uniform rigidity in each layer of the model. As a result, the Han et al. (2015) model estimated a Maxwell viscosity one order of magnitude higher than Hu et al. (2016), and a Kelvin viscosity half an order of magnitude lower than the Masuti et al. (2016) model predicted. In the present study, we conduct a quantitative analysis of the effect of each heterogeneous geometry and parameter on rheology inference. We develop heterogeneous three-dimensional spherical-Earth finite element models. We investigate the effect of the subducting slab, the spherical Earth, and three-dimensional Earth rigidity on the estimated lithosphere-asthenosphere rheology beneath the Indian Ocean. A wide range of viscosity structures, from time-constant to time-dependent rheology, was chosen, as in previous studies. In order to evaluate actual displacement, we compared the model to Global Navigation Satellite System (GNSS) observations. We incorporate GNSS data from previous studies and introduce new GNSS sites, part of the Indonesian Continuously Operating Reference Stations (InaCORS) network located in Sumatra, that have not been used in previous analyses. As a preliminary result, we obtained the effects of the spherical Earth and the elastic slab under an assumed Burgers rheology. The model that incorporates the sphericity of the Earth requires a viscosity one-third of an order of magnitude lower than the model that neglects Earth curvature, and the model that includes the elastic slab requires a viscosity half an order of magnitude lower than the model that excludes it.
Constellation Program Life-cycle Cost Analysis Model (LCAM)
NASA Technical Reports Server (NTRS)
Prince, Andy; Rose, Heidi; Wood, James
2008-01-01
The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960's, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Lifecycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.
NASA Astrophysics Data System (ADS)
Jeong, Jina; Park, Eungyu; Shik Han, Weon; Kim, Kue-Young; Suk, Heejun; Beom Jo, Si
2018-07-01
A generalized water table fluctuation model based on precipitation was developed using a statistical conceptualization of unsaturated infiltration fluxes. A gamma distribution function was adopted as a transfer function due to its versatility in representing recharge rates with temporally dispersed infiltration fluxes, and a Laplace transformation was used to obtain an analytical solution. To prove the general applicability of the model, convergences with previous water table fluctuation models were shown as special cases. For validation, a few hypothetical cases were developed, where the applicability of the model to a wide range of unsaturated zone conditions was confirmed. For further validation, the model was applied to water table level estimations of three monitoring wells with considerably thick unsaturated zones on Jeju Island. The results show that the developed model represented the pattern of hydrographs from the two monitoring wells fairly well. The lag times from precipitation to recharge estimated from the developed system transfer function were found to agree with those from a conventional cross-correlation analysis. The developed model has the potential to be adopted for the hydraulic characterization of both saturated and unsaturated zones by being calibrated to actual data when extraneous and exogenous causes of water table fluctuation are limited. In addition, as it provides reference estimates, the model can be adopted as a tool for surveilling groundwater resources under hydraulically stressed conditions.
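The statistical conceptualization described, precipitation filtered through a gamma-distributed delay before it reaches the water table, amounts to a convolution. A minimal sketch with invented shape and scale parameters (mean lag = shape * scale, here about 45 days); the paper works with the analytical Laplace-transform solution rather than a numerical convolution.

    import numpy as np
    from scipy.stats import gamma

    def water_table_response(precip, shape, scale, dt=1.0):
        # Convolve a daily precipitation series with a gamma-pdf
        # transfer function to get a recharge-driven forcing signal.
        t = np.arange(0, 20 * shape * scale, dt)
        kernel = gamma.pdf(t, a=shape, scale=scale) * dt
        return np.convolve(precip, kernel)[: len(precip)]

    rng = np.random.default_rng(5)
    precip = rng.exponential(2.0, 365) * (rng.random(365) < 0.2)  # mm/day
    response = water_table_response(precip, shape=3.0, scale=15.0)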
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modeling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
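The event-level idea, that every detected photon contributes its own likelihood term, is easy to show for a mono-exponential decay in a finite TCSPC window (a truncated exponential). The sketch below fits the lifetime by maximum likelihood; the paper's Bayesian treatment additionally models the instrument response, which is omitted here.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_log_likelihood(tau, t, T):
        # Per-photon density for a mono-exponential decay observed in
        # a TCSPC window of length T (truncated exponential).
        return -np.sum(np.log((np.exp(-t / tau) / tau)
                              / (1 - np.exp(-T / tau))))

    rng = np.random.default_rng(6)
    true_tau, T = 2.5, 12.5            # ns, illustrative
    t = rng.exponential(true_tau, 5000)
    t = t[t < T]                       # keep events inside the window
    fit = minimize_scalar(neg_log_likelihood, bounds=(0.1, 10.0),
                          args=(t, T), method="bounded")
    print(fit.x)                       # ~2.5 ns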
Observational Constraints on the Water Vapor Feedback Using GPS Radio Occultations
NASA Astrophysics Data System (ADS)
Vergados, P.; Mannucci, A. J.; Ao, C. O.; Fetzer, E. J.
2016-12-01
The air refractive index at L-band frequencies depends on the air's density and water vapor content. Exploiting these relationships, we derive a theoretical model to infer the specific humidity response to surface temperature variations, dq/dTs, given knowledge of how the air refractive index and temperature vary with surface temperature. We validate this model using 1.2-1.6 GHz Global Positioning System Radio Occultation (GPS RO) observations from 2007 to 2010 at 250 hPa, where the water vapor feedback on surface warming is strongest. Current research indicates that GPS RO data sets can capture the amount of water vapor in very dry and very moist air more efficiently than other observing platforms, possibly suggesting larger water vapor feedback than previously known. Inter-comparing the dq/dTs among different data sets will provide us with additional constraints on the water vapor feedback. The dq/dTs estimation from GPS RO observations shows excellent agreement with previously published results and the responses estimated using Atmospheric Infrared Sounder (AIRS) and NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data sets. In particular, the GPS RO-derived dq/dTs is larger by 6% than that estimated using the AIRS data set. This agrees with past evidence that AIRS may be dry-biased in the upper troposphere. Compared to the MERRA estimations, the GPS RO-derived dq/dTs is 10% smaller, also agreeing with previous results that show that MERRA may have a wet bias in the upper troposphere. Because of their high sensitivity to fractional changes in water vapor, and their inherent long-term accuracy, current and future GPS RO observations show great promise in monitoring climate feedbacks and their trends.
Associations of self estimated workloads with musculoskeletal symptoms among hospital nurses
Ando, S.; Ono, Y.; Shimaoka, M.; Hiruta, S.; Hattori, Y.; Hori, F.; Takeuchi, Y.
2000-01-01
OBJECTIVES: To investigate the prevalence of neck, shoulder, and arm pain (NSAP) as well as low back pain (LBP) among hospital nurses, and to examine the association of work tasks and self estimated risk factors with NSAP and LBP. METHODS: A cross sectional study was carried out in a national university hospital in Japan. Full time registered nurses in the wards (n=314) were selected for analysis. The questionnaire was composed of items on demographic conditions, severity of workloads in actual tasks, self estimated risk factors for fatigue, and musculoskeletal pain in the previous month. Rate ratios (RRs) and 95% confidence intervals (95% CIs) were calculated by Cox's proportional hazards model to study the association of pain with variables related to work and demographic conditions. RESULTS: The prevalences of low back, shoulder, neck, and arm pain in the previous month were 54.7%, 42.8%, 31.3%, and 18.6%, respectively. The prevalence of musculoskeletal symptoms among hospital nurses was higher than in previous studies. In the Cox's models for LBP and NSAP, there were no significant associations between musculoskeletal pain and the items related to work and demographic conditions. The RRs for LBP tended to be relatively higher for "accepting emergency patients" and some actual tasks. Some items of self estimated risk factors for fatigue tended to have relatively higher RRs for LBP and NSAP. CONCLUSIONS: It was suggested that musculoskeletal pain among hospital nurses may have associations with some actual tasks and items related to work postures, work control, and work organisation. Further studies, however, are necessary, as clear evidence of this potential association was not shown in the study. Keywords: workloads; musculoskeletal pain; nurses PMID:10810105
NASA Astrophysics Data System (ADS)
Sanford, Ward E.
2017-03-01
The trend of decreasing permeability with depth was estimated in the fractured-rock terrain of the upper Potomac River basin in the eastern USA using model calibration on 200 water-level observations in wells and 12 base-flow observations in subwatersheds. Results indicate that permeability at the 1-10 km scale (for groundwater flowpaths) decreases by several orders of magnitude within the top 100 m of land surface. This depth range represents the transition from the weathered, fractured regolith into unweathered bedrock. This rate of decline is substantially greater than has been observed by previous investigators who have plotted in situ wellbore measurements versus depth. The difference is that regional water levels give information on kilometer-scale connectivity of the regolith and adjacent fracture networks, whereas in situ measurements give information on near-hole fractures and fracture networks. The approach taken was to calibrate model layer-to-layer ratios of hydraulic conductivity (LLKs) for each major rock type. Most rock types gave optimal LLK values of 40-60, where each layer was twice as thick as the one overlying it. Previous estimates of permeability with depth from deeper data showed less of a decline at <300 m than the regional modeling results. There was less certainty in the modeling results deeper than 200 m and for certain rock types where fewer water-level observations were available. The results have implications for improved understanding of watershed-scale groundwater flow and transport, such as for the timing of the migration of pollutants from the water table to streams.
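The layer-to-layer ratio translates into a conductivity-depth profile as follows. A minimal sketch assuming an invented surface conductivity and top-layer thickness, with each layer twice as thick as the one above and K reduced by the calibrated LLK factor per layer:

    import numpy as np

    def k_profile(k_surface, llk, n_layers, top_thickness=10.0):
        # Layer conductivities when K drops by a factor llk per layer
        # (optimal LLK ~ 40-60 for most rock types) and each layer is
        # twice as thick as the one above. Thicknesses are illustrative.
        K = k_surface / llk ** np.arange(n_layers)
        thickness = top_thickness * 2.0 ** np.arange(n_layers)
        depth_to_top = np.concatenate([[0.0], np.cumsum(thickness)[:-1]])
        return depth_to_top, thickness, K

    depth, dz, K = k_profile(k_surface=1e-5, llk=50.0, n_layers=4)
    # With these inputs, K falls by several orders of magnitude
    # within roughly the top 100 m, as the calibration suggests.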
Sanford, Ward E.
2017-01-01
The trend of decreasing permeability with depth was estimated in the fractured-rock terrain of the upper Potomac River basin in the eastern USA using model calibration on 200 water-level observations in wells and 12 base-flow observations in subwatersheds. Results indicate that permeability at the 1–10 km scale (for groundwater flowpaths) decreases by several orders of magnitude within the top 100 m of land surface. This depth range represents the transition from the weathered, fractured regolith into unweathered bedrock. This rate of decline is substantially greater than has been observed by previous investigators who have plotted in situ wellbore measurements versus depth. The difference is that regional water levels give information on kilometer-scale connectivity of the regolith and adjacent fracture networks, whereas in situ measurements give information on near-hole fractures and fracture networks. The approach taken was to calibrate model layer-to-layer ratios of hydraulic conductivity (LLKs) for each major rock type. Most rock types gave optimal LLK values of 40–60, where each layer was twice as thick as the one overlying it. Previous estimates of permeability with depth from deeper data showed less of a decline at <300 m than the regional modeling results. There was less certainty in the modeling results deeper than 200 m and for certain rock types where fewer water-level observations were available. The results have implications for improved understanding of watershed-scale groundwater flow and transport, such as for the timing of the migration of pollutants from the water table to streams.
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
Matter, A.; Falke, Jeffrey A.; López, J. Andres; Savereide, James W.
2018-01-01
Identification and protection of water bodies used by anadromous species are critical in light of increasing threats to fish populations, yet often challenging given budgetary and logistical limitations. Noninvasive, rapid‐assessment, sampling techniques may reduce costs and effort while increasing species detection efficiencies. We used an intrinsic potential (IP) habitat model to identify high‐quality rearing habitats for Chinook Salmon Oncorhynchus tshawytscha and select sites to sample throughout the Chena River basin, Alaska, for juvenile occupancy using an environmental DNA (eDNA) approach. Water samples were collected from 75 tributary sites in 2014 and 2015. The presence of Chinook Salmon DNA in water samples was assessed using a species‐specific quantitative PCR (qPCR) assay. The IP model predicted over 900 stream kilometers in the basin to support high‐quality (IP ≥ 0.75) rearing habitat. Occupancy estimation based on eDNA samples indicated that 80% and 56% of previously unsampled sites classified as high or low IP (IP < 0.75), respectively, were occupied. The probability of detection (p) of Chinook Salmon DNA from three replicate water samples was high (p = 0.76) but varied with drainage area (km2). A power analysis indicated high power to detect proportional changes in occupancy based on parameter values estimated from eDNA occupancy models, although power curves were not symmetrical around zero, indicating greater power to detect positive than negative proportional changes in occupancy. Overall, the combination of IP habitat modeling and occupancy estimation provided a useful, rapid‐assessment method to predict and subsequently quantify the distribution of juvenile salmon in previously unsampled tributary habitats. Additionally, these methods are flexible and can be modified for application to other species and in other locations, which may contribute towards improved population monitoring and management.
Blowing Snow Sublimation and Transport over Antarctica from 11 Years of CALIPSO Observations
NASA Technical Reports Server (NTRS)
Palm, Stephen P.; Kayetha, Vinay; Yang, Yuekui; Pauly, Rebecca
2017-01-01
Blowing snow processes commonly occur over the earth's ice sheets when the 10 m wind speed exceeds a threshold value. These processes play a key role in the sublimation and redistribution of snow, thereby influencing the surface mass balance. Prior field studies and modeling results have shown the importance of blowing snow sublimation and transport on the surface mass budget and hydrological cycle of high-latitude regions. For the first time, we present continent-wide estimates of blowing snow sublimation and transport over Antarctica for the period 2006-2016 based on direct observation of blowing snow events. We use an improved version of the blowing snow detection algorithm developed in previous work that uses atmospheric backscatter measurements obtained from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar aboard the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) satellite. The blowing snow events identified by CALIPSO and meteorological fields from MERRA-2 are used to compute the blowing snow sublimation and transport rates. Our results show that maximum sublimation occurs along and slightly inland of the coastline. This is contrary to the observed maximum blowing snow frequency, which occurs over the interior. The associated temperature and moisture reanalysis fields likely contribute to the spatial distribution of the maximum sublimation values. However, the spatial pattern of the sublimation rate over Antarctica is consistent with modeling studies and precipitation estimates. Overall, our results show that the 2006-2016 Antarctic average integrated blowing snow sublimation is about 393 ± 196 Gt yr-1, which is considerably larger than previous model-derived estimates. We find a maximum blowing snow transport of 5 Mt km-1 yr-1 over parts of East Antarctica and estimate that the average snow transport from continent to ocean is about 3.7 Gt yr-1. These continent-wide estimates are the first of their kind and can be used to help model and constrain the surface mass budget over Antarctica.
Modeling risk of occupational zoonotic influenza infection in swine workers.
Paccha, Blanca; Jones, Rachael M; Gibbs, Shawn; Kane, Michael J; Torremorell, Montserrat; Neira-Ramirez, Victor; Rabinowitz, Peter M
2016-08-01
Zoonotic transmission of influenza A virus (IAV) between swine and workers in swine production facilities may play a role in the emergence of novel influenza strains with pandemic potential. Guidelines to prevent transmission of influenza to swine workers have been developed, but there is a need for evidence-based decision-making about protective measures such as respiratory protection. A mathematical model was applied to estimate the risk of occupational IAV exposure to swine workers by contact and airborne transmission, and to evaluate the use of respirators to reduce transmission. The Markov model was used to simulate the transport and exposure of workers to IAV in a swine facility. A dose-response function was used to estimate the risk of infection. This approach is similar to methods previously used to estimate the risk of infection in human health care settings. This study uses concentrations of virus in air from field measurements collected during influenza outbreaks in commercial swine facilities and analyzed by polymerase chain reaction. It was found that spending 25 min working in a barn during an influenza outbreak in a swine herd could be sufficient to cause zoonotic infection in a worker. However, this risk estimate was sensitive to estimates of viral infectivity to humans. Wearing an excellent-fitting N95 respirator reduced this risk, but with high aerosol levels the predicted risk of infection remained high under certain assumptions. The results of this analysis indicate that, under the conditions studied, swine workers are at risk of zoonotic influenza infection, and the use of an N95 respirator could reduce such risk. These findings have implications for risk assessment and preventive programs targeting swine workers. The exact level of risk remains uncertain, since our model may have overestimated the viability or infectivity of IAV. Additionally, the potential for partial immunity in swine workers associated with repeated low-dose exposures or with previous infection with other influenza strains was not considered. Further studies should explore these uncertainties.
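A common form for such risk models is an exponential dose-response, risk = 1 - exp(-dose/k), with the respirator entering as a penetration factor on the inhaled dose. The sketch below uses placeholder values throughout (concentration, breathing rate, infectivity parameter, leakage); it shows the structure of the calculation, not the study's fitted numbers.

    import numpy as np

    def infection_risk(c_air, breathing_rate, minutes, k_infectivity,
                       respirator_penetration=1.0):
        # Exponential dose-response: risk = 1 - exp(-dose / k).
        # c_air: virus concentration in air (infectious units / m^3);
        # breathing_rate in m^3/h; k_infectivity: dose giving ~63% risk.
        # All values here are placeholders, not field measurements.
        dose = (c_air * breathing_rate * (minutes / 60.0)
                * respirator_penetration)
        return 1.0 - np.exp(-dose / k_infectivity)

    # 25 minutes in a barn, without and with a well-fitted N95
    # (~5% assumed leakage).
    print(infection_risk(2000, 1.5, 25, 1000))
    print(infection_risk(2000, 1.5, 25, 1000, respirator_penetration=0.05))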
Krueger, Hans; Krueger, Joshua; Koot, Jacqueline
2015-04-30
Tobacco smoking, excess weight and physical inactivity contribute substantially to the preventable disease burden in Canada. The purpose of this paper is to determine the potential reduction in economic burden if all provinces achieved prevalence rates of these three risk factors (RFs) equivalent to those of the province with the lowest rates, and to update and address a limitation noted in our previous model. We used a previously developed approach based on population attributable fractions to estimate the economic burden associated with these RFs. Sex-specific relative risk and age-/sex-specific prevalence data were used in the modelling. The previous model was updated using the most current data for developing resource allocation weights. In 2012, the prevalence of tobacco smoking, excess weight and physical inactivity was the lowest in British Columbia. If age- and sex-specific prevalence rates from BC were applied to populations living in the other provinces, the annual economic burden attributable to these three RFs would be reduced by $5.3 billion. Updating the model resulted in a considerable shift in economic burden from smoking to excess weight, with the estimated annual economic burden attributable to excess weight now 25% higher compared to that of tobacco smoking ($23.3 vs. $18.7 billion). Achieving RF prevalence rates equivalent to those of the province with the lowest rates would result in a 10% reduction in economic burden attributable to excess weight, smoking and physical inactivity in Canada. This study shows that using current resource use data is important for this type of economic modelling.
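The population-attributable-fraction machinery underlying such models is compact: for one risk factor in one age-sex stratum, PAF = p(RR - 1) / (1 + p(RR - 1)), and the attributable burden is the PAF times the total cost. A minimal sketch with illustrative numbers, not the paper's estimates:

    def paf(prevalence, relative_risk):
        # Population attributable fraction for one risk factor in one
        # age-sex stratum: p(RR - 1) / (1 + p(RR - 1)).
        excess = prevalence * (relative_risk - 1.0)
        return excess / (1.0 + excess)

    # Illustrative inputs only:
    p_smoking, rr = 0.16, 2.1
    burden_total = 60e9                       # annual disease cost, $
    print(paf(p_smoking, rr) * burden_total)  # attributable burden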
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting SRP scaling parameter or fixing it on pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in comparable accuracy as the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for x-pole and 12% for y-pole. The experiments show that adjusting atmospheric drag scaling parameters each 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was however not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series as well as its mitigation by fixing the SRP parameters on pre-defined values.
Nilsen, Erlend B; Strand, Olav
2018-01-01
We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend their applicability by i) using time series data that allow the full temporal dynamics to be modelled, ii) casting the model in an explicit hierarchical modelling framework, and iii) estimating parameters based on Bayesian inference. Based on sensitivity analyses, we conclude that the approach developed here is able to obtain estimates of demographic rates with high precision whenever unbiased data on population structure are available. Our simulations revealed that this was true also when data on population abundance are not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to different observability of different age and sex categories, this will affect estimates of all demographic rates. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be estimated relatively precisely even with biased observation data as long as the bias is not severe. We then use the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations where age-sex data were available for all harvested animals, where population structure surveys were carried out in early summer (after calving) and late fall (after the hunting season), and where population size is counted in winter. We found that demographic rates were similar regardless of whether we included population count data in the modelling, but that the estimated population size is affected by this decision. This suggests that monitoring programs that focus on population age and sex structure will benefit from collecting additional data that allow estimation of observability for different age and sex classes. In addition, our sensitivity analysis suggests that focusing monitoring on changes in demographic rates might be more feasible than monitoring abundance in many situations where data on population age and sex structure can be collected.
Models for Predicting the Biomass of Cunninghamia lanceolata Trees and Stands in Southeastern China
Guangyi, Mei; Yujun, Sun; Saeed, Sajjad
2017-01-01
Using existing equations to estimate the biomass of a single tree or a forest stand still involves large uncertainties. In this study, we developed individual-tree biomass models for Chinese Fir (Cunninghamia lanceolata) stands in Fujian Province, southeast China, using 74 previously established models that are most commonly used to estimate tree biomass. We selected the best-fit models and modified them. The results showed that the published model ln(B) = a + b * ln(D) + c * (ln(H))^2 + d * (ln(H))^3 + e * ln(WD), where B is biomass, had the best fit for estimating the tree biomass of Chinese Fir stands. Furthermore, we observed that the variables D (diameter at breast height), H (height), and WD (wood density) were significantly correlated with total tree biomass. As a result, a natural logarithm structure gave the best estimates of tree biomass. Finally, when a multi-step improvement of the tree biomass model was performed, the model combining tree volume (TV), WD, and the biomass expansion and conversion factor (BECF) achieved the highest simulation accuracy, expressed as ln(TB) = -0.0703 + 0.9780 * ln(TV) + 0.0213 * ln(WD) + 1.0166 * ln(BECF). Therefore, when TV, WD, and BECF were combined into the tree biomass-volume coefficient bi for Chinese Fir, the stand biomass (SB) model included both the stand volume (SV) and the coefficient bi as follows: bi = exp(-0.0703 + 0.9780 * ln(TV) + 0.0213 * ln(WD) + 1.0166 * ln(BECF)). The stand biomass model is SB = SV/TV * bi. PMID:28095512
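For illustration, the best-performing published form above can be fitted by ordinary least squares on log-transformed data. The toy measurements below are assumptions for the sketch, not data from the study.

    import numpy as np

    D  = np.array([12.1, 18.4, 22.0, 27.5, 31.2, 35.0])   # DBH, cm
    H  = np.array([10.5, 14.2, 16.8, 19.0, 21.3, 22.5])   # height, m
    WD = np.array([0.31, 0.33, 0.34, 0.35, 0.36, 0.37])   # wood density, g/cm^3
    B  = np.array([38.0, 95.0, 160.0, 270.0, 380.0, 480.0])  # biomass, kg

    # Design matrix for ln(B) = a + b*ln(D) + c*(ln H)^2 + d*(ln H)^3 + e*ln(WD)
    X = np.column_stack([np.ones_like(D), np.log(D),
                         np.log(H) ** 2, np.log(H) ** 3, np.log(WD)])
    coef, *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
    print("fitted coefficients a..e:", np.round(coef, 3))
    print("predicted biomass, kg:", np.round(np.exp(X @ coef), 1))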
A Priori Estimation of Organic Reaction Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.
2015-07-21
A thermodynamically guided calculation of the free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with previously reported yields. The trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15%. This ability has the potential to translate into significant economic savings through the selection, and then execution, of only those reactions that can proceed in good yields.
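The thermodynamic core of the approach can be sketched in a few lines: an equilibrium constant from the reaction free energy, corrected by activity coefficients standing in for PC-SAFT output. All numbers below are illustrative placeholders, not values from the study.

    import math

    R, T = 8.314, 298.15                         # J/(mol K), K
    dG = -5.0e3                                  # assumed Delta G of reaction, J/mol
    gamma_substrate, gamma_product = 1.2, 0.9    # placeholder activity coefficients

    # Effective equilibrium constant for A <-> B, with non-ideality correction.
    K = math.exp(-dG / (R * T)) * gamma_substrate / gamma_product
    yield_fraction = K / (1.0 + K)               # equilibrium conversion of A to B
    print(f"estimated equilibrium yield: {100 * yield_fraction:.1f} %")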
NASA Technical Reports Server (NTRS)
Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.
1985-01-01
Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here, observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation-angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray-trace results by less than approximately 5 mm at all elevations down to 5 deg, and introduces errors into the estimates of baseline length of less than about 1 cm for the multistation intercontinental experiment analyzed here.
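Mapping functions of this kind are typically continued fractions in the elevation angle. The sketch below uses the classic Chao-style form with placeholder coefficients; it is not the fitted function from this study.

    import math

    def mapping_function(elev_deg, a=0.00143, b=0.0445):
        # m(e) = 1 / (sin e + a / (tan e + b)); coefficients are illustrative
        e = math.radians(elev_deg)
        return 1.0 / (math.sin(e) + a / (math.tan(e) + b))

    zenith_delay_m = 2.3   # assumed total zenith atmospheric delay, metres
    for elev in (90, 30, 10, 5):
        print(f"{elev:2d} deg: slant delay ~ {zenith_delay_m * mapping_function(elev):.2f} m")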
Mitrikas, V G
2015-01-01
Monitoring of the radiation loading on cosmonauts requires calculation of absorbed dose dynamics with regard to the stay of cosmonauts in specific compartments of the space vehicle that differ in shielding properties and lack means of radiation measurement. The paper discusses different aspects of computational modeling of radiation effects on human body organs and tissues and reviews the effective dose estimates for cosmonauts working in one or another compartment over the previous period of International Space Station operation. It was demonstrated that doses measured by area or personal dosimeters can be used to calculate effective dose values. Correct estimation of the accumulated effective dose can be ensured by taking into account the time course of the space radiation quality factor.
Lewis, Joanna; Price, Malcolm J; Horner, Paddy J; White, Peter J
2017-07-15
Rigorous estimates for clearance rates of untreated chlamydia infections are important for understanding chlamydia epidemiology and designing control interventions, but were previously only available for women. We used data from published studies of chlamydia-infected men who were retested at a later date without having received treatment. Our analysis allowed new infections to take one of 1, 2, or 3 courses, each clearing at a different rate. We determined which of these 3 models had the most empirical support. The best-fitting model had 2 courses of infection in men, as was previously found for women: "slow-clearing" and "fast-clearing." Only 68% (57%-78%) (posterior median and 95% credible interval [CrI]) of incident infections in men were slow-clearing, vs 77% (69%-84%) in women. The slow clearance rate in men (based on 6 months' follow-up) was 0.35 (0.05-1.15) per year (posterior median and 95% CrI), corresponding to a mean infection duration of 2.84 (0.87-18.79) years. This compares to 1.35 (1.13-1.63) years in women. Our estimated clearance rate is slower than previously assumed. Fewer infections become established in men than in women, but once established, they clear more slowly. This study provides an improved description of chlamydia's natural history to inform public health decision making. We describe how further data collection could reduce uncertainty in estimates.
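The fitted two-course model implies a simple survival curve for untreated infection; the sketch below evaluates it with the posterior medians quoted above, with the fast clearance rate an assumed illustrative value.

    import math

    p_slow = 0.68     # fraction of incident infections that are slow-clearing
    r_slow = 0.35     # per year (posterior median quoted above)
    r_fast = 12.0     # per year; illustrative assumption for fast clearers

    def still_infected(t_years):
        # Mixture of two exponential clearance courses
        return (p_slow * math.exp(-r_slow * t_years)
                + (1 - p_slow) * math.exp(-r_fast * t_years))

    for t in (0.25, 0.5, 1.0, 2.0):
        print(f"t = {t:4.2f} y: P(still infected) = {still_infected(t):.2f}")
    print(f"mean duration of slow infections: {1 / r_slow:.2f} years")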
Random network model of electrical conduction in two-phase rock
NASA Astrophysics Data System (ADS)
Fuji-ta, Kiyoshi; Seki, Masayuki; Ichiki, Masahiro
2018-05-01
We developed a cell-type lattice model to clarify the interconnected conductivity mechanism of two-phase rock. We quantified electrical conduction networks in rock and evaluated electrical conductivity models of the two-phase interaction. Considering the existence ratio of conductive and resistive cells in the model, we generated natural matrix cells simulating a natural mineral distribution pattern, using Mersenne Twister random numbers. The most important and prominent feature of the model simulation is a drastic increase in the pseudo-conductivity index for conductor ratios R > 0.22. This index increased from 10^-4 to 10^0 between R = 0.22 and R = 0.9, a change of four orders of magnitude. We compared our model responses with results from previous model studies. Although the pseudo-conductivity computed by the model differs slightly from that of the previous model, the model responses can account for the conductivity change. Our modeling is thus effective for quantitatively estimating the degree of interconnection of rocks and minerals.
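The spanning-cluster behavior behind the pseudo-conductivity jump can be illustrated with a minimal site-percolation analogue. The 2D grid below is a simplification (the study's lattice and connectivity rules differ, which is why its transition sits near R = 0.22 rather than at the 2D site threshold).

    import numpy as np
    from scipy import ndimage

    def spans(R, n=64, seed=0):
        rng = np.random.default_rng(seed)
        grid = rng.random((n, n)) < R          # True = conductive cell
        labels, _ = ndimage.label(grid)        # 4-connected clusters
        top = set(labels[0]) - {0}
        bottom = set(labels[-1]) - {0}
        return bool(top & bottom)              # one cluster touches both edges

    for R in (0.1, 0.3, 0.5, 0.6, 0.9):
        frac = np.mean([spans(R, seed=s) for s in range(20)])
        print(f"R = {R:.2f}: spanning fraction over 20 grids = {frac:.2f}")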
Estimating the production, consumption and export of cannabis: The Dutch case.
van der Giessen, Mark; van Ooyen-Houben, Marianne M J; Moolenaar, Debora E G
2016-05-01
Quantifying an illegal phenomenon like a drug market is inherently complex due to its hidden nature and the limited availability of reliable information. This article presents findings from a recent estimate of the production, consumption and export of Dutch cannabis and discusses the opportunities provided by, and limitations of, mathematical models for estimating the illegal cannabis market. The data collection consisted of a comprehensive literature study, secondary analyses on data from available registrations (2012-2014) and previous studies, and expert opinion. The cannabis market was quantified with several mathematical models. The data analysis included a Monte Carlo simulation to arrive at a 95% interval estimate (IE) and a sensitivity analysis to identify the most influential indicators. The annual production of Dutch cannabis was estimated to be between 171 and 965 tons (95% IE of 271-613 tons). The consumption was estimated to be between 28 and 119 tons, depending on the inclusion or exclusion of non-residents (95% IE of 51-78 tons or 32-49 tons, respectively). The export was estimated to be between 53 and 937 tons (95% IE of 206-549 tons or 231-573 tons, respectively). Mathematical models are valuable tools for the systematic assessment of the size of illegal markets and for determining the uncertainty inherent in the estimates. The estimates required the use of many assumptions, and the availability of reliable indicators was limited. This uncertainty is reflected in the wide ranges of the estimates. The estimates are sensitive to 10 of the 45 indicators; these 10 account for 86-93% of the variation found. Further research should focus on improving the variables and the independence of the mathematical models.
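The Monte Carlo step can be sketched by propagating uncertain indicators to an interval estimate. For brevity the snippet below treats the quoted outer ranges as independent uniform distributions, whereas the study modeled many indicators individually.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    production  = rng.uniform(171, 965, n)    # tons/year, range quoted above
    consumption = rng.uniform(28, 119, n)     # tons/year

    export = production - consumption         # simple mass-balance identity
    lo, hi = np.percentile(export, [2.5, 97.5])
    print(f"export 95% interval estimate: {lo:.0f}-{hi:.0f} tons")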
Revised history of the great Kanto earthquakes from the ages of Holocene marine terraces
NASA Astrophysics Data System (ADS)
Komori, J.; Shishikura, M.; Ando, R.; Yokoyama, Y.; Miyairi, Y.
2017-12-01
The Sagami Trough, where the Philippine Sea Plate subducts beneath the continental plate of the Japanese Archipelago, is considered a typical example of repeating characteristic earthquakes. Previous studies over recent decades claimed that approximately M8 interplate earthquakes, called Genroku-type earthquakes, recur at intervals of 2,000-2,700 years. In this study, we measured the emergence ages of the marine terraces in the Chikura lowland, which lies to the southeast of the Boso Peninsula, to reevaluate the history of great earthquakes over the past 10,000 years. The dates of the marine terraces were measured via radiocarbon dating of shell fossils obtained from the marine deposits. The sampling method employed in this study collects core samples using a dense and systematic drilling survey, which increased the reliability when correlating shell fossils with marine terraces. Moreover, we explored the surface profiles of the terraces with detailed digital elevation model (DEM) data obtained using LiDAR. The maximum emergence ages of the marine terraces, from the top terrace down and excepting the lowest terrace (which was attributed to the AD 1703 event), were dated at 6300 cal yBP, 3000 cal yBP, and 2200 cal yBP. In addition, another previously unrecognized terrace was detected between the highest and the second terrace in both the dating and the geomorphological analyses and was dated at 5800 cal yBP. The newly obtained ages are nearly a thousand years younger than previously estimated; consequently, the intervals of the great earthquakes occurring along the Sagami Trough are estimated to be much shorter and more varied than in previous estimations. This result indicates that the temporal pattern of the large earthquakes occurring along the Sagami Trough does not follow a characteristic pattern as previously considered.
NASA Astrophysics Data System (ADS)
Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik
2017-01-01
Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated the (U-Th)/He systematics of moderately shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius was estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, a diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to the input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For the intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values slightly higher than, or consistent with, estimates from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to the input parameters.
NASA Astrophysics Data System (ADS)
Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen
2016-08-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate whether adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges, and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well-defined, 64 ha urban catchment, for nine overflow-generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case they perform much better than statically adjusted radar data and data from rain gauges situated 2-3 km away.
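A minimal sketch of the dynamic adjustment idea, assuming a simple mean-field bias correction computed from the previous aggregation window; the arrays and window handling are illustrative simplifications, not the study's scheme.

    import numpy as np

    def adjust(radar_field, gauge_obs, radar_at_gauges, eps=0.1):
        """Scale the radar field by the gauge/radar ratio over the last window.

        gauge_obs, radar_at_gauges : rainfall sums at gauge locations (mm)
        """
        valid = radar_at_gauges > eps            # avoid dividing by ~zero
        if not np.any(valid):
            return radar_field                   # no information: leave as-is
        bias = gauge_obs[valid].sum() / radar_at_gauges[valid].sum()
        return radar_field * bias

    radar = np.array([[1.2, 0.8], [0.0, 2.1]])   # mm over the window
    print(adjust(radar, np.array([1.5, 2.5]), np.array([1.0, 2.0])))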
Ozone Production and Control Strategies for Southern Taiwan
NASA Astrophysics Data System (ADS)
Shiu, C.; Liu, S.; Chang, C.; Chen, J.; Chou, C. C.; Lin, C.
2006-12-01
An observation-based modeling (OBM) approach is used to estimate the ozone production efficiency and the production rate of O3 (P(O3)) in southern Taiwan. The approach can also provide an indirect estimate of the concentration of OH. Measured concentrations of two aromatic hydrocarbons, ethylbenzene and m,p-xylene, are used to estimate the degree of photochemical processing and the amounts of photochemically consumed NOx and NMHCs. In addition, a one-dimensional (1-d) photochemical model is used for comparison with the OBM results. The average ozone production efficiency during the field campaign in the Kaohsiung-Pingtung area in fall 2003 is found to be about 5, comparable to previous work. The relationship of P(O3) with NOx is examined in detail and compared to previous studies. The OH concentrations derived from this approach are in fair agreement with values calculated from the 1-d photochemical model. The relationship of total oxidants (e.g., O3+NO2) versus initial NOx and NMHCs suggests that reducing NMHCs is more effective in controlling total oxidants than reducing NOx. For O3 control, reducing NMHCs is even more effective than reducing NOx, due to the NO titration effect. This observation-based approach provides a good alternative for understanding the production of ozone and formulating ozone control strategies in urban and suburban environments without measurements of peroxy radicals.
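The central observation-based quantity can be illustrated as a regression slope: ozone production efficiency as total oxidant produced per unit of NOx consumed. The numbers below are illustrative, chosen only to reproduce a slope near the campaign value of about 5.

    import numpy as np

    nox_consumed = np.array([2.0, 4.0, 6.0, 8.0, 10.0])    # ppb, from the HC-ratio clock
    ox           = np.array([41.0, 52.0, 60.0, 72.0, 80.0])  # total oxidant O3 + NO2, ppb

    ope = np.polyfit(nox_consumed, ox, 1)[0]   # slope = ozone production efficiency
    print(f"ozone production efficiency ~ {ope:.1f} (mol O3 per mol NOx)")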
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation, which provides a coherent framework for utilizing not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution, which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there remains a reasonably wide gap between the background of empirically trained scientists and the full machinery of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge that gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
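The prior-to-posterior update described above, in its simplest conjugate form: a Beta prior for a success probability combined with binomial data. The numbers are illustrative.

    from scipy import stats

    a_prior, b_prior = 2, 8          # prior belief: success probability likely small
    successes, failures = 7, 13      # current data

    # Conjugate update: posterior is Beta(a + successes, b + failures)
    posterior = stats.beta(a_prior + successes, b_prior + failures)
    print(f"posterior mean = {posterior.mean():.3f}")
    print(f"95% credible interval = {posterior.interval(0.95)}")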
Krogh, Magnus Reinsfelt; Nghiem, Giang M; Halvorsen, Per Steinar; Elle, Ole Jakob; Grymyr, Ole-Johannes; Hoff, Lars; Remme, Espen W
2017-05-01
A miniaturized accelerometer fixed to the heart can be used for monitoring of cardiac function. However, an accelerometer cannot differentiate between acceleration caused by motion and acceleration due to gravity. The accuracy of motion measurements is therefore dependent on how well the gravity component can be estimated and filtered from the measured signal. In this study we propose a new method for estimating the gravity component, based on strapdown inertial navigation, using a combined accelerometer and gyro. The gyro was used to estimate the orientation of the gravity field and thereby remove it. We compared this method with two previously proposed gravity filtering methods in three experimental models using: (1) in silico computer-simulated heart motion; (2) robot-mimicked heart motion; and (3) in vivo measured motion on the heart in an animal model. The new method correlated excellently with the reference (r^2 > 0.93) and had a deviation from the reference peak systolic displacement (6.3 ± 3.9 mm) of below 0.2 ± 0.5 mm in the robot experiment. The new method performed significantly better than the two previously proposed methods (p < 0.001). The results show that the proposed method using a gyro can measure cardiac motion with high accuracy and performs better than existing methods for filtering the gravity component from the accelerometer signal.
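Conceptually, the gyro-aided method propagates the sensor orientation by integrating angular rate, rotates the world gravity vector into the sensor frame, and subtracts it. The sketch below follows that recipe under assumed sign conventions; it is not the authors' implementation.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def motion_acceleration(acc, gyro, dt, R0=Rotation.identity()):
        """acc, gyro: (N, 3) accelerometer (m/s^2) and gyro (rad/s) samples."""
        g_world = np.array([0.0, 0.0, 9.81])   # assumes the device reads +1 g up at rest
        R = R0                                  # sensor-to-world orientation
        motion = []
        for a, w in zip(acc, gyro):
            R = R * Rotation.from_rotvec(w * dt)   # strapdown orientation update
            g_sensor = R.inv().apply(g_world)      # gravity expressed in the sensor frame
            motion.append(a - g_sensor)            # remove gravity, keep motion part
        return np.array(motion)

    # At rest with identity orientation, the output is ~0 by construction.
    print(motion_acceleration(np.tile([0.0, 0.0, 9.81], (5, 1)),
                              np.zeros((5, 3)), dt=0.01).round(6))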
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in the process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from the inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The backpropagation used in training the weights is limited to a certain memory length (in the examples we consider, 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
Cross-seasonal effects on waterfowl productivity: Implications under climate change
Osnas, Erik; Zhao, Qing; Runge, Michael C.; Boomer, G Scott
2016-01-01
Previous efforts to relate winter-ground precipitation to subsequent reproductive success as measured by the ratio of juveniles to adults in the autumn failed to account for increased vulnerability of juvenile ducks to hunting and uncertainty in the estimated age ratio. Neglecting increased juvenile vulnerability will positively bias the mean productivity estimate, and neglecting increased vulnerability and estimation uncertainty will positively bias the year-to-year variance in productivity because raw age ratios are the product of sampling variation, the year-specific vulnerability, and year-specific reproductive success. Therefore, we estimated the effects of cumulative winter precipitation in the California Central Valley and the Mississippi Alluvial Valley on pintail (Anas acuta) and mallard (Anas platyrhynchos) reproduction, respectively, using hierarchical Bayesian methods to correct for sampling bias in productivity estimates and observation error in covariates. We applied the model to a hunter-collected parts survey implemented by the United States Fish and Wildlife Service and band recoveries reported to the United States Geological Survey Bird Banding Laboratory using data from 1961 to 2013. We compared our results to previous estimates that used simple linear regression on uncorrected age ratios from a smaller subset of years in pintail (1961-1985). Like previous analyses, we found large and consistent effects of population size and wetland conditions in prairie Canada on mallard productivity, and large effects of population size and mean latitude of the observed breeding population on pintail productivity. Unlike previous analyses, we report a large amount of uncertainty in the estimated effects of wintering-ground precipitation on pintail and mallard productivity, with considerable uncertainty in the sign of the estimated main effect, although the posterior medians of precipitation effects were consistent with past studies. We found more consistent estimates in the sign of an interaction effect between population size and precipitation, suggesting that wintering-ground precipitation has a larger effect in years of high population size, especially for pintail. When we used the estimated effects in a population model to derive a sustainable harvest and population size projection (i.e., a yield curve), there was considerable uncertainty in the effect of increased or decreased wintering-ground precipitation on sustainable harvest potential and population size. These results suggest that the mechanism of cross-seasonal effects between winter habitat and reproduction in ducks occurs through a reduction in the strength of density dependence in years of above-average wintering-ground precipitation. We suggest additional investigation of the underlying mechanisms and that habitat managers and decision-makers consider the level of uncertainty in these estimates when attempting to integrate habitat management and harvest management decisions. Collection of annual data on the status of wintering-ground habitat in a rigorous sampling framework would likely be the most direct way to improve understanding of mechanisms and inform management.
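The vulnerability correction at the heart of this analysis is simple in miniature: divide the harvest age ratio by the juvenile:adult vulnerability ratio estimated from band recoveries. The numbers below are illustrative, not estimates from the study.

    juv_per_adult_harvest = 1.4   # juveniles per adult in hunter-collected parts
    vulnerability_ratio = 1.8     # juvenile:adult recovery-rate ratio from bands

    # Raw harvest ratios overstate productivity because juveniles are shot
    # at a higher rate; dividing by the vulnerability ratio removes that bias.
    fall_age_ratio = juv_per_adult_harvest / vulnerability_ratio
    print(f"corrected fall age ratio: {fall_age_ratio:.2f} juveniles per adult")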
TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases
1990-09-01
IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of... conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1)... various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version
Fluorine Abundances in AGB Carbon Stars: New Results?
NASA Astrophysics Data System (ADS)
Abia, C.; de Laverny, P.; Recio-Blanco, A.; Domínguez, I.; Cristallo, S.; Straniero, O.
2009-09-01
A recent reanalysis of the fluorine abundance in three Galactic Asymptotic Giant Branch (AGB) carbon stars (TX Psc, AQ Sgr and R Scl) by Abia et al. (2009) results in estimates of fluorine abundances systematically lower by ~0.8 dex on average, with respect to the sole previous estimates by Jorissen, Smith & Lambert (1992). The new F abundances are in better agreement with the predictions of full-network stellar models of low-mass (<3 Msolar) AGB stars.
McGrath, Timothy; Fineman, Richard; Stirling, Leia
2018-06-08
Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles, an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15), with an absolute root-mean-square error (RMSE) of 9.24° and a zero-mean RMSE of 3.49°. Variation in error across subjects was found, made apparent by the larger subject population than previous literature considers. Finally, the paper presents an explanatory model of RMSE as a function of IMU mounting location. The observational data suggest that the RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size for these parameters is small in comparison to potential gains from improved IMU orientation estimation. Results also highlight the need to set relevant datums from which to interpret joint angles, for both truth references and estimated data.
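A simplified sketch of the PCA step, assuming roughly aligned sensor frames for brevity (the paper's method avoids this assumption): the joint axis is taken as the first principal component of the relative angular velocity, and the angle follows by integrating the projected rate.

    import numpy as np

    def knee_angle(gyro_thigh, gyro_shank, dt):
        """gyro_*: (N, 3) angular velocity of each segment, rad/s."""
        rel = gyro_shank - gyro_thigh              # relative angular velocity
        rel_c = rel - rel.mean(axis=0)
        _, _, vt = np.linalg.svd(rel_c, full_matrices=False)
        axis = vt[0]                               # dominant rotation axis (sign arbitrary)
        rate = rel @ axis                          # angular rate about the joint axis
        return np.degrees(np.cumsum(rate) * dt)    # integrate to a flexion/extension angle

    # Synthetic check: a 60-degree sinusoidal flexion about one axis is recovered.
    dt = 0.01
    t = np.arange(0, 2, dt)
    thigh = np.zeros((t.size, 3))
    shank = np.column_stack([0.05 * np.random.default_rng(0).normal(size=t.size),
                             np.zeros(t.size),
                             np.radians(60) * np.pi * np.cos(np.pi * t)])
    print(knee_angle(thigh, shank, dt)[::50].round(1))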
Feng, Yuan; Lee, Chung-Hao; Sun, Lining; Ji, Songbai; Zhao, Xuefeng
2017-01-01
Characterizing the mechanical properties of white matter is important to understand and model brain development and injury. With embedded aligned axonal fibers, white matter is typically modeled as a transversely isotropic material. However, most studies characterize the white matter tissue using models with a single anisotropic invariant or in a small-strain regime. In this study, we combined a single experimental procedure - asymmetric indentation - with inverse finite element (FE) modeling to estimate the nearly incompressible transversely isotropic material parameters of white matter. A minimal form comprising three parameters was employed to simulate indentation responses in the large-strain regime. The parameters were estimated using a global optimization procedure based on a genetic algorithm (GA). Experimental data from two indentation configurations of porcine white matter, parallel and perpendicular to the axonal fiber direction, were utilized to estimate the model parameters. Results in this study confirmed a strong mechanical anisotropy of white matter at large strain. Further, our results suggested that both indentation configurations are needed to estimate the parameters with sufficient accuracy, and that the indenter-sample friction is important. Finally, we also showed that the estimated parameters were consistent with those previously obtained via a trial-and-error forward FE method in the small-strain regime. These findings are useful in the modeling and parameterization of white matter, especially under large deformation, and demonstrate the potential of the proposed asymmetric indentation technique to characterize other soft biological tissues with transversely isotropic properties.
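The shape of the inverse identification loop can be sketched with a stubbed forward model and a genetic-style global optimizer (here scipy's differential evolution standing in for the paper's GA). Every name and number below is an illustrative assumption, not the study's FE model.

    import numpy as np
    from scipy.optimize import differential_evolution

    def simulate_indentation(params, configuration):
        """Placeholder for the FE forward model (parallel/perpendicular)."""
        mu, k1, k2 = params
        depth = np.linspace(0, 1, 20)
        aniso = 1.0 if configuration == "parallel" else 0.6
        return mu * depth + aniso * k1 * depth ** 2 * np.exp(k2 * depth)

    # Synthetic "measurements" generated from known true parameters.
    measured = {c: simulate_indentation((0.5, 1.2, 0.8), c)
                for c in ("parallel", "perpendicular")}

    def objective(params):
        # Misfit summed over both indentation configurations
        return sum(np.sum((simulate_indentation(params, c) - m) ** 2)
                   for c, m in measured.items())

    result = differential_evolution(objective,
                                    bounds=[(0.1, 2), (0.1, 5), (0.1, 3)],
                                    seed=1, tol=1e-8)
    print("recovered parameters:", np.round(result.x, 3))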
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both the marginal probability distributions and the copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from the copula functions and from the marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
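The core probability calculation in miniature: the joint exceedance of a flood pair (e.g., peak and volume) under a Gumbel copula, with a time-varying copula parameter standing in for the fitted non-stationarity. All values are illustrative.

    import math

    def gumbel_copula(u, v, theta):
        # C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1
        s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
        return math.exp(-s ** (1.0 / theta))

    def joint_exceedance(u, v, theta):
        # P(U > u, V > v) = 1 - u - v + C(u, v)
        return 1.0 - u - v + gumbel_copula(u, v, theta)

    u = v = 0.99   # marginal non-exceedance probabilities of the design pair
    for year, theta in ((1960, 1.5), (2015, 2.5)):   # strengthening dependence
        print(year, f"joint exceedance: {joint_exceedance(u, v, theta):.5f}")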
Carbon Fluxes at the AmazonFACE Research Site
NASA Astrophysics Data System (ADS)
Norby, R.; De Araujo, A. C.; Cordeiro, A. L.; Fleischer, K.; Fuchslueger, L.; Garcia, S.; Hofhansl, F.; Garcia, M. N.; Grandis, A.; Oblitas, E.; Pereira, I.; Pieres, N. M.; Schaap, K.; Valverde-Barrantes, O.
2017-12-01
The free-air CO2 enrichment (FACE) experiment to be implemented in the Amazon rain forest requires strong pretreatment characterization so that eventual responses to elevated CO2 can be detected against a background of substantial species diversity and spatial heterogeneity. Two 30-m diameter plots have been laid out for initial characterization in a 30-m tall, old-growth, terra firme forest. Intensive measurements have been made of aboveground tree growth, leaf area, litter production, and fine-root production; these data sets together support initial estimates of plot-scale net primary productivity (NPP). Leaf-level measurements of photosynthesis throughout the canopy and over a daily time course in both the wet and dry season, coupled with meteorological monitoring, support an initial estimate of gross primary productivity (GPP) and carbon-use efficiency (CUE = NPP/GPP). Monthly monitoring of CO2 efflux from the soil, partitioned into autotrophic and heterotrophic components, supports an estimate of net ecosystem production (NEP). Our estimate of NPP in the two plots (1.2 and 1.4 kg C m-2 yr-1) is 16-38% greater than previously reported for the site, primarily due to our more complete documentation of fine-root production, including root production deeper than 30 cm. The estimate of CUE of the ecosystem (0.52) is greater than most others in Amazonia; this discrepancy reflects either the large uncertainty in GPP, which was derived from just two days of measurement, or underestimates of the fine-root component of NPP in previous studies. Estimates of NEP (0 and 0.14 kg C m-2 yr-1) are generally consistent with a landscape-level estimate from flux tower data. Our C flux estimates, albeit very preliminary, provide initial benchmarks for a 12-model a priori evaluation of this forest. The model means of GPP, NPP, and NEP are mostly consistent with our field measurements. Predictions of C flux responses to elevated CO2 from the models become hypotheses to be tested in the FACE experiment. Although carbon fluxes on small plots cannot be expected to represent the fluxes across the wider and more diverse region, our integrated measurements, coupled with a model framework, provide a strong foundation for understanding the mechanistic basis of responses and for extending results of experimental CO2 fertilization to the wider region.
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo
2009-01-01
In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained," using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.
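The Bayesian retrieval step has a compact skeleton: a database of cloud-resolving-model profiles is averaged with weights given by the Gaussian likelihood of the observed brightness temperatures. The dimensions and toy database below are illustrative, not the algorithm's actual inputs.

    import numpy as np

    def bayesian_retrieval(tb_obs, tb_db, q1_db, cov):
        """tb_obs: (k,) observed Tb; tb_db: (n, k) database Tb; q1_db: (n, levels)."""
        cinv = np.linalg.inv(cov)
        d = tb_db - tb_obs
        logw = -0.5 * np.einsum("ij,jk,ik->i", d, cinv, d)   # Gaussian log-likelihoods
        w = np.exp(logw - logw.max())                        # stabilized weights
        return (w[:, None] * q1_db).sum(0) / w.sum()         # weighted mean profile

    rng = np.random.default_rng(0)
    tb_db = rng.normal(250, 10, (500, 5))    # database brightness temperatures, K
    q1_db = rng.normal(2, 1, (500, 20))      # matching heating profiles, K/day
    est = bayesian_retrieval(tb_db[0], tb_db, q1_db, np.eye(5) * 4.0)
    print(est.shape, est[:3].round(2))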
Hepatic biotransformation is an important determinant of chemical bioaccumulation in fish. Consequently, improvements to bioaccumulation models can be made using estimates of chemical biotransformation rates. Cryopreserved trout hepatocytes have previously been used to measure ...
NASA Technical Reports Server (NTRS)
Garvin, J. B.; Frawley, J. J.; Sakimoto, S. E. H.; Schnetzler, C.
2000-01-01
Global geometric characteristics of topographically fresh impact craters have been assessed, for the first time, from gridded MOLA topography. Global trends of properties such as depth/diameter differ from previous estimates. Regional differences are observed.
DOT National Transportation Integrated Search
1996-03-01
A heat transfer model, previously developed to estimate wheel rim temperatures during tread braking of MU power cars and validated by comparison with operational test results, is extended and applied to cases involving several different blended brak...
Despite substantial attention toward environmental tobacco smoke (ETS) exposure, previous studies have not provided adequate information to apply broadly within community-scale risk assessments. We aim to estimate residential concentrations of particulate matter (PM) from ETS in ...
GTAP model and database modification to better handle cropping changes on the intensive margin.
DOT National Transportation Integrated Search
2017-07-25
Previously, induced land use change estimates considered only the extensive margin; that is, adding to harvested area by converting forest or pasture to cropland. However, recent data suggests that some of the increase in global harvested area comes ...
Social Interactions under Incomplete Information: Games, Equilibria, and Expectations
NASA Astrophysics Data System (ADS)
Yang, Chao
My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information not only in the idiosyncratic shocks but also in some exogenous covariates. For example, when analyzing peer effects in class performance, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash Equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which are unique when the parameters are within a reasonable range, according to the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both the exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and nested fixed-point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the mis-specified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects. The second chapter, "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation when agents' behaviors are subject to some binding restrictions. In an interesting empirical analysis of property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among nearby municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work on tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions, such as the assumption that agents with the same observable characteristics play the same strategy.
This paper relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has been an open question for a long time. By the use of differential topology and functional analysis, it is found that when all exogenous characteristics are public information, there are a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and be approximated by a finite set. As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. From Monte Carlo experiments on two types of binary choice models, it is found that assuming equilibrium uniqueness can introduce estimation biases when the true value of the interaction intensity is large and there are multiple equilibria in the data generating process.
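A toy version of the fixed-point computation behind the equilibrium conditional expectations: in a linear-in-means game where each covariate x_i is private, agents best-respond to the expected actions of their neighbors, and the contraction mapping is iterated to convergence. The network and parameters below are illustrative (the contraction holds here because |beta| < 1).

    import numpy as np

    alpha, beta, gamma = 1.0, 0.4, 0.8
    W = np.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]])  # row-normalized network
    x_mean = np.array([0.2, 0.0, -0.1])                    # public beliefs about x

    m = np.zeros(3)                          # equilibrium expected actions E[y]
    for _ in range(200):
        m_new = alpha + beta * W @ m + gamma * x_mean
        if np.max(np.abs(m_new - m)) < 1e-12:
            break
        m = m_new

    x_realized = np.array([0.5, -0.3, 0.1])                # private draws
    y = alpha + beta * W @ m + gamma * x_realized          # equilibrium actions
    print("E[y]:", m.round(4), " realized y:", y.round(4))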
Larson, David B; Chen, Matthew C; Lungren, Matthew P; Halabi, Safwan S; Stence, Nicholas V; Langlotz, Curtis P
2018-04-01
Purpose To compare the performance of a deep-learning bone age assessment model based on hand radiographs with that of expert radiologists and that of existing automated models. Materials and Methods The institutional review board approved the study. A total of 14 036 clinical hand radiographs and corresponding reports were obtained from two children's hospitals to train and validate the model. For the first test set, composed of 200 examinations, the mean of bone age estimates from the clinical report and three additional human reviewers was used as the reference standard. Overall model performance was assessed by comparing the root mean square (RMS) and mean absolute difference (MAD) between the model estimates and the reference standard bone ages. Ninety-five percent limits of agreement were calculated in a pairwise fashion for all reviewers and the model. The RMS of a second test set composed of 913 examinations from the publicly available Digital Hand Atlas was compared with published reports of an existing automated model. Results The mean difference between bone age estimates of the model and of the reviewers was 0 years, with a mean RMS and MAD of 0.63 and 0.50 years, respectively. The estimates of the model, the clinical report, and the three reviewers were within the 95% limits of agreement. The RMS for the Digital Hand Atlas data set was 0.73 years, compared with 0.61 years for a previously reported model. Conclusion A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models.
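The agreement statistics used above, in miniature: RMS, mean absolute difference, and Bland-Altman-style 95% limits of agreement between model and reference bone ages. The arrays are illustrative.

    import numpy as np

    model = np.array([10.2, 7.9, 13.1, 5.5, 9.0])   # model bone ages, years
    ref   = np.array([10.0, 8.4, 12.6, 5.9, 8.8])   # reference standard, years

    diff = model - ref
    rms = np.sqrt(np.mean(diff ** 2))
    mad = np.mean(np.abs(diff))
    loa = diff.mean() + np.array([-1.96, 1.96]) * diff.std(ddof=1)
    print(f"RMS = {rms:.2f} y, MAD = {mad:.2f} y, "
          f"95% LoA = [{loa[0]:.2f}, {loa[1]:.2f}] y")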
Estimating aboveground live understory vegetation carbon in the United States
NASA Astrophysics Data System (ADS)
Johnson, Kristofer D.; Domke, Grant M.; Russell, Matthew B.; Walters, Brian; Hom, John; Peduzzi, Alicia; Birdsey, Richard; Dolan, Katelyn; Huang, Wenli
2017-12-01
Despite the key role that understory vegetation plays in ecosystems and the terrestrial carbon cycle, it is often overlooked and has few quantitative measurements, especially at national scales. To understand the contribution of understory carbon to the United States (US) carbon budget, we developed an approach that relies on field measurements of understory vegetation cover and height on US Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) subplots. Allometric models were developed to estimate aboveground understory carbon. A spatial model based on stand characteristics and remotely sensed data was also applied to estimate understory carbon on all FIA plots. We found that most understory carbon was composed of woody shrub species (64%), followed by nonwoody forb and graminoid species (35%) and seedlings (1%). The largest estimates were found in temperate or warm humid locations such as the Pacific Northwest and southeastern US, thus following the same broad trend as aboveground tree biomass. The average understory aboveground carbon density was estimated to be 0.977 Mg ha^-1, for a total estimate of 272 Tg carbon across all managed forest land in the US (approximately 2% of the total aboveground live tree carbon pool). This estimate is less than half of previous FIA modeled estimates that did not rely on understory measurements, suggesting that this pool may currently be overestimated in US National Greenhouse Gas reporting.
Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks
Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan
2016-01-01
To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by the fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves a substantial positioning accuracy improvement over individual positioning approaches including PDR and WiFi positioning. PMID:27608019
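A reduced sketch of the fusion filter: a constant-position Kalman update in which the PDR step drives the prediction and a WiFi fix, with adaptively chosen measurement noise, corrects it. A full EKF adds heading states; all numbers below are illustrative.

    import numpy as np

    def fuse_step(pos, P, pdr_step, wifi_pos, wifi_var, q=0.04):
        pos_pred = pos + pdr_step           # predict with the PDR displacement
        P_pred = P + q                      # process noise from step/heading error
        K = P_pred / (P_pred + wifi_var)    # Kalman gain (scalar per axis)
        pos_new = pos_pred + K * (wifi_pos - pos_pred)
        return pos_new, (1 - K) * P_pred

    pos, P = np.zeros(2), 1.0
    for k in range(3):
        wifi = np.array([0.8 * (k + 1), 0.1])     # WiFi fix, metres
        wifi_var = 4.0 if k < 2 else 25.0         # KDE-style noise estimate (assumed)
        pos, P = fuse_step(pos, P, np.array([0.7, 0.0]), wifi, wifi_var)
        print(f"step {k}: pos = {pos.round(2)}, P = {P:.2f}")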
The impact of strain-specific immunity on Lyme disease incidence is spatially heterogeneous.
Khatchikian, Camilo E; Nadelman, Robert B; Nowakowski, John; Schwartz, Ira; Wormser, Gary P; Brisson, Dustin
2017-12-01
Lyme disease, caused by the bacterium Borrelia burgdorferi, is the most common tick-borne infection in the US. Recent studies have demonstrated that the incidence of human Lyme disease would have been even greater were it not for the presence of strain-specific immunity, which protects previously infected patients against subsequent infections by the same B. burgdorferi strain. Here, spatial heterogeneity is incorporated into epidemiological models to accurately estimate the impact of strain-specific immunity on human Lyme disease incidence. The estimated reduction in the number of Lyme disease cases is greater in epidemiologic models that explicitly include the spatial distribution of Lyme disease cases reported at the county level than in those that utilize nationwide data. Strain-specific immunity has the greatest epidemiologic impact in geographic areas with the highest Lyme disease incidence, due to the greater proportion of people that have been previously infected and have developed strain-specific immunity.
Estimating soil matric potential in Owens Valley, California
Sorenson, Stephen K.; Miller, Reuben F.; Welch, Michael R.; Groeneveld, David P.; Branson, Farrel A.
1989-01-01
Much of the floor of Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence depends partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on the physiological adaptations of the plants and on the relation between water content and matric potential in the soils. Two methods were used to estimate soil matric potential at test sites in Owens Valley. The first, the filter-paper method, uses the water content of filter papers equilibrated to the water content of soil samples taken with a hand auger. The previously published calibration relations used to estimate soil matric potential from the water content of the filter papers were modified on the basis of current laboratory data. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. The slope and intercept of this function vary with the texture and saturation capacity of the soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals, derived using the hand auger and filter-paper method, and entering these values in the soil water model. The characteristic curves were then used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the value derived by the filter-paper method could be obtained 90 to 95 percent of the time in soils where water content was less than field capacity.
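The modeling relationship above in code: log10 of matric potential as a linear function of gravimetric water content, fitted per soil and then inverted to estimate matric potential from new water-content readings. The sample data are illustrative.

    import numpy as np

    theta = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # gravimetric water content
    log_psi = np.array([4.2, 3.6, 3.0, 2.4, 1.8])      # log10 of matric potential

    slope, intercept = np.polyfit(theta, log_psi, 1)   # per-soil linear fit

    def matric_potential(theta_new):
        # Back-transform from the fitted log-linear relation
        return 10 ** (intercept + slope * theta_new)

    print(f"log10(psi) = {intercept:.2f} + {slope:.2f} * theta")
    print(f"psi at theta = 0.12: {matric_potential(0.12):.0f} (units of the input psi)")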
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.
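The Monte Carlo core of the single-sensor method: sample source level and off-axis geometry, compute received SNR with a simplified passive sonar equation, and average a detector's detection-probability curve over the draws. The distributions, the logistic detector curve, and the spherical-spreading transmission loss are illustrative assumptions, not the study's inputs.

    import numpy as np

    rng = np.random.default_rng(7)

    def detection_probability(range_m, n=20_000, noise_db=60.0):
        sl = rng.normal(200, 5, n)                 # source level, dB re 1 uPa
        beam_loss = rng.uniform(0, 30, n)          # off-axis beam pattern loss, dB
        tl = 20 * np.log10(range_m)                # spherical spreading loss
        snr = sl - beam_loss - tl - noise_db       # simplified passive sonar equation
        p_det = 1 / (1 + np.exp(-(snr - 10) / 2))  # assumed detector characterization
        return p_det.mean()

    for r in (500, 1000, 2000, 4000):
        print(f"range {r:5d} m: mean P(detect a click) = {detection_probability(r):.2f}")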
NASA Astrophysics Data System (ADS)
Czerepicki, A.; Koniak, M.
2017-06-01
The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, which was built using battery operating characteristics obtained from experiment. This model was implemented in the form of a computer program using a database to store the battery characteristics. The battery aging process is a new, extended functionality of the model. The simulation algorithm uses real measurements of battery capacity as a function of the number of charge and discharge cycles. The simulation takes into account incomplete charge or discharge cycles, which are characteristic of electrically powered transport. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of the selected means of transport.
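A sketch of the aging bookkeeping described above: partial charge/discharge cycles are folded into an equivalent-full-cycle count, and capacity is interpolated from a measured capacity-versus-cycles characteristic. The data points and profile are illustrative.

    import numpy as np

    cycles_meas   = np.array([0, 200, 500, 1000, 1500])       # measured cycle counts
    capacity_meas = np.array([1.00, 0.97, 0.93, 0.87, 0.80])  # relative capacity

    def capacity_after(depths_of_discharge):
        efc = np.sum(depths_of_discharge)   # fold partial cycles into full equivalents
        return np.interp(efc, cycles_meas, capacity_meas)

    profile = np.random.default_rng(1).uniform(0.1, 0.6, 2000)  # DoD per trip
    print(f"equivalent full cycles: {profile.sum():.0f}, "
          f"remaining capacity: {capacity_after(profile):.2f}")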
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
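The multiple-imputation idea can be sketched as follows, with scikit-learn's EM-based GaussianMixture standing in for the study's latent class model; the data and settings are illustrative only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy data: three latent "temperament" groups in two observed scores
X = np.vstack([rng.normal(m, 0.5, (100, 2)) for m in ([0, 0], [2, 1], [0, 3])])

gm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(X)
post = gm.predict_proba(X)  # posterior class probabilities per subject

# multiple imputation: sample class labels from the posterior rather than
# taking the argmax, so downstream analyses reflect classification uncertainty
M = 20
imputations = np.stack([
    np.array([rng.choice(3, p=p) for p in post]) for _ in range(M)
])
print("share of subjects assigned differently across imputations:",
      (imputations != imputations[0]).any(axis=0).mean())
```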
Validation of a 20-year forecast of US childhood lead poisoning: Updated prospects for 2010.
Jacobs, David E; Nevin, Rick
2006-11-01
We forecast childhood lead poisoning and residential lead paint hazard prevalence for 1990-2010, based on a previously unvalidated model that combines national blood lead data with three different housing data sets. The housing data sets, which describe trends in housing demolition, rehabilitation, window replacement, and lead paint, are the American Housing Survey, the Residential Energy Consumption Survey, and the National Lead Paint Survey. Blood lead data are principally from the National Health and Nutrition Examination Survey. New data now make it possible to validate the midpoint of the forecast time period. For the year 2000, the model predicted 23.3 million pre-1960 housing units with lead paint hazards, compared to an empirical HUD estimate of 20.6 million units. Further, the model predicted 498,000 children with elevated blood lead levels (EBL) in 2000, compared to a CDC empirical estimate of 434,000. The model predictions were well within 95% confidence intervals of empirical estimates for both residential lead paint hazard and blood lead outcome measures. The model shows that window replacement explains a large part of the dramatic reduction in lead poisoning that occurred from 1990 to 2000. Here, the construction of the model is described and updated through 2010 using new data. Further declines in childhood lead poisoning are achievable, but the goal of eliminating children's blood lead levels ≥10 µg/dL by 2010 is unlikely to be achieved without additional action. A window replacement policy will yield multiple benefits of lead poisoning prevention, increased home energy efficiency, decreased power plant emissions, improved housing affordability, and other previously unrecognized benefits. Finally, combining housing and health data could be applied to forecasting other housing-related diseases and injuries.
Water Vapor Winds and Their Application to Climate Change Studies
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Lerner, Jeffrey A.
2000-01-01
The retrieval of satellite-derived winds and moisture from geostationary water vapor imagery has matured to the point where it can be applied to the study of longer-term climate change, which was previously not possible in data-sparse regions using conventional measurements or model analyses. In this paper, upper-tropospheric circulation features and moisture transport covering ENSO periods are presented and discussed. Precursors and other detectable interannual climate change signals are analyzed and compared to model-diagnosed features. Estimates of winds and humidity over data-rich regions are used to show the robustness of the data and its value over regions that have previously eluded measurement.
NASA Technical Reports Server (NTRS)
Markson, R.; Anderson, B.; Govaert, J.; Fairall, C. W.
1989-01-01
A novel corona-current instrument in use at NASA-KSC overcomes previous difficulties with wind sensitivity and a voltage-threshold 'deadband'. Mounting the corona needle at an elevated location reduces the influence of corona and electrode-layer space charge on the measured electric fields, making measurement of space-charge density possible. In conjunction with a space-charge compensation model, these features allow a more realistic estimation of cloud-base electric fields and of the potential for lightning strikes than was previously possible with ground-based sensors.
NASA Astrophysics Data System (ADS)
Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.
2018-01-01
The impact of air temperature on Maximum Precipitation (MP) estimation, through changes in the moisture holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California using a physically based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments, in addition to applying the ABCS + RHM method, for the two most severe maximized historical storm events, to investigate the sensitivity of basin-average precipitation over the ARW to air temperature rise. A monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise assumed to occur under future climate change conditions; air temperature was increased by those amounts uniformly on the entire lateral boundaries, again in addition to the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% by the middle of the 21st century and by 27.3% by the end of the 21st century relative to the historical period.
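The underlying moisture-holding argument can be illustrated with the Clausius-Clapeyron relation; this back-of-envelope sketch (using Bolton's 1980 saturation vapor pressure approximation and an assumed 10 °C reference temperature) is not the authors' regional atmospheric model:

```python
import numpy as np

def e_sat_hpa(t_c):
    # Bolton (1980) approximation to saturation vapor pressure (hPa)
    return 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))

t0 = 10.0  # assumed reference boundary temperature (deg C)
for dt in np.arange(0.0, 8.5, 0.5):
    gain = e_sat_hpa(t0 + dt) / e_sat_hpa(t0) - 1.0
    print(f"+{dt:3.1f} C: moisture holding capacity up {gain:6.1%}")
```

The roughly 7% per °C increase in saturation vapor pressure is the physical reason warming the lateral boundaries tends to raise the maximized precipitation.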
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
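A minimal numerical sketch of the profiling idea: for a Poisson model with stratum-specific background rates and a linear excess relative risk term, each stratum's background coefficient has a closed-form maximum given the risk parameter, so it can be eliminated from the likelihood. The data and model form below are simulated assumptions, not study data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, n_strata = 2000, 50
stratum = rng.integers(0, n_strata, n)
dose = rng.gamma(2.0, 1.0, n)
pt = rng.uniform(1.0, 10.0, n)               # person-time
alpha = rng.normal(-4.0, 0.3, n_strata)      # background log-rates
beta_true = 0.15
lam = pt * np.exp(alpha[stratum]) * (1 + beta_true * dose)
y = rng.poisson(lam)

def neg_profile_loglik(beta):
    r = 1 + beta * dose                       # linear relative rate model
    if np.any(r <= 0):
        return np.inf
    ll = y @ np.log(r)
    # profile out each stratum's background coefficient in closed form:
    # exp(alpha_s) = Y_s / sum(pt * r) maximizes the stratum's likelihood
    for s in range(n_strata):
        m = stratum == s
        ll -= y[m].sum() * np.log((pt[m] * r[m]).sum())
    return -ll

fit = minimize_scalar(neg_profile_loglik, bounds=(-0.5, 1.0), method="bounded")
print(f"beta_hat = {fit.x:.3f} (true {beta_true})")
```

As the abstract notes, this profiled ("conditional") fit reproduces the estimate from unconditional Poisson regression with explicit stratum terms while never estimating those terms.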
Rønjom, Marianne F; Brink, Carsten; Lorenzen, Ebbe L; Hegedüs, Laszlo; Johansen, Jørgen
2015-01-01
To examine the variation in risk estimates of radiation-induced hypothyroidism (HT) from our previously developed normal tissue complication probability (NTCP) model in patients with head and neck squamous cell carcinoma (HNSCC) in relation to variability in delineation of the thyroid gland. In a previous study developing an NTCP model for HT, the thyroid gland was delineated in 246 treatment plans of patients with HNSCC. Fifty of these plans were randomly chosen for re-delineation in a study of the intra- and inter-observer variability of thyroid volume, Dmean, and estimated risk of HT. Bland-Altman plots were used to assess the systematic (mean) and random [standard deviation (SD)] variability of the three parameters, and a method for displaying the spatial variation in delineation differences was developed. Intra-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm³ (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm³ (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences in NTCP values for radiation-induced HT due to intra- and inter-observer variation were insignificantly small, -0.4% (SD ± 6.0) and -0.7% (SD ± 4.8), respectively, but as the SDs show, for some patients the difference in estimated NTCP was large. For the entire study population, the variation in predicted risk of radiation-induced HT in head and neck cancer was small, and our NTCP model was robust against observer variation in delineation of the thyroid gland. For the individual patient, however, there may be large differences in estimated risk, which calls for precise delineation of the thyroid gland to obtain correct dose and NTCP estimates for optimized treatment planning.
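The Bland-Altman summaries used above reduce to a mean difference, its SD, and limits of agreement; a generic sketch with simulated delineation volumes (not study data) follows:

```python
import numpy as np

def bland_altman(a, b):
    """Systematic (mean) and random (SD) difference between paired
    measurements, plus 95% limits of agreement."""
    d = np.asarray(a) - np.asarray(b)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, sd_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

rng = np.random.default_rng(0)
vol_obs1 = rng.normal(12.0, 3.0, 50)             # thyroid volume, cm^3
vol_obs2 = vol_obs1 + rng.normal(0.3, 1.8, 50)   # simulated re-delineation
mean_d, sd_d, loa = bland_altman(vol_obs2, vol_obs1)
print(f"mean diff {mean_d:.2f} cm^3, SD {sd_d:.2f}, "
      f"LoA [{loa[0]:.2f}, {loa[1]:.2f}]")
```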
ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn
It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free, whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on the effect of limited FOV or instrument sensitivity, our calculation shows that measurement error alone can significantly influence the results of force-freeness estimates, because measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of the formula for estimating force-freeness, can result in wrong judgments: a truly force-free field may be mistakenly estimated as non-force-free, and a truly non-force-free field may be estimated as force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and suggests that the true photospheric magnetic field may be further from force-free than it currently appears to be.
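A sketch of the usual net-Lorentz-force criterion (after Low 1985) and of the noise effect described above is given below. The field arrays and noise levels are synthetic, chosen only to show how horizontal-field errors roughly ten times the vertical ones can push a non-force-free field toward apparent force-freeness:

```python
import numpy as np

def force_ratios(bx, by, bz):
    """Net Lorentz force from the surface Maxwell stress, normalized by
    the total magnetic pressure force; all three ratios << 1 is the usual
    force-free criterion."""
    fx = -(bx * bz).sum()
    fy = -(by * bz).sum()
    fz = -0.5 * (bz**2 - bx**2 - by**2).sum()
    fp = 0.5 * (bx**2 + by**2 + bz**2).sum()
    return abs(fx) / fp, abs(fy) / fp, abs(fz) / fp

rng = np.random.default_rng(0)
n = 256
# toy non-force-free field: vertical energy exceeds horizontal energy
bz = rng.normal(0.0, 200.0, (n, n))
bx, by = rng.normal(0.0, 100.0, (2, n, n))
print("noise-free ratios:", [f"{r:.3f}" for r in force_ratios(bx, by, bz)])

# measurement noise: horizontal errors ~10x larger than vertical errors
bx_n = bx + rng.normal(0.0, 100.0, (n, n))
by_n = by + rng.normal(0.0, 100.0, (n, n))
bz_n = bz + rng.normal(0.0, 10.0, (n, n))
print("noisy ratios:     ", [f"{r:.3f}" for r in force_ratios(bx_n, by_n, bz_n)])
```

In this toy case the noise inflates the horizontal-field energy, driving the vertical force ratio toward zero: the non-force-free field is misjudged as force-free, exactly the failure mode the abstract warns about.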
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled parameter values to be estimated by sheer numerical power, the so-called 'brute force' approach, allowing high-speed simultaneous estimation of large numbers of parameter values. However, these advances have not been fully exploited to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In this method, objective functions combining symbolic computation techniques are formulated. First, the authors use a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the method for improving parameter accuracy is illustrated with two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the method are discussed in terms of the possible power of 'Bruno force' for opening a new horizon in parameter estimation.
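The flavor of differential elimination can be shown on a toy cascade with one hidden state; this hand-rolled SymPy sketch is only an illustration of the idea, not the Gröbner-basis machinery of the paper:

```python
import sympy as sp

t, k1, k2 = sp.symbols("t k1 k2", positive=True)
x = sp.Function("x")(t)   # hidden state, not measured
y = sp.Function("y")(t)   # observed output

# toy cascade model: x decays into y, only y is measured
rhs_x = -k1 * x
rhs_y = k1 * x - k2 * y

# eliminate the hidden state: solve the output equation for x,
# differentiate the output equation once, and substitute x away
x_expr = sp.solve(sp.Eq(y.diff(t), rhs_y), x)[0]
ypp = (k1 * rhs_x - k2 * y.diff(t)).subs(x, x_expr)  # y'' without x
print(sp.expand(ypp))
# -k1*k2*y(t) - k1*Derivative(y(t), t) - k2*Derivative(y(t), t)
# -> only k1 + k2 and k1*k2 appear: the identifiable combinations
```

The resulting input-output equation involves only the observed variable, so its coefficients expose which parameter combinations the data can actually determine.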
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. It uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine's deviations from nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This directly affects the overall performance seeking control scheme, an effect exacerbated by the open-loop character of the algorithm. The effects of unknown measurement biases on the estimation algorithm are evaluated, allowing identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An observability study reveals an equivalence between the biases and the engine deviation parameters; it therefore cannot be determined whether the estimated engine deviation parameters represent the actual engine deviations or simply reflect the measurement biases. A new algorithm, based on the engine's steady-state optimization model, is proposed and tested with flight data. Compared with previous Kalman filter schemes based on local engine dynamic models, the new algorithm is easier to design and tune, and it reduces the computational burden of the onboard computer.
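The bias/deviation-parameter equivalence has a simple linear-algebra illustration: with unknown per-sensor biases, any deviation estimate can be matched by compensating biases, so the two are not separately observable from the measurements alone. The matrix and numbers below are arbitrary assumptions, not F100 data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 6, 3
H = rng.normal(size=(m, k))   # sensitivity of m measurements to k deviations

p_true = np.array([0.02, -0.01, 0.03])   # engine deviation parameters
b_true = np.zeros(m)                     # suppose there is no sensor bias
y = H @ p_true + b_true                  # steady-state measurements

# an alternative explanation: different deviations plus compensating biases
p_alt = p_true + np.array([0.05, 0.0, -0.02])
b_alt = y - H @ p_alt
print(np.allclose(H @ p_true + b_true, H @ p_alt + b_alt))  # True:
# the deviation parameters and the measurement biases produce identical
# measurements, so they cannot be distinguished without extra information
```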